CN113781350B - Image processing method, image processing apparatus, electronic device, and storage medium


Info

Publication number
CN113781350B
Authority
CN
China
Prior art keywords
pixel
image
color
value
full
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111084966.4A
Other languages
Chinese (zh)
Other versions
CN113781350A (en)
Inventor
邓宇帆
李小涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111084966.4A priority Critical patent/CN113781350B/en
Publication of CN113781350A publication Critical patent/CN113781350A/en
Application granted granted Critical
Publication of CN113781350B publication Critical patent/CN113781350B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a storage medium. The image processing method comprises the following steps: when the color of the current pixel in the image to be processed is not the target color, determining the total pixel variance value of a local image centered on the current pixel; when the total pixel variance value is greater than or equal to a preset threshold and the texture direction of the local image is a first direction, determining the pixel value of the target color at the position of the current pixel according to a preset neural network; taking each pixel in the image to be processed in turn as the current pixel and processing it to obtain a first full-size image having the target color; and determining a second full-size image having a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color. The image processing method, the apparatus, the electronic device, and the storage medium provide a foundation for multi-channel color reproduction.

Description

Image processing method, image processing apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
In the related art, a camera includes a filter array composed of a plurality of optical filters of a plurality of colors and a photosensitive pixel array composed of a plurality of photosensitive pixels, with one optical filter corresponding to each photosensitive pixel. External light can reach a photosensitive pixel through its optical filter, and the photosensitive pixel converts the received optical signal into an electrical signal and outputs it; the output electrical signals are then processed by a series of algorithms to obtain a target image. However, filter arrays come in various arrangements, for example the Bayer array, the Quad Bayer array, the RGBW array, and so on, and the processing algorithms corresponding to filter arrays of different arrangements may differ. It is therefore necessary to design corresponding processing algorithms for filter arrays of different arrangements.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a storage medium.
The image processing method of the embodiment of the application comprises the following steps: when the color of the current pixel in the image to be processed is not the target color, determining the total pixel variance value of a local image centered on the current pixel; when the total pixel variance value is greater than or equal to a preset threshold and the texture direction of the local image is a first direction, determining the pixel value of the target color at the position of the current pixel according to a preset neural network; taking each pixel in the image to be processed in turn as the current pixel and processing it to obtain a first full-size image having the target color; and determining a second full-size image having a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
The image processing device of the embodiment of the application comprises a first determining module, a second determining module, a processing module and a third determining module. The first determining module is used for determining a pixel variance total value of a local image taking the current pixel as a center when the color of the current pixel in the image to be processed is not a target color. The second determining module is configured to determine, according to a preset neural network, a pixel value of the target color at a position where the current pixel is located when the total pixel variance value is greater than or equal to a preset threshold and a texture direction of the local image is a first direction. The processing module is used for respectively taking each pixel in the image to be processed as the current pixel and processing the current pixel to obtain a first full-size image with the target color. And the third determining module is used for determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
An electronic device of an embodiment of the application includes one or more processors and memory. The memory stores a computer program. The steps of the image processing method described in the above embodiment are implemented when the computer program is executed by the processor.
The computer-readable storage medium according to an embodiment of the present application has a computer program stored thereon, and is characterized in that the steps of the image processing method according to the above embodiment are realized when the program is executed by a processor.
According to the image processing method, the image processing apparatus, the electronic device, and the storage medium, pixel values of the target color are interpolated at the positions of pixels that are not the target color, so that a first full-size image having the target color can be obtained, and a second full-size image having a preset color can then be obtained according to the first full-size image and the image to be processed. This realizes a demosaicing effect, improves the resolving power of the image of each color channel in the full-size mode, and provides a foundation for multi-channel color restoration.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flow chart of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an electronic device according to an embodiment of the application;
FIG. 4 is a schematic diagram of a filter set of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a preset direction of an image processing method according to an embodiment of the present application;
fig. 6 to 9 are flowcharts of an image processing method according to an embodiment of the present application;
fig. 10 is a schematic view of a scene of an image processing method according to an embodiment of the present application;
fig. 11 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 12 is a schematic view of a scene of an image processing method according to an embodiment of the present application;
fig. 13 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 14 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 15 is a schematic view of a pixel block of an image to be processed of an image processing method of an embodiment of the present application;
fig. 16 is a schematic view of a scene of an image processing method according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a convolutional neural network of an image processing method of an embodiment of the present application;
fig. 18 is a flowchart of an image processing method according to an embodiment of the present application;
Fig. 19 is a schematic view of a scene of an image processing method according to an embodiment of the present application;
fig. 20 is a flowchart of an image processing method according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, and are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present application and are not to be construed as limiting the present application.
In the description of embodiments of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, the meaning of "plurality" is two or more, unless explicitly defined otherwise.
Referring to fig. 1 to 3, an image processing method according to an embodiment of the present application includes:
011: when the color of the current pixel in the image to be processed is not the target color, determining a pixel variance total value of a local image taking the current pixel as a center;
013: when the total pixel variance value is greater than or equal to a preset threshold value and the texture direction of the local image is the first direction, determining the pixel value of the target color of the position of the current pixel according to a preset neural network;
015: respectively taking each pixel in the image to be processed as a current pixel and processing to obtain a first full-size image with a target color;
017: and determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
The image processing method of the embodiment of the present application may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the image processing apparatus 100 includes a first determination module 11, a second determination module 13, a processing module 15, and a third determination module 17. The first determining module 11 is configured to determine the total pixel variance value of a local image centered on the current pixel when the color of the current pixel in the image to be processed is not the target color. The second determining module 13 is configured to determine, according to a preset neural network, the pixel value of the target color at the position of the current pixel when the total pixel variance value is greater than or equal to a preset threshold and the texture direction of the partial image is the first direction. The processing module 15 is configured to take each pixel in the image to be processed in turn as the current pixel and process it to obtain a first full-size image having the target color. The third determining module 17 is configured to determine a second full-size image having a preset color according to the first full-size image and the image to be processed, where the preset color is different from the target color.
The image processing method of the embodiment of the present application may be implemented by the electronic apparatus 1000 of the embodiment of the present application. In particular, the electronic device 1000 of an embodiment of the application includes one or more processors 200 and memory 300. The memory 300 stores a computer program. When the computer program is executed by the processor 200, the steps 011, 013, 015, and 017 of the image processing method according to the embodiment of the present application are implemented.
With the above image processing method, image processing apparatus 100, and electronic device 1000, by interpolating the pixel value of the target color at the positions of pixels that are not the target color, a first full-size image having the target color can be obtained, and a second full-size image having a preset color can then be obtained according to the first full-size image and the image to be processed, thereby realizing a demosaicing effect, improving the resolving power of the image of each color channel in the full-size mode, and providing a foundation for multi-channel color restoration.
In particular, in some embodiments, an electronic device includes a housing and a camera module combined with the housing. The camera module includes an image sensor, which includes a photosensitive pixel array and a filter array. The photosensitive pixel array includes a plurality of photosensitive pixels. The filter array comprises a plurality of filter units, each filter unit comprises a plurality of filter sets, and each filter set comprises a first filter corresponding to one color in a first color space and a second filter corresponding to one color in a second color space. Each first filter covers one photosensitive pixel and each second filter covers one photosensitive pixel; a photosensitive pixel receives the external light passing through its corresponding filter and generates a corresponding electrical signal, and the image to be processed can be determined according to the electrical signals generated by all the photosensitive pixels. The first color space may include red (R), green (G), and blue (B). The second color space may include magenta (M), yellow (Y), and cyan (C).
Referring to fig. 4, in some embodiments, a filter unit 400 includes a first filter set 402, two second filter sets 404, and a third filter set 406. In the first filter set 402, the first filter is a red filter 4022 and the second filter is a magenta filter 4024; in the second filter set 404, the first filter is a green filter 4042 and the second filter is a yellow filter 4044; in the third filter set 406, the first filter is a blue filter 4062 and the second filter is a cyan filter 4064. That is, in one filter unit 400 the G filters 4042 and the Y filters 4044 each account for 25%, and the R filters 4022, B filters 4062, C filters 4064, and M filters 4024 each account for 12.5%. The four filter sets of the filter unit 400 are arranged in a 2×2 pattern, wherein the two second filter sets 404 are distributed along a third diagonal direction E1, and the first filter set 402 and the third filter set 406 are distributed along a fourth diagonal direction E2. Each filter set includes two first filters and two second filters arranged in a 2×2 pattern, wherein the two first filters are arranged along a first diagonal direction F1 and the two second filters are arranged along a second diagonal direction F2. The first diagonal direction F1 and the second diagonal direction F2 are used only to explain that the arrangement directions of the first filters and the second filters differ, and do not refer to fixed diagonal directions. Similarly, the third diagonal direction E1 and the fourth diagonal direction E2 are used only to illustrate that the two second filter sets 404 are not arranged in the same direction as the first filter set 402 and the third filter set 406, and do not refer to fixed diagonal directions. The first diagonal direction F1 and the third diagonal direction E1 may refer to 45° forward-slanted directions, and the second diagonal direction F2 and the fourth diagonal direction E2 may refer to 45° reverse-slanted directions.
The image to be processed includes a plurality of pixels, each pixel having a color identical to a color of a corresponding filter, the color of the filter including a plurality of colors, such that the image to be processed includes the plurality of color pixels. In some embodiments, the image to be processed includes R pixels, G pixels, B pixels, M pixels, Y pixels, and C pixels.
The partial image is a part of the image to be processed. The current pixel is located at the center of the partial image. The size of the partial image includes, but is not limited to, 5×5, 7×7, 9×9, 11×11, 13×13, etc. In one example, the partial image is 7×7 pixels in size, which can achieve a better image processing result.
The pixel variance total value may be determined from pixel values of pixels in the partial image.
The texture direction of the partial image may be the preset direction in which the pixel gradient is smallest. Referring to fig. 5, in some embodiments, one direction is defined every 22.5 degrees within 0 to 180 degrees, denoted E, AD, A, AU, N, DD, D, DU, W, and S respectively, where E and W refer to the horizontal direction and N and S refer to the vertical direction. That is, the preset directions include the horizontal direction (E, W), the vertical direction (N, S), the diagonal 22.5° direction (AD), the anti-diagonal 22.5° direction (DU), the diagonal 45° direction (A), the anti-diagonal 45° direction (D), the diagonal 67.5° direction (AU), and the anti-diagonal 67.5° direction (DD). The first direction may include the diagonal 22.5° direction (AD), the anti-diagonal 22.5° direction (DU), the diagonal 45° direction (A), the anti-diagonal 45° direction (D), the diagonal 67.5° direction (AU), and the anti-diagonal 67.5° direction (DD). It will be appreciated that in the above embodiment the preset directions include 8 directions; in other embodiments the preset directions may be divided into 16 directions or more, which is not limited herein.
The preset neural network may be a neural network obtained by training in advance. The target color may be any of a variety of colors of the image to be processed. In some embodiments, the target color is the one with the highest ratio in the image to be processed, such as G or Y.
The preset color may be other colors than the target color among the colors of the image to be processed. In one example, the image to be processed includes R, G, B, C, M, Y, and when the target color is G, the preset color includes R, B, C, M, Y.
It should be noted that there are a plurality of preset colors, and the second full-size images determined in step 017 are correspondingly of a plurality of types: each preset color corresponds to one type of second full-size image, and only one preset color is present in each second full-size image. The sizes of the first full-size image and the second full-size images are the same as the size of the image to be processed, and their numbers of pixels are the same as the number of pixels of the image to be processed.
In some embodiments, the image to be processed includes a plurality of pixel blocks, each pixel block includes a plurality of pixel units, each pixel unit includes two first pixels corresponding to one color in a first color space and two second pixels corresponding to one color in a second color space, the two first pixels are arranged along a first diagonal direction, the two second pixels are arranged along a second diagonal direction, and the target color is one color in the first color space or the second color space.
It can be understood that, compared with the image to be processed only including the color channel corresponding to the first color space in the related art, the image to be processed of the present application includes both the color channel corresponding to the first color space and the color channel corresponding to the second color space, and the increase of the number of the color channels means that the white balance reference information is doubled, which greatly improves the accuracy of gray point detection and light source analysis, and is beneficial to realizing more accurate white balance judgment or other image processing functions.
Specifically, each pixel block may include R, G, B, M, Y, and C pixels, wherein the ratio of G and Y pixels is 25% each, and R, B, C, M pixels are 12.5% each. The first diagonal direction and the second diagonal direction are only used to illustrate that the arrangement directions of the first pixel and the second pixel are not identical, and are not meant to be fixed diagonal directions. The first diagonal direction may be a 45 ° oblique direction and the second diagonal direction may be a 45 ° reverse oblique direction. In other embodiments, the pixels in the pixel unit may be pixels of other colors and/or other arrangements, which are not limited herein.
It should be noted that, in the embodiment of the present application, the color of the pixel refers to the color of the filter corresponding to the pixel, the color of the first pixel is the color of the first filter corresponding to the first pixel, and the color of the second pixel is the color of the second filter corresponding to the second pixel.
Referring to fig. 6, in some embodiments, the partial image includes a plurality of color channels corresponding to a first color space and a second color space, step 011 includes:
0111: determining a pixel variance value of each color channel in the partial image according to the pixel average value of each color channel in the partial image and the pixel value of each color channel;
0113: the sum of the pixel variance values of each color channel is calculated as the total pixel variance value of the partial image.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the first determination module 11 includes a first determination unit and a first calculation unit. The first determining unit is used for determining the pixel variance value of each color channel in the partial image according to the pixel average value of each color channel and the pixel value of each color channel in the partial image. The first calculation unit is configured to calculate a sum of pixel variance values of each color channel as a total pixel variance value of the partial image.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to determine a pixel variance value of each color channel in the partial image according to the pixel mean value of each color channel and the pixel value of each color channel in the partial image, and calculate a sum value of the pixel variance values of each color channel as a total pixel variance value of the partial image.
In this way, the total pixel variance value of the partial image can be calculated from the pixel variance values of the respective color channels in the partial image.
Specifically, the pixel variance value σ_c can be expressed by the following formula: σ_c = (1/N) × Σᵢ (pixel_value_c(i) − u_c)², where c represents a color channel, N represents the total number of pixels of the partial image, pixel_value_c(i) represents the pixel value of the i-th pixel of color channel c in the partial image, and u_c represents the mean value of the pixels in color channel c of the partial image.
Further, the pixel variance total value σ can be expressed by the following formula: σ = Σ_c σ_c. In some embodiments, when the total pixel variance value is smaller than the preset threshold, the local image centered on the current pixel is determined to be a flat region, and the pixel value of the target color at the position of the current pixel is interpolated directly according to the color-ratio constancy relationship between the current pixel and the target-color pixels; when the total pixel variance value is greater than or equal to the preset threshold, the local image centered on the current pixel is determined to be a texture region, the texture direction of the local image is then calculated, and a corresponding method is selected according to the texture direction to interpolate the pixel value of the target color at the position of the current pixel. The interpolation modes of the pixel value of the target color at the current pixel differ between the flat region and the texture region.
In some embodiments, the preset threshold may be dynamically set according to the screen brightness, for example, 10 may be set when the screen brightness is 100, 15 may be set when the screen brightness is 200, and so on.
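For illustration only, the following minimal Python sketch shows one reading of steps 0111 and 0113; the array layout, the color-label representation, the function name, and the per-channel normalization are assumptions of this example, not part of the claimed method.

```python
import numpy as np

def pixel_variance_total(local, labels, channels):
    # local: 7x7 local image centered on the current pixel (float array)
    # labels: 7x7 array of color labels, e.g. 'R', 'G', 'B', 'C', 'M', 'Y'
    # channels: the color channels present in the local image
    total = 0.0
    for c in channels:
        vals = local[labels == c]                     # pixels of channel c
        total += ((vals - vals.mean()) ** 2).mean()   # variance sigma_c of channel c
    return total                                      # sigma = sum of all sigma_c

# The local image is treated as a texture region when the returned value is
# greater than or equal to the preset threshold (which may itself be tuned
# with the screen brightness), and as a flat region otherwise.
```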
Referring to fig. 7, in some embodiments, before step 015, the image processing method further includes:
019: and when the total pixel variance value is smaller than a preset threshold value, determining the pixel value of the target color of the current pixel according to all pixels with the target color and all pixels with the same color as the current pixel in the partial image.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the image processing apparatus 100 further includes a fourth determination module. And the fourth determining module is used for determining the pixel value of the target color of the position of the current pixel according to all pixels with the color being the target color in the partial image and all pixels with the same color as the current pixel when the total pixel variance value is smaller than a preset threshold value.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to determine, when the total pixel variance value is smaller than a preset threshold, a pixel value of a target color at a position of a current pixel according to all pixels with a target color and all pixels with the same color as the current pixel in the partial image.
In this way, when the partial image is characterized as flat, the pixel value of the target color at the position of the current pixel can be determined.
Specifically, a local image is characterized as a flat region when the pixel variance total value is smaller than a preset threshold value.
In one example, the current pixel is a red pixel, and the target color is green, and when the total pixel variance value is smaller than a preset threshold value, determining a green pixel value of the current pixel according to all green pixels and all red pixels in the partial image.
Referring to fig. 8, in some embodiments, step 019 includes:
0191: determining a first color ratio constant according to a first pixel average value of all pixels with the color being a target color in the partial image and a second pixel average value of all pixels with the same color as the current pixel in the partial image;
0193: and determining the pixel value of the target color of the position of the current pixel according to the first color specific constant and the pixel value of the current pixel.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the fourth determination module includes a second determination unit and a third determination unit. The second determining unit is used for determining a first color ratio constant according to a first pixel average value of all pixels with the color being the target color in the partial image and a second pixel average value of all pixels with the same color as the current pixel in the partial image. The third determining unit is used for determining the pixel value of the target color of the position of the current pixel according to the first color ratio constant and the pixel value of the current pixel.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to determine a first color ratio constant according to a first pixel average value of all pixels with a color being a target color in the partial image and a second pixel average value of all pixels with a color identical to a color of a current pixel in the partial image, and determine a pixel value of the target color at a position where the current pixel is located according to the first color ratio constant and the pixel value of the current pixel.
Thus, when the local image is characterized as a flat area, the pixel value of the target color of the position of the current pixel can be directly determined according to the color ratio constancy relationship of the local pixel.
Specifically, the first pixel mean is the pixel mean of all pixels in the partial image whose color is the target color, and the second pixel mean is the pixel mean of all pixels in the partial image whose color is the same as that of the current pixel. The first color ratio constant is the ratio of the first pixel mean to the second pixel mean. Further, the product of the first color ratio constant and the pixel value of the current pixel may be used as the pixel value of the target color at the position of the current pixel.
In one example, where the target color is green, the current pixel is a red pixel, and the coordinates of the current pixel are (5, 5), the first color ratio constant may be expressed by the following equation: ratio_RG = mean_G / mean_R, where mean_G represents the first pixel mean of all green pixels in the partial image and mean_R represents the second pixel mean of all red pixels in the partial image, so that the green pixel value at coordinates (5, 5) is: G(5,5) = R(5,5) × ratio_RG, where R(5,5) represents the pixel value of the current pixel.
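A corresponding sketch of steps 0191 and 0193, under the same assumed inputs as the variance sketch above:

```python
def flat_region_interpolate(local, labels, center, target):
    # local/labels: 7x7 numpy arrays as above; center: (row, col) of the
    # current pixel; target: the target color label, e.g. 'G'
    cur_color = labels[center]
    mean_target = local[labels == target].mean()    # first pixel mean (mean_G)
    mean_cur = local[labels == cur_color].mean()    # second pixel mean (mean_R)
    ratio = mean_target / mean_cur                  # first color ratio constant
    return local[center] * ratio                    # G(5,5) = R(5,5) x ratio_RG
```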
Referring to fig. 9, in some embodiments, before step 015, the image processing method further includes:
021: and when the total pixel variance value is greater than or equal to a preset threshold value and the texture direction of the partial image is the second direction, determining the pixel value of the target color of the current pixel according to the pixel with the color in the second direction as the target color.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the image processing apparatus 100 further includes a fifth determination module. And the fifth determining module is used for determining the pixel value of the target color of the current pixel according to the pixel with the color in the second direction as the target color when the total pixel variance value is larger than or equal to the preset threshold value and the texture direction of the local image is in the second direction.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to determine, when the total pixel variance value is greater than or equal to a preset threshold and the texture direction of the partial image is the second direction, a pixel value of a target color at a position where the current pixel is located according to a pixel of the target color in the second direction.
In this way, when the partial image is characterized as a texture region and the texture direction is the second direction, the pixel value of the target color at the position of the current pixel can be determined.
Specifically, the local image is characterized as a texture region when the pixel variance total value is greater than or equal to a preset threshold value. The second direction may be understood as other directions than the first direction among the preset directions.
The technical solution and advantageous effects of the present application will be described below taking as an example preset directions that include the horizontal direction (E, W), the vertical direction (N, S), the diagonal 22.5° direction (AD), the anti-diagonal 22.5° direction (DU), the diagonal 45° direction (A), the anti-diagonal 45° direction (D), the diagonal 67.5° direction (AU), and the anti-diagonal 67.5° direction (DD); a first direction that includes the diagonal 22.5° direction (AD), the anti-diagonal 22.5° direction (DU), the diagonal 45° direction (A), the anti-diagonal 45° direction (D), the diagonal 67.5° direction (AU), and the anti-diagonal 67.5° direction (DD); and a second direction that includes the horizontal direction and the vertical direction. It will be appreciated that in other embodiments the second direction may be other preset directions, such as the 45° diagonal and 45° anti-diagonal directions, which are not limited herein.
It can be understood that when the local image is characterized as a texture region and the texture direction is the horizontal or vertical direction, interpolating the pixel value of the target color at the current pixel position with the preset neural network may produce an undesirable result; the pixel value of the target color at the current pixel position can therefore be determined according to step 021, which yields a better interpolation effect.
In one example, when the current pixel is a red pixel, the target color is green, and the texture direction of the partial image is the horizontal direction in the second direction, when the total pixel variance value is greater than or equal to a preset threshold value, determining a green pixel value of the current pixel according to the green pixel in the horizontal direction in the partial image.
Referring to fig. 10, taking the second direction as the horizontal direction or the vertical direction as an example, it is to be understood that, for a current pixel whose color is not the target color, a pixel value of the target color at the position of the current pixel may be determined by selecting a corresponding method of the above steps 013, 019 and 021 according to the total pixel variance value of the local image where the current pixel is located and the texture direction of the local image.
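The selection logic of fig. 10 can be summarized by the following hedged sketch; it assumes the helper functions sketched alongside steps 011, 019, 021, and 023/025 elsewhere in this description, and is not a definitive implementation.

```python
def interpolate_target(img, is_target, third_full_size, local, labels,
                       center, coord, target, threshold, steps):
    # Dispatch for one pixel whose color is not the target color.
    channels = set(labels.ravel().tolist())
    if pixel_variance_total(local, labels, channels) < threshold:
        return flat_region_interpolate(local, labels, center, target)  # step 019
    direction = texture_direction(local, steps)        # smallest-gradient direction
    if direction in ('E/W', 'N/S'):                    # second direction
        return interpolate_second_direction(img, is_target, *coord,    # step 021
                                            horizontal=(direction == 'E/W'))
    return third_full_size[coord]                      # first direction: step 013
```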
Referring to fig. 11, in some embodiments, step 021 comprises:
0211: the current pixel is taken as a center, and two pixels with the color closest to the current pixel in the second direction as a target color are determined;
0213: and taking the average value of the pixel values of the two pixels as the pixel value of the target color of the position where the current pixel is located.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the fifth determination module includes a fourth determination unit and a second calculation unit. The fourth determination unit is configured to determine, with the current pixel as a center, two pixels whose colors closest to the current pixel in the second direction are target colors. The second calculation unit is used for taking the average value of the pixel values of the two pixels as the pixel value of the target color of the position where the current pixel is located.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to determine, with the current pixel as a center, two pixels having a color closest to the current pixel in the second direction as a target color, and determine an average value of pixel values of the two pixels as a pixel value of the target color at a position where the current pixel is located.
Thus, when the local image is characterized as a texture region and the texture direction is the second direction, the pixel value of the target color at the position where the current pixel is located can be determined according to the two pixels of which the color closest to the current pixel in the second direction is the target color.
Referring to fig. 12, in one example, the target color is green, the texture direction of the partial image is the horizontal direction (E, W) in the second direction, and the coordinates of the current pixel are (5, 5). With the current pixel as the center, the two green pixels closest to the current pixel in the horizontal direction are determined to be G(5,3) and G(5,7), respectively; the green pixel value at coordinates (5, 5) can then be expressed as: G(5,5) = (G(5,3) + G(5,7)) / 2.
With continued reference to fig. 12, in another example, the target color is green, the texture direction of the partial image is the vertical direction (N, S) in the second direction, and the coordinates of the current pixel are (5, 5). With the current pixel as the center, the two green pixels closest to the current pixel in the vertical direction are determined to be G(3,5) and G(7,5), respectively; the green pixel value at coordinates (5, 5) can then be expressed as: G(5,5) = (G(3,5) + G(7,5)) / 2.
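A minimal sketch of steps 0211 and 0213 for the horizontal and vertical cases; border handling is omitted and it is assumed that a target-color pixel exists on both sides of the current pixel:

```python
def interpolate_second_direction(img, is_target, r, c, horizontal=True):
    # img: full mosaic image; is_target: boolean map of target-color pixels
    dr, dc = (0, 1) if horizontal else (1, 0)
    vals = []
    for sign in (-1, 1):                       # walk outward on both sides
        k = 1
        while not is_target[r + sign * k * dr, c + sign * k * dc]:
            k += 1                             # nearest target-color pixel
        vals.append(img[r + sign * k * dr, c + sign * k * dc])
    return (vals[0] + vals[1]) / 2             # G(5,5) = (G(5,3) + G(5,7)) / 2
```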
Referring to fig. 13, in some embodiments, step 013 comprises:
0131: determining a third full-size image with a target color according to the image to be processed and a preset neural network;
0133: and determining the pixel value of the target color of the position of the current pixel according to the third full-size image.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the second determination module 13 includes a fifth determination unit and a sixth determination unit. The fifth determining unit is used for determining a third full-size image with a target color according to the image to be processed and the preset neural network. The sixth determining unit is used for determining the pixel value of the target color of the position of the current pixel according to the third full-size image.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to determine a third full size image with a target color according to the image to be processed and the preset neural network, and determine a pixel value of the target color of the current pixel according to the third full size image.
Therefore, when the total pixel variance value is larger than or equal to the preset threshold value and the texture direction of the local image is the first direction, a preset neural network is adopted to determine the pixel value of the target color of the position where the current pixel is located, so that a good effect can be obtained.
Specifically, the size of the third full-size image is the same as the size of the image to be processed, and the number of pixels of the third full-size image is the same as the number of pixels of the image to be processed. In step 0133, the pixel value of the pixel in the third full-size image having the same coordinates as the current pixel is used as the pixel value of the target color at the position of the current pixel. It will be appreciated that although the color of each pixel in the third full-size image is the target color, the third full-size image cannot be used directly as the first full-size image: the pixel value of the pixel in the third full-size image having the same coordinates as the current pixel can be used as the target-color pixel value at the current pixel only when the total pixel variance value of the partial image corresponding to the current pixel is greater than or equal to the preset threshold and the texture direction is the first direction.
In one example, if the coordinates of the current pixel are (5, 5), and the total pixel variance value of the local image corresponding to the current pixel is equal to the preset threshold, and the texture direction of the local image corresponding to the current pixel is the first direction, the pixel value of the pixel with the coordinates of (5, 5) in the third full-size image is taken as the pixel value of the target color of the position where the current pixel is located.
Referring to fig. 14, in some embodiments, the image to be processed includes a plurality of preset windows, each preset window includes a plurality of pixel blocks, and step 0131 includes:
01311: merging pixels at the same position in each pixel block in a preset window to serve as an input subgraph;
01313: and determining a third full-size image with the target color according to the input subgraph and the preset neural network.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the fifth determination unit includes a generation subunit and a determination subunit. The generating subunit is used for merging pixels at the same position in each pixel block in the preset window to serve as an input subgraph. The determining subunit is used for determining a third full-size image with the target color according to the input subgraph and the preset neural network.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to combine pixels at the same position in each pixel block within the preset window as an input sub-image, and determine a third full-size image having the target color according to the input sub-image and the preset neural network.
As such, a third full size image having the target color is determined based on the preset neural network.
Specifically, the size of the preset window is an integer multiple of the size of the pixel block; for example, when the size of the pixel block is 4×4, the size of the preset window may be 8×8, 16×16, 32×32, 64×64, and so on. It can be appreciated that when the size of the preset window is 8×8, one preset window includes 4 pixel blocks; when the size of the preset window is 16×16, one preset window includes 16 pixel blocks; when the size of the preset window is 32×32, one preset window includes 64 pixel blocks; when the size of the preset window is 64×64, one preset window includes 256 pixel blocks. In some embodiments, the size of the preset window is not less than 32×32, so that a better operation result can be ensured.
An input subgraph can be understood as the collection of co-located pixels of the pixel blocks within a preset window. Since each pixel block includes a plurality of pixels, a plurality of input subgraphs may be generated from one preset window. The number of input subgraphs is the same as the number of pixels in one pixel block. The number of pixels in one input subgraph is the same as the number of pixel blocks in the preset window. All pixels of the same input subgraph have the same color. All input subgraphs corresponding to one preset window are input into the preset neural network to obtain the part of the third full-size image corresponding to that preset window; performing this processing on the input subgraphs of every preset window of the image to be processed yields the complete third full-size image corresponding to the image to be processed.
Referring to fig. 15 and 16, the following describes the technical solution of the above embodiment in detail, taking the size of the pixel block as 4*4 and the size of the preset window as 12×12 as an example.
Fig. 15 is a schematic view of a pixel block 500. In the example of fig. 15, one pixel block 500 includes one first pixel unit 502, two second pixel units 504, and one third pixel unit 506. In the first pixel unit 502, the first pixel is a red pixel and the second pixel is a magenta pixel; in the second pixel unit 504, the first pixel is a green pixel and the second pixel is a yellow pixel; in the third pixel unit 506, the first pixel is a blue pixel and the second pixel is a cyan pixel. The pixel block includes 16 positions; according to these positions, the red pixels may be denoted R2 and R5, the magenta pixels M1 and M6, the green pixels G4, G7, G10 and G13, the yellow pixels Y3, Y8, Y9 and Y14, the blue pixels B12 and B15, and the cyan pixels C11 and C16.
Fig. 16 is a schematic view of a scene in which input subgraphs are generated from preset windows, one preset window comprising 9 blocks of pixels in the example of fig. 16. The pixels at the same position in each pixel block are gathered into one input subgraph, and 16 input subgraphs can be obtained. Each input sub-image comprises 9 pixels of the same color. Further, the 16 input subgraphs are used as input images to be input into a preset neural network, so that partial third full-size images corresponding to the preset window can be obtained.
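The gathering of fig. 16 amounts to a strided sub-sampling of the preset window. A minimal sketch under assumed inputs (the 12×12 toy window below is illustrative only):

```python
import numpy as np

def extract_input_subgraphs(window, block=4):
    # window: one preset window of the mosaic image, whose sides are
    # integer multiples of the 4x4 pixel block
    subgraphs = [window[r::block, c::block]        # same position of every block
                 for r in range(block) for c in range(block)]
    return np.stack(subgraphs)                     # (16, H/4, W/4)

window = np.arange(144, dtype=np.float32).reshape(12, 12)
subs = extract_input_subgraphs(window)
assert subs.shape == (16, 3, 3)    # 16 subgraphs of 9 same-color pixels each
```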
Referring to fig. 17, in some embodiments, the preset neural network includes a convolutional neural network, the convolutional neural network includes a first network structure, a second network structure, a third network structure, a fourth network structure, a fifth network structure, a sixth network structure, and a branch network structure, each of the first network structure, the third network structure, the fourth network structure, the sixth network structure, and the branch network structure includes a convolution kernel having a size of 3×3, and each of the second network structure and the fifth network structure includes a convolution kernel having a size of 1×1.
Specifically, the output end of the first network structure is connected with the input end of the second network structure, the output end of the second network structure is connected with the input end of the third network structure, the output end of the third network structure is connected with the input end of the fourth network structure, the output end of the fourth network structure is connected with the input end of the fifth network structure, and the output end of the fifth network structure is connected with the input end of the sixth network structure. The first network structure comprises a convolution kernel of 3×3, the input of the first network structure is 16 input subgraphs, and the output of the first network structure is 256 first output subgraphs; the second network structure comprises a convolution kernel of 1×1, the input of the second network structure is 256 first output subgraphs, and the output of the second network structure is 128 second output subgraphs; the third network structure comprises a convolution kernel of 3×3, the input of the third network structure is 128 second output subgraphs, and the output of the third network structure is 128 third output subgraphs; the fourth network structure comprises a convolution kernel of 3×3, the input of the fourth network structure is 128 third output subgraphs, and the output of the fourth network structure is 128 fourth output subgraphs; the fifth network structure comprises a convolution kernel of 1×1, the input of the fifth network structure is 128 fourth output subgraphs, and the output of the fifth network structure is 128 fifth output subgraphs; the sixth network structure includes a convolution kernel of 3×3, the input of the sixth network structure is 128 fifth output subgraphs, and the output of the sixth network structure is 16 sixth output subgraphs.
The branch network structure connects the input of the first network structure and the output of the sixth network structure, and includes a convolution kernel of size 3×3. The 16 input subgraphs input to the convolutional neural network are processed by the branch network structure and then added to the 16 sixth output subgraphs output by the sixth network structure, and the result is the output of the convolutional neural network. Thus, based on the structure of a residual network, the fitting capacity of the convolutional neural network is increased.
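For illustration, a minimal PyTorch sketch of the convolutional neural network described above; the layer widths follow the text, while the absence of activation functions simply reflects that the description does not specify any, and training details are likewise omitted.

```python
import torch
from torch import nn

class PresetCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(16, 256, 3, padding=1),    # first network structure, 3x3
            nn.Conv2d(256, 128, 1),              # second network structure, 1x1
            nn.Conv2d(128, 128, 3, padding=1),   # third network structure, 3x3
            nn.Conv2d(128, 128, 3, padding=1),   # fourth network structure, 3x3
            nn.Conv2d(128, 128, 1),              # fifth network structure, 1x1
            nn.Conv2d(128, 16, 3, padding=1),    # sixth network structure, 3x3
        )
        self.branch = nn.Conv2d(16, 16, 3, padding=1)   # branch network structure

    def forward(self, x):                      # x: (B, 16, H, W) input subgraphs
        return self.body(x) + self.branch(x)   # residual-style addition

out = PresetCNN()(torch.randn(1, 16, 3, 3))    # 16 sixth output subgraphs
```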
Referring to fig. 18, in some embodiments, the target color includes green, the preset color includes red, blue, cyan, magenta, and yellow, and step 017 includes:
0171: taking the first full-size image as a guide image, and interpolating red pixel values of positions of other pixels except the red pixels in the image to be processed in a filtering mode to obtain a second full-size image with red;
0173: taking the first full-size image as a guide image, and interpolating blue pixel values of positions of other pixels except blue pixels in the image to be processed in a filtering mode to obtain a second full-size image of blue;
0175: taking the first full-size image as a guide image, and interpolating cyan pixel values of positions of other pixels except the cyan pixel in the image to be processed in a filtering mode to obtain a second full-size image of cyan;
0177: taking the first full-size image as a guide image, and interpolating magenta pixel values of positions of other pixels except magenta pixels in the image to be processed in a filtering mode to obtain a second full-size image of magenta;
0179: and taking the first full-size image as a guide image, and interpolating yellow pixel values of positions of other pixels except the yellow pixels in the image to be processed in a filtering mode to obtain a second full-size image of yellow.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the third determination module 17 includes a first filtering unit, a second filtering unit, a third filtering unit, a fourth filtering unit, and a fifth filtering unit. The first filtering unit is used for taking the first full-size image as a guide image, and interpolating red pixel values of positions of other pixels except the red pixels in the image to be processed in a filtering mode to obtain a second full-size image with red. The second filtering unit is used for taking the first full-size image as a guide image, and interpolating blue pixel values of positions of other pixels except the blue pixel in the image to be processed in a filtering mode to obtain a second full-size image with blue color. The third filtering unit is used for taking the first full-size image as a guide image, and interpolating cyan pixel values of positions of other pixels except the cyan pixel in the image to be processed in a filtering mode to obtain a second full-size image of cyan. The fourth filtering unit is used for taking the first full-size image as a guide image, and interpolating magenta pixel values of positions of other pixels except the magenta pixel in the image to be processed in a filtering mode to obtain a second full-size image of magenta. And the fifth filtering unit is used for taking the first full-size image as a guide image, and interpolating yellow pixel values of positions of other pixels except the yellow pixels in the image to be processed in a filtering mode to obtain a second full-size image of yellow.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to take the first full-size image as the guide image and, by means of filtering, interpolate the red pixel values at the positions of pixels other than red pixels in the image to be processed to obtain a red second full-size image; interpolate the blue pixel values at the positions of pixels other than blue pixels to obtain a blue second full-size image; interpolate the cyan pixel values at the positions of pixels other than cyan pixels to obtain a cyan second full-size image; interpolate the magenta pixel values at the positions of pixels other than magenta pixels to obtain a magenta second full-size image; and interpolate the yellow pixel values at the positions of pixels other than yellow pixels to obtain a yellow second full-size image.
In this way, in the case where the first full-size image having the target color has been obtained, the plurality of second full-size images having the preset colors can be obtained by filtering.
In particular, the filtering may include guided filtering or joint bilateral filtering. Guided filtering and joint bilateral filtering are two different fusion strategies, either of which can be used to guide the interpolation of a color channel and fill in its missing pixel values. The following describes the scheme of the embodiments of the present application in detail, taking joint bilateral filtering as an example. It will be appreciated that the technical solution of the embodiments of the present application may also be implemented using guided filtering, which is not limited herein.
Referring to fig. 19, the image corresponding to the color channel of each preset color (for example, magenta) in the image to be processed is respectively used as the image to be filtered I, the first full-size image is used as the guide image I′, and the second full-size image (the output image J) is obtained after joint bilateral filtering. The process of joint bilateral filtering can be expressed by the following formula: J_p = (1/k_p) · Σ_{q∈Ω} f(‖p−q‖) · g(‖I′_p − I′_q‖) · I_q, where k_p = Σ_{q∈Ω} f(‖p−q‖) · g(‖I′_p − I′_q‖). Here, J_p is the pixel value of the output image and k_p is the sum of the weights. Ω is the filter window (which may be of size 7×7), p is the coordinate of the pixel to be filtered in the image to be filtered, q is the coordinate of a pixel within the filter window in the image to be filtered, I_q is the pixel value at point q, I′_p is the pixel value of the pixel to be filtered in the guide image, and I′_q is the pixel value at point q in the guide image. f is the weight assigned to each coordinate of the filter window; it is fixed, and the closer a coordinate is to the center of the window, the larger its weight. g is the weight based on the difference between the pixel at another position and the center pixel; the larger the difference, the smaller the weight.
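The following is a minimal NumPy sketch of the joint bilateral filtering expressed by the formula above. The Gaussian forms of f and g, the parameters sigma_s and sigma_r, and the function name are assumptions of this sketch rather than details taken from the embodiment; the 7×7 window corresponds to radius = 3.

```python
import numpy as np

def joint_bilateral_filter(I, I_guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Sketch of J_p = (1/k_p) * sum_q f(||p-q||) g(||I'_p - I'_q||) I_q.

    I       : image to be filtered (one preset-color channel)
    I_guide : guide image I' (the first full-size image)
    """
    H, W = I.shape
    J = np.zeros((H, W), dtype=np.float64)
    # Spatial weights f(||p - q||): fixed, larger near the window center.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    f = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))
    Ipad = np.pad(I.astype(np.float64), radius, mode="reflect")
    Gpad = np.pad(I_guide.astype(np.float64), radius, mode="reflect")
    for y in range(H):
        for x in range(W):
            win_I = Ipad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            win_G = Gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights g(||I'_p - I'_q||), taken from the guide image.
            g = np.exp(-((win_G - I_guide[y, x]) ** 2) / (2.0 * sigma_r**2))
            w = f * g
            J[y, x] = (w * win_I).sum() / w.sum()  # divide by k_p
    return J
```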
Further, taking the magenta color channel as an example, if the coordinate of the pixel to be filtered is (i, j), the interpolation of the magenta pixel value at coordinate (i, j) can be expressed as: M(i, j) = G(i, j) · meanM / meanG, where meanG = sum(sum(HF .* I′)) and meanM = sum(sum(HF .* I)). I′ denotes a window of G pixels in the guide image (which may be of size 7×7). I denotes the corresponding window of M pixels in the image to be filtered; at positions where there is no M pixel, the matrix value is 0. meanG denotes the weighted sum of the G pixels in the window, and meanM denotes the weighted sum of the local M pixels. H denotes the distance weight matrix (corresponding to f above), and F denotes the pixel-difference weight matrix (corresponding to g above). HF denotes the element-wise product of H and F.
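As an illustration of this single-pixel computation, the sketch below evaluates M(i, j) = G(i, j) · meanM / meanG with a 7×7 window. The Gaussian form of the pixel-difference weights, the parameter sigma_r, and the function name are assumptions of the sketch; H_w stands for the fixed distance weight matrix H, and the M channel is assumed to be stored full-size with zeros at positions carrying no M sample, as described above.

```python
import numpy as np

def interpolate_m_at(G, M_sparse, i, j, H_w, sigma_r=0.1):
    """Interpolate magenta at (i, j): M(i, j) = G(i, j) * meanM / meanG.

    G        : guide plane (the first full-size image)
    M_sparse : magenta plane, 0 where no M sample exists
    H_w      : 7x7 distance weight matrix H
    (i, j) is assumed to lie at least one window radius from the border.
    """
    r = H_w.shape[0] // 2                      # window radius, 3 for 7x7
    win_G = G[i - r:i + r + 1, j - r:j + r + 1]
    win_M = M_sparse[i - r:i + r + 1, j - r:j + r + 1]
    # Pixel-difference weight matrix F, computed from the guide window.
    F = np.exp(-((win_G - G[i, j]) ** 2) / (2.0 * sigma_r**2))
    HF = H_w * F                               # element-wise product of H and F
    meanG = (HF * win_G).sum()                 # sum(sum(HF .* I'))
    meanM = (HF * win_M).sum()                 # sum(sum(HF .* I))
    return G[i, j] * meanM / meanG
```

The same computation, with the corresponding sparse channel in place of M_sparse, would be repeated for the red, blue, cyan, and yellow channels to obtain the five second full-size images.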
Referring to fig. 20, in some embodiments, after step 011, the image processing method further includes:
023: determining a pixel gradient value of the local image in a preset direction;
025: and taking the preset direction with the minimum pixel gradient value as the texture direction of the local image.
The image processing method of the above-described embodiment may be implemented by the image processing apparatus 100 of the embodiment of the present application. Specifically, the image processing apparatus 100 further includes a sixth determination module and a seventh determination module. The sixth determining module is used for determining pixel gradient values of the local image in a preset direction. The seventh determining module is used for taking the preset direction with the minimum pixel gradient value as the texture direction of the local image.
The image processing method of the above-described embodiment may be implemented by the electronic apparatus 1000 of the embodiment of the present application. Specifically, the processor 200 is configured to determine a pixel gradient value of the local image in a preset direction, and is configured to use a preset direction with a minimum pixel gradient value as a texture direction of the local image.
Thus, the preset direction in which the pixel gradient value is the smallest can be determined.
Specifically, when the filter set is in the arrangement shown in fig. 4, the preset directions may include the horizontal direction (E, W), the vertical direction (N, S), the oblique 22.5° direction (AD), the anticlockwise 22.5° direction (DU), the oblique 45° direction (A), the anticlockwise 45° direction (D), the oblique 67.5° direction (AU), and the anticlockwise 67.5° direction (DD). The first direction may include the oblique 45° direction and the anticlockwise 45° direction. The second directions are the preset directions other than the first direction; when the first direction includes the oblique 45° direction and the anticlockwise 45° direction, the second directions may include the horizontal direction, the vertical direction, the oblique 22.5° direction, the anticlockwise 22.5° direction, the oblique 67.5° direction, and the anticlockwise 67.5° direction.
The pixel gradient value may be determined by accumulating absolute values of differences between a plurality of pixels. For example, referring to fig. 5, the pixel gradient value grad_E in the horizontal direction E can be determined by the following formula: grad_E = abs(raw(5,5)-raw(5,9)) + abs(raw(4,5)-raw(4,9)) + abs(raw(6,5)-raw(6,9)) + abs(raw(6,5)-raw(5,6)-raw(5,8)) + abs(raw(4,5)-raw(4,6)+raw(2,6)) + abs(raw(6,5)-raw(6,6)+raw(6,7)). The pixel gradient value grad_N in the vertical direction N can be determined by the following formula: grad_N = abs(raw(5,5)-raw(1,5)) + abs(raw(5,6)-raw(1,6)) + abs(raw(5,4)-raw(1,4)) + abs(raw(5,5)-raw(2,5)-raw(4,5)) + abs(raw(5,6)-raw(3,6)-raw(2,6)) + abs(raw(5,4)-raw(3,4)-raw(2,4)). The pixel gradient value grad_A in the oblique 45° direction A can be determined by the following formula: grad_A = abs(raw(5,5)-raw(1,9)) + abs(raw(5,5)-raw(9,1)) + abs(raw(4,6)-raw(6,4)) + abs(raw(7,3)-raw(3,7)) + abs(raw(7,4)-raw(4,7)) + abs(raw(6,3)-raw(3,6)) + abs(raw(8,4)-raw(4,8)). The pixel gradient value grad_AD in the oblique 22.5° direction AD can be determined by the following formula: grad_AD = abs(raw(5,4)-raw(4,9)) + abs(raw(6,3)-raw(5,8)) + abs(raw(6,4)-raw(5,7)) + abs(raw(5,3)-raw(4,6)) + abs(raw(5,5)-raw(6,2)) + abs(raw(4,4)-raw(3,7)) + abs(raw(4,3)-raw(3,8)). The pixel gradient values in the remaining preset directions can be determined in a similar manner.
In some embodiments, let Grad = [grad_N, grad_S, grad_E, grad_W, grad_AD, grad_A, grad_AU, grad_DD, grad_D, grad_DU] and calculate [minGrad, dir] = min(Grad). If dir = 1 or 2, the texture direction is determined to be the vertical direction; if dir = 3 or 4, the texture direction is determined to be the horizontal direction; if dir = 5, the oblique 22.5° direction; if dir = 6, the oblique 45° direction; if dir = 7, the oblique 67.5° direction; if dir = 8, the anticlockwise 67.5° direction; if dir = 9, the anticlockwise 45° direction; if dir = 10, the anticlockwise 22.5° direction.
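This selection can be sketched as follows. The direction labels and the 1-based index convention follow the description above, while the function name and the use of NumPy are assumptions of the sketch.

```python
import numpy as np

# Order matches the Grad vector above: [N, S, E, W, AD, A, AU, DD, D, DU].
DIRECTION_NAMES = [
    "vertical", "vertical",        # dir = 1 or 2 (N, S)
    "horizontal", "horizontal",    # dir = 3 or 4 (E, W)
    "oblique 22.5°",               # dir = 5 (AD)
    "oblique 45°",                 # dir = 6 (A)
    "oblique 67.5°",               # dir = 7 (AU)
    "anticlockwise 67.5°",         # dir = 8 (DD)
    "anticlockwise 45°",           # dir = 9 (D)
    "anticlockwise 22.5°",         # dir = 10 (DU)
]

def texture_direction(grads):
    """Return the texture direction with the minimum pixel gradient value.

    grads: the ten directional gradient values, accumulated from absolute
    pixel differences as in the formulas above, in the order
    [N, S, E, W, AD, A, AU, DD, D, DU].
    """
    grads = np.asarray(grads, dtype=np.float64)
    dir_idx = int(np.argmin(grads))    # 0-based; the text uses 1-based dir
    return DIRECTION_NAMES[dir_idx], float(grads[dir_idx])
```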
It is noted that the specific values mentioned above are only intended to illustrate the implementation of the present application in detail and are not to be construed as limiting the present application. In other embodiments or examples, other values may be selected, which is not specifically limited herein.
The computer-readable storage medium according to an embodiment of the present application stores a computer program that, when executed by a processor, implements the steps of the image processing method according to any of the above embodiments.
For example, when the program is executed by a processor, the steps of the following image processing method are implemented (a sketch of this per-pixel dispatch logic is given after the list):
011: when the color of the current pixel in the image to be processed is not the target color, determining a pixel variance total value of a local image taking the current pixel as a center;
013: when the pixel variance total value is greater than or equal to a preset threshold value and the texture direction of the local image is the first direction, determining the pixel value of the target color of the position of the current pixel according to a preset neural network;
015: respectively taking each pixel in the image to be processed as a current pixel and processing to obtain a first full-size image with a target color;
017: and determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
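The following sketch ties steps 011, 013, and 015 together as per-pixel dispatch logic. Every callable passed in is a hypothetical placeholder for an operation described elsewhere in this document (the variance total computation, texture direction estimation, the color-ratio method, the preset neural network, and directional averaging); the signature, window radius, and threshold handling are assumptions of the sketch.

```python
from typing import Callable, Tuple
import numpy as np

def first_full_size_image(
    raw: np.ndarray,
    is_target: Callable[[int, int], bool],            # (y, x) already target color?
    variance_total: Callable[[np.ndarray], float],    # step 011
    texture_direction: Callable[[np.ndarray], str],
    ratio_interp: Callable[[np.ndarray], float],      # flat-region ratio method
    net_interp: Callable[[int, int], float],          # step 013, preset network
    dir_interp: Callable[[np.ndarray, str], float],   # second-direction averaging
    threshold: float,
    first_directions: Tuple[str, ...] = ("oblique 45°", "anticlockwise 45°"),
    radius: int = 4,
) -> np.ndarray:
    """Sketch of step 015: treat each pixel as the current pixel in turn."""
    out = raw.astype(np.float64).copy()
    H, W = raw.shape
    pad = np.pad(out, radius, mode="reflect")   # windows read original values
    for y in range(H):
        for x in range(W):
            if is_target(y, x):
                continue                         # keep existing target samples
            local = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            if variance_total(local) < threshold:
                out[y, x] = ratio_interp(local)  # flat region: ratio method
            else:
                d = texture_direction(local)
                if d in first_directions:
                    out[y, x] = net_interp(y, x)      # first direction: network
                else:
                    out[y, x] = dir_interp(local, d)  # second direction: average
    return out
```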
It is understood that the computer program includes computer program code. The computer program code may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and so forth. The processor may be a central processing unit, or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples described in this specification, and features thereof, provided that they do not contradict each other.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application; changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (15)

1. An image processing method, characterized in that the image processing method comprises:
when the color of the current pixel in the image to be processed is not the target color, determining a pixel variance total value of a local image taking the current pixel as the center;
when the pixel variance total value is larger than or equal to a preset threshold value and the texture direction of the local image is a first direction, determining a pixel value of the target color of the position of the current pixel according to a preset neural network;
Respectively taking each pixel in the image to be processed as the current pixel and processing the current pixel to obtain a first full-size image with the target color;
determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color;
the pixel variance total value is the sum of the pixel variance values of all color channels in the local image.
2. The image processing method according to claim 1, wherein the local image includes a plurality of color channels corresponding to a first color space and a second color space, and the determining a pixel variance total value of the local image centered on the current pixel includes:
determining a pixel variance value of each color channel in the local image according to the pixel mean value of each color channel in the local image and the pixel values of each color channel;
calculating a sum of the pixel variance values of the color channels as the pixel variance total value of the local image.
3. The image processing method according to claim 1, wherein before each pixel in the image to be processed is respectively taken as the current pixel and processed to obtain the first full-size image having the target color, the image processing method further comprises:
when the pixel variance total value is smaller than the preset threshold value, determining the pixel value of the target color of the position where the current pixel is located according to all pixels whose color is the target color and all pixels having the same color as the current pixel in the local image.
4. The image processing method according to claim 3, wherein the determining the pixel value of the target color of the position of the current pixel according to all pixels whose color is the target color and all pixels having the same color as the current pixel in the local image includes:
determining a first color ratio constant according to a first pixel average value of all pixels whose color is the target color in the local image and a second pixel average value of all pixels having the same color as the current pixel in the local image;
and determining the pixel value of the target color of the position of the current pixel according to the first color ratio constant and the pixel value of the current pixel.
5. The image processing method according to claim 1, wherein the determining the pixel value of the target color of the position of the current pixel according to a preset neural network includes:
Determining a third full-size image with the target color according to the image to be processed and the preset neural network;
and determining the pixel value of the target color of the position of the current pixel according to the third full-size image.
6. The image processing method according to claim 5, wherein the image to be processed includes a plurality of preset windows, each of the preset windows including a plurality of pixel blocks, and the determining a third full-size image having the target color according to the image to be processed and the preset neural network includes:
merging pixels at the same position in each pixel block in the preset window to serve as an input subgraph;
and determining the third full-size image with the target color according to the input subgraph and the preset neural network.
7. The image processing method according to claim 6, wherein the preset neural network comprises a convolutional neural network, the convolutional neural network comprising a first network structure, a second network structure, a third network structure, a fourth network structure, a fifth network structure, a sixth network structure, and a branch network structure, the first network structure, the third network structure, the fourth network structure, the sixth network structure, and the branch network structure each comprising a convolution kernel of size 3×3, and the second network structure and the fifth network structure each comprising a convolution kernel of size 1×1.
8. The image processing method according to claim 1, wherein before each pixel in the image to be processed is respectively taken as the current pixel and processed to obtain the first full-size image having the target color, the image processing method further comprises:
when the pixel variance total value is greater than or equal to the preset threshold value and the texture direction of the local image is a second direction, determining the pixel value of the target color of the position where the current pixel is located according to pixels whose color in the second direction is the target color.
9. The image processing method according to claim 8, wherein the determining the pixel value of the target color of the position of the current pixel according to pixels whose color in the second direction is the target color includes:
determining, with the current pixel as a center, the two pixels closest to the current pixel in the second direction whose color is the target color;
and taking the average value of the pixel values of the two pixels as the pixel value of the target color of the position of the current pixel.
10. The image processing method according to claim 1, wherein the target color includes green, the preset color includes red, blue, cyan, magenta, and yellow, and the determining a second full-size image having a preset color according to the first full-size image and the image to be processed includes:
taking the first full-size image as a guide image, and interpolating, by filtering, red pixel values at the positions of pixels other than red pixels in the image to be processed, to obtain a red second full-size image;
taking the first full-size image as a guide image, and interpolating, by filtering, blue pixel values at the positions of pixels other than blue pixels in the image to be processed, to obtain a blue second full-size image;
taking the first full-size image as a guide image, and interpolating, by filtering, cyan pixel values at the positions of pixels other than cyan pixels in the image to be processed, to obtain a cyan second full-size image;
taking the first full-size image as a guide image, and interpolating, by filtering, magenta pixel values at the positions of pixels other than magenta pixels in the image to be processed, to obtain a magenta second full-size image;
taking the first full-size image as a guide image, and interpolating, by filtering, yellow pixel values at the positions of pixels other than yellow pixels in the image to be processed, to obtain a yellow second full-size image.
11. The image processing method according to claim 1, wherein, when the color of the current pixel in the image to be processed is not the target color, after the determining a pixel variance total value of the local image centered on the current pixel, the image processing method further comprises:
determining a pixel gradient value of the local image in a preset direction;
and taking the preset direction with the minimum pixel gradient value as the texture direction of the local image.
12. The image processing method according to any one of claims 1 to 11, wherein the image to be processed includes a plurality of pixel blocks, each of the pixel blocks includes a plurality of pixel units, each of the pixel units includes two first pixels corresponding to one color in a first color space and two second pixels corresponding to one color in a second color space, the two first pixels are arranged in a first diagonal direction, the two second pixels are arranged in a second diagonal direction, and the target color is one color of the first color space or the second color space.
13. An image processing apparatus, characterized in that the image processing apparatus comprises:
a first determining module, configured to determine a pixel variance total value of a local image centered on a current pixel in an image to be processed when a color of the current pixel is not a target color;
a second determining module, configured to determine a pixel value of the target color of the position where the current pixel is located according to a preset neural network when the pixel variance total value is greater than or equal to a preset threshold value and the texture direction of the local image is a first direction;
a processing module, configured to respectively take each pixel in the image to be processed as the current pixel and process it to obtain a first full-size image having the target color;
a third determining module, configured to determine a second full-size image having a preset color according to the first full-size image and the image to be processed, where the preset color is different from the target color;
the pixel variance total value is the sum of the pixel variance values of all color channels in the local image.
14. An electronic device comprising one or more processors and a memory storing a computer program which, when executed by the processor, implements the steps of the image processing method of any of claims 1-12.
15. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the image processing method of any of claims 1-12.
CN202111084966.4A 2021-09-16 2021-09-16 Image processing method, image processing apparatus, electronic device, and storage medium Active CN113781350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111084966.4A CN113781350B (en) 2021-09-16 2021-09-16 Image processing method, image processing apparatus, electronic device, and storage medium


Publications (2)

Publication Number Publication Date
CN113781350A CN113781350A (en) 2021-12-10
CN113781350B (en) 2023-11-24

Family

ID=78844449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111084966.4A Active CN113781350B (en) 2021-09-16 2021-09-16 Image processing method, image processing apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113781350B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201322450D0 (en) * 2013-12-18 2014-02-05 Imagination Tech Ltd Defence pixel fixing
WO2015093253A1 (en) * 2013-12-20 2015-06-25 株式会社メガチップス Pixel interpolation apparatus, image capture apparatus, program, and integrated circuit
CN106530252A (en) * 2016-11-08 2017-03-22 北京小米移动软件有限公司 Image processing method and device
CN109636753A (en) * 2018-12-11 2019-04-16 珠海奔图电子有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN110189339A (en) * 2019-06-03 2019-08-30 重庆大学 The active profile of depth map auxiliary scratches drawing method and system
CN112053417A (en) * 2019-06-06 2020-12-08 西安诺瓦星云科技股份有限公司 Image processing method, apparatus and system, and computer-readable storage medium
CN112598758A (en) * 2020-10-22 2021-04-02 努比亚技术有限公司 Image processing method, mobile terminal and computer storage medium
CN112801882A (en) * 2019-11-14 2021-05-14 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112843697A (en) * 2021-02-02 2021-05-28 网易(杭州)网络有限公司 Image processing method and device, storage medium and computer equipment
CN112999654A (en) * 2021-03-04 2021-06-22 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113362261A (en) * 2020-03-04 2021-09-07 杭州海康威视数字技术股份有限公司 Image fusion method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fabric defect image segmentation based on gray-level histogram back-projection; Sun Guodong et al.; Manufacturing Automation; full text *
Research on Camshift target tracking algorithm combining texture features; Yang Lei et al.; Electronic Design Engineering; full text *


Similar Documents

Publication Publication Date Title
JP5529048B2 (en) Interpolation system and method
US8237831B2 (en) Four-channel color filter array interpolation
US8203633B2 (en) Four-channel color filter array pattern
WO2021047345A1 (en) Image noise reduction method and apparatus, and storage medium and electronic device
CN110557584B (en) Image processing method and device, and computer readable storage medium
WO2010141055A2 (en) Color filter array pattern having four-channels
CN113168669B (en) Image processing method, device, electronic equipment and readable storage medium
CN113170061B (en) Image sensor, imaging device, electronic apparatus, image processing system, and signal processing method
JP2008070853A (en) Compensation method of image array data
Chen et al. Effective demosaicking algorithm based on edge property for color filter arrays
JP2000134634A (en) Image converting method
CN110430403B (en) Image processing method and device
WO2024027287A1 (en) Image processing system and method, and computer-readable medium and electronic device
JP2015139141A (en) image processing apparatus, image processing method and program
CN108307162B (en) Efficient and flexible color processor
CN113781350B (en) Image processing method, image processing apparatus, electronic device, and storage medium
JP6415094B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
CN109672810B (en) Image processing apparatus, image processing method, and storage medium
JP5484015B2 (en) Imaging apparatus, imaging method, and program
CN115187487A (en) Image processing method and device, electronic device and storage medium
JP2010233241A (en) Image processing apparatus, image processing method, program and recording medium
Chung et al. New joint demosaicing and zooming algorithm for color filter array
CN113781349A (en) Image processing method, image processing apparatus, electronic device, and storage medium
JP5818568B2 (en) Image processing apparatus and control method thereof
JP2020091910A (en) Image processing device, image processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant