CN113781350A - Image processing method, image processing apparatus, electronic device, and storage medium - Google Patents


Publication number
CN113781350A
CN113781350A
Authority
CN
China
Prior art keywords
pixel
image
color
full
value
Prior art date
Legal status
Granted
Application number
CN202111084966.4A
Other languages
Chinese (zh)
Other versions
CN113781350B (en)
Inventor
邓宇帆
李小涛
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111084966.4A
Publication of CN113781350A
Application granted
Publication of CN113781350B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10024 — Color image
    • G06T 2207/20021 — Dividing image into blocks, subimages or windows
    • G06T 2207/20024 — Filtering details
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]


Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a storage medium. The image processing method includes: when the color of a current pixel in an image to be processed is not a target color, determining a pixel variance total value of a local image centered on the current pixel; when the pixel variance total value is greater than or equal to a preset threshold and the texture direction of the local image is a first direction, determining the pixel value of the target color at the position of the current pixel according to a preset neural network; taking each pixel in the image to be processed in turn as the current pixel and processing it to obtain a first full-size image of the target color; and determining a second full-size image of a preset color according to the first full-size image and the image to be processed, where the preset color is different from the target color. The image processing method, the image processing apparatus, the electronic device, and the storage medium thereby provide a basis for multi-channel color restoration.

Description

Image processing method, image processing apparatus, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
In the related art, a camera includes a filter array and a photosensitive pixel array. The filter array includes a plurality of filters of a plurality of colors, the photosensitive pixel array includes a plurality of photosensitive pixels, and each filter corresponds to one photosensitive pixel. External light reaches a photosensitive pixel through its filter; the photosensitive pixel converts the received optical signal into an electrical signal and outputs it, and the output electrical signals are processed by a series of algorithms to obtain a target image. However, filter arrays can be arranged in various ways, such as a Bayer array, a Quad Bayer array, or an RGBW array, and the processing algorithms for differently arranged filter arrays may differ. A corresponding processing algorithm therefore needs to be designed for each filter array arrangement.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a storage medium.
The image processing method of the embodiment of the application comprises the following steps: when the color of a current pixel in an image to be processed is not a target color, determining a pixel variance total value of a local image taking the current pixel as a center; when the total pixel variance value is larger than or equal to a preset threshold value and the texture direction of the local image is a first direction, determining the pixel value of the target color at the position of the current pixel according to a preset neural network; respectively taking each pixel in the image to be processed as the current pixel and processing to obtain a first full-size image with the target color; and determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
The image processing device of the embodiment of the application comprises a first determining module, a second determining module, a processing module and a third determining module. The first determining module is used for determining a pixel variance total value of a local image taking a current pixel as a center when the color of the current pixel in the image to be processed is not a target color. The second determining module is used for determining the pixel value of the target color at the position of the current pixel according to a preset neural network when the total pixel variance value is larger than or equal to a preset threshold value and the texture direction of the local image is a first direction. The processing module is used for respectively taking each pixel in the image to be processed as the current pixel and processing the current pixel to obtain a first full-size image with the target color. The third determining module is used for determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
The electronic device of embodiments of the present application includes one or more processors and memory. The memory stores a computer program. The steps of the image processing method according to the above-described embodiment are implemented when the computer program is executed by the processor.
The computer-readable storage medium of the embodiments of the present application stores a computer program which, when executed by a processor, implements the steps of the image processing method described in the above embodiments.
In the image processing method, the image processing device, the electronic device and the storage medium, the first full-size image with the target color can be obtained by interpolating the pixel value of the target color at the position of the pixel which is not the target color, and then the second full-size image with the preset color can be obtained according to the first full-size image and the image to be processed, so that the demosaicing effect is realized, the resolution of the image of each color channel in the full-size mode is improved, and a basis is provided for the color restoration of multiple channels.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic view of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic view of a filter set of an electronic device according to an embodiment of the present application;
FIG. 5 is a diagram illustrating preset directions of an image processing method according to an embodiment of the present application;
FIGS. 6-9 are schematic flowcharts of image processing methods according to embodiments of the present application;
FIG. 10 is a scene schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 12 is a scene schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 14 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a pixel block of an image to be processed in the image processing method according to an embodiment of the present application;
FIG. 16 is a scene schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a convolutional neural network of the image processing method according to an embodiment of the present application;
FIG. 18 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 19 is a scene schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 20 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative and are only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In the description of the embodiments of the present application, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
Referring to fig. 1 to 3, an image processing method according to an embodiment of the present disclosure includes:
011: when the color of a current pixel in an image to be processed is not a target color, determining a pixel variance total value of a local image taking the current pixel as a center;
013: when the total pixel variance value is larger than or equal to a preset threshold value and the texture direction of the local image is a first direction, determining the pixel value of the target color at the position of the current pixel according to a preset neural network;
015: respectively taking each pixel in the image to be processed as a current pixel and processing to obtain a first full-size image with a target color;
017: and determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
The image processing method according to the embodiment of the present application can be realized by the image processing apparatus 100 according to the embodiment of the present application. Specifically, the image processing apparatus 100 includes a first determination module 11, a second determination module 13, a processing module 15, and a third determination module 17. The first determining module 11 is configured to determine a total value of pixel variances of a local image centered on a current pixel when a color of the current pixel in the image to be processed is not a target color. The second determining module 13 is configured to determine, according to a preset neural network, a pixel value of a target color at a position where a current pixel is located when the total pixel variance value is greater than or equal to a preset threshold and the texture direction of the local image is the first direction. The processing module 15 is configured to respectively take each pixel in the image to be processed as a current pixel and perform processing to obtain a first full-size image with a target color. The third determining module 17 is configured to determine a second full-size image with a preset color according to the first full-size image and the image to be processed, where the preset color is different from the target color.
The image processing method according to the embodiment of the present application can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the electronic device 1000 of the embodiments of the present application includes one or more processors 200 and a memory 300. The memory 300 stores a computer program. When executed by the processor 200, the computer program realizes step 011, step 013, step 015, and step 017 of the image processing method according to the embodiment of the present application.
In the image processing method, the image processing apparatus 100, and the electronic device 1000, the first full-size image with the target color can be obtained by interpolating the pixel value of the target color at the position of the pixel that is not the target color, and then the second full-size image with the preset color can be obtained according to the first full-size image and the image to be processed, so that the demosaicing effect is realized, the resolution of the image of each color channel in the full-size mode is improved, and a basis is provided for the color restoration of multiple channels.
In particular, in some embodiments, an electronic device includes a housing and a camera module combined with the housing. The camera module includes an image sensor, and the image sensor includes a photosensitive pixel array and a filter array. The photosensitive pixel array includes a plurality of photosensitive pixels. The filter array includes a plurality of filter units, each filter unit includes a plurality of filter groups, and each filter group includes a first filter corresponding to one color in a first color space and a second filter corresponding to one color in a second color space. Each first filter covers one photosensitive pixel, and each second filter covers one photosensitive pixel. A photosensitive pixel receives the external light that passes through its corresponding filter and generates a corresponding electrical signal, and the image to be processed can be determined from the electrical signals generated by all the photosensitive pixels. The first color space may include Red (R), Green (G), and Blue (B). The second color space may include Magenta (M), Yellow (Y), and Cyan (C).
Referring to fig. 4, in some embodiments, a filter set 400 includes a first filter group 402, two second filter groups 404, and a third filter group 406. In the first filter group 402, the first filter is a red filter 4022 and the second filter is a magenta filter 4024; in each second filter group 404, the first filter is a green filter 4042 and the second filter is a yellow filter 4044; in the third filter group 406, the first filter is a blue filter 4062 and the second filter is a cyan filter 4064. That is, within one filter set 400, the G filters 4042 and the Y filters 4044 each account for 25%, and the R filters 4022, B filters 4062, C filters 4064, and M filters 4024 each account for 12.5%. The four filter groups of the filter set 400 are arranged in a 2 × 2 pattern, in which the two second filter groups 404 are distributed along a third diagonal direction E1, and the first filter group 402 and the third filter group 406 are distributed along a fourth diagonal direction E2. Each filter group includes two first filters and two second filters arranged in a 2 × 2 pattern, in which the two first filters are arranged along a first diagonal direction F1 and the two second filters are arranged along a second diagonal direction F2. The first diagonal direction F1 and the second diagonal direction F2 are used only to indicate that the arrangement directions of the first filters and the second filters differ; they do not refer to fixed diagonal directions. Similarly, the third diagonal direction E1 and the fourth diagonal direction E2 are used only to indicate that the arrangement direction of the two second filter groups 404 differs from that of the first filter group 402 and the third filter group 406, and do not refer to fixed diagonal directions. For example, the first diagonal direction F1 and the third diagonal direction E1 may be 45° oblique directions, and the second diagonal direction F2 and the fourth diagonal direction E2 may be 45° reverse-oblique directions.
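The 4 × 4 arrangement described above can be sketched as a small mosaic pattern. The following Python/NumPy snippet is an illustrative assumption consistent with the described group placement (the concrete diagonal orientations are one valid choice, not the patent's literal figure):

```python
import numpy as np

# One 4x4 filter set: four 2x2 filter groups arranged 2x2.
# The two G/Y groups lie on one diagonal of the set; the R/M group
# and the B/C group lie on the other diagonal. Within each group,
# the two first-color filters sit on one diagonal and the two
# second-color filters on the other.
def make_group(first, second):
    return np.array([[first, second],
                     [second, first]])

filter_set = np.block([
    [make_group("R", "M"), make_group("G", "Y")],
    [make_group("G", "Y"), make_group("B", "C")],
])

# Check the stated proportions: G and Y each 25%, R/B/C/M each 12.5%.
flat = filter_set.ravel().tolist()
ratios = {c: flat.count(c) / len(flat) for c in "RGBCMY"}
print(filter_set)
print(ratios)  # G and Y -> 0.25; R, B, C, M -> 0.125
```

Tiling this 4 × 4 set across the sensor yields the mosaic of the image to be processed described below.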
The image to be processed comprises a plurality of pixels, the color of each pixel is the same as the color of the corresponding optical filter, the color of the optical filter comprises a plurality of colors, and therefore the image to be processed comprises the pixels with the plurality of colors. In some embodiments, the image to be processed includes R pixels, G pixels, B pixels, M pixels, Y pixels, and C pixels.
The partial image is a part of the image to be processed. The current pixel is located in the center of the partial image. The size of the partial image includes, but is not limited to, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, and the like. In one example, the size of the partial image is 7 × 7 pixels, which can achieve a better image processing result.
The total value of the pixel variance may be determined from the pixel values of the pixels in the local image.
The texture direction of the local image may be the direction, among preset directions, in which the pixel gradient is smallest. Referring to fig. 5, in some embodiments, a direction is defined every 22.5° within 0 to 180°, denoted E, AD, A, AU, N, DD, D, DU, W, and S, where E and W refer to the horizontal direction and N and S refer to the vertical direction. That is, the preset directions include the horizontal direction (E, W), the vertical direction (N, S), the oblique 22.5° direction (AD), the reverse-oblique 22.5° direction (DU), the oblique 45° direction (A), the reverse-oblique 45° direction (D), the oblique 67.5° direction (AU), and the reverse-oblique 67.5° direction (DD). The first direction may include the oblique 22.5° direction (AD), the reverse-oblique 22.5° direction (DU), the oblique 45° direction (A), the reverse-oblique 45° direction (D), the oblique 67.5° direction (AU), and the reverse-oblique 67.5° direction (DD). It is to be understood that in the above embodiments the preset directions include 8 directions; in other embodiments the preset directions may be divided into 16 or more directions, which is not limited herein.
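One common way to realize a minimum-gradient direction test is to sum absolute pixel differences along each candidate direction and take the direction with the smallest sum. The sketch below is a simplified assumption covering only four of the preset directions (the 22.5°/67.5° cases would be added analogously); it is not the patent's exact gradient definition:

```python
import numpy as np

def texture_direction(patch):
    """Pick the direction with the smallest pixel gradient.

    Illustrative sketch: only the horizontal, vertical, and the two
    45-degree diagonal directions are evaluated here.
    """
    grads = {
        # horizontal (E/W): differences between horizontal neighbours
        "H": np.abs(np.diff(patch, axis=1)).sum(),
        # vertical (N/S): differences between vertical neighbours
        "V": np.abs(np.diff(patch, axis=0)).sum(),
        # oblique 45 degrees: differences along the anti-diagonal
        "A": np.abs(patch[1:, :-1] - patch[:-1, 1:]).sum(),
        # reverse-oblique 45 degrees: differences along the main diagonal
        "D": np.abs(patch[1:, 1:] - patch[:-1, :-1]).sum(),
    }
    return min(grads, key=grads.get)

# A patch with horizontal stripes: differences along a row are zero,
# so the texture runs along the horizontal direction.
stripes = np.tile(np.array([[0.0], [1.0]]), (3, 7))[:7, :]
print(texture_direction(stripes))  # -> "H"
```

With 16 preset directions, the same argmin structure applies; only the sampling offsets per direction change.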
The preset neural network may be a neural network trained in advance. The target color may be any one of the plurality of colors of the image to be processed. In some embodiments, the target color is the color with the highest proportion in the image to be processed, such as G or Y.
The preset color may be a color other than the target color among the plurality of colors of the image to be processed. In one example, the image to be processed includes R, G, B, C, M, Y, and when the target color is G, the preset color includes R, B, C, M, Y.
It should be noted that the preset colors include a plurality of types, and the second full-size image determined in step 017 includes a plurality of types, each of the preset colors corresponding to one of the second full-size images, and only one of the preset colors exists in each of the second full-size images. The size of the first full-size image and the size of the second full-size image are both the same as the size of the image to be processed, and the number of pixels of the first full-size image and the number of pixels of the second full-size image are both the same as the number of pixels of the image to be processed.
In some embodiments, the image to be processed includes a plurality of pixel blocks, each pixel block includes a plurality of pixel units, each pixel unit includes two first pixels corresponding to one color in a first color space and two second pixels corresponding to one color in a second color space, the two first pixels are arranged along a first diagonal direction, the two second pixels are arranged along a second diagonal direction, and the target color is one color in the first color space or the second color space.
It can be understood that, compared with the to-be-processed image only including the color channel corresponding to the first color space in the related art, the to-be-processed image of the present application includes both the color channel corresponding to the first color space and the color channel corresponding to the second color space, and the increase of the number of the color channels means that the white balance reference information is doubled, so that the accuracy of gray point detection and light source analysis is greatly improved, and more accurate white balance judgment or other image processing functions are facilitated to be realized.
Specifically, each pixel block may include R pixels, G pixels, B pixels, M pixels, Y pixels, and C pixels, wherein the proportion of the G pixels and the Y pixels is each 25%, and the proportion of the R pixels, the B pixels, the C pixels, and the M pixels is each 12.5%. The first diagonal direction and the second diagonal direction are only used to describe that the arrangement directions of the first pixels and the second pixels are not consistent, and do not refer to fixed diagonal directions. The first diagonal direction may be a 45 ° oblique direction and the second diagonal direction may be a 45 ° reverse oblique direction. In other embodiments, the pixels in the pixel unit may be pixels of other colors and/or arranged in other manners, which is not limited herein.
In this embodiment, the color of a pixel refers to the color of a filter corresponding to the pixel, the color of a first pixel is the color of a first filter corresponding to the first pixel, and the color of a second pixel is the color of a second filter corresponding to the second pixel.
Referring to fig. 6, in some embodiments, the partial image includes a plurality of color channels corresponding to a first color space and a second color space, and step 011 includes:
0111: determining a pixel variance value of each color channel in the local image according to the pixel mean value of each color channel in the local image and the pixel value of each color channel;
0113: the sum of the pixel variance values of each color channel is calculated as the pixel variance total value of the partial image.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the first determination module 11 includes a first determination unit and a first calculation unit. The first determining unit is used for determining the pixel variance value of each color channel in the local image according to the pixel mean value of each color channel in the local image and the pixel value of each color channel. The first calculation unit is used for calculating the sum value of the pixel variance values of each color channel to serve as the pixel variance total value of the local image.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to determine a pixel variance value of each color channel in the local image according to a pixel mean value of each color channel in the local image and a pixel value of each color channel, and to calculate a sum of the pixel variance values of each color channel as a pixel variance total value of the local image.
In this way, the total pixel variance value of the local image can be calculated from the pixel variance values of the color channels in the local image.
In particular, the pixel variance value σ_c of a color channel c can be expressed by the following formula:

σ_c = (1/N) · Σ_{i=1}^{N} (pixel_value_{c,i} − u_c)²

where c denotes a color channel, N denotes the total number of pixels of the local image, pixel_value_{c,i} denotes the pixel value of the i-th pixel of channel c in the local image, and u_c denotes the pixel mean of channel c in the local image.

Further, the pixel variance total value σ can be expressed as σ = Σ_c σ_c. In some embodiments, when the pixel variance total value is smaller than a preset threshold, the local image centered on the current pixel is determined to be a flat area, and the pixel value of the target color at the position of the current pixel is interpolated directly according to the color-ratio-constancy relationship between the current pixel and the target-color pixels; when the pixel variance total value is greater than or equal to the preset threshold, the local image centered on the current pixel is determined to be a texture area, the texture direction of the local image is further calculated, and a corresponding method is selected according to the texture direction to interpolate the pixel value of the target color at the position of the current pixel. The interpolation of the pixel value of the target color at the position of the current pixel thus differs between the flat-area and texture-area cases.
In some embodiments, the preset threshold may be dynamically set according to the screen brightness, for example, when the screen brightness is 100, the preset threshold may be set to 10, when the screen brightness is 200, the preset threshold may be set to 15, and the like.
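The flat-vs-texture decision above can be sketched in a few lines. In this sketch the per-channel extraction via a color-name mask, the mosaic, and the threshold value are illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

def pixel_variance_total(patch, color_mask):
    """Sum of per-channel pixel variances over a local image.

    patch      : 2-D array of raw pixel values (the local image)
    color_mask : 2-D array of the same shape naming the color channel
                 of each pixel (e.g. "R", "G", "Y", ...)
    """
    total = 0.0
    for c in np.unique(color_mask):
        vals = patch[color_mask == c]
        u_c = vals.mean()                    # pixel mean of channel c
        total += ((vals - u_c) ** 2).mean()  # variance sigma_c
    return total

# Hypothetical 2x2 mosaic repeated over a 4x4 local image.
mask = np.tile(np.array([["G", "M"], ["Y", "G"]]), (2, 2))
flat_patch = np.full((4, 4), 100.0)          # perfectly flat area
print(pixel_variance_total(flat_patch, mask))  # 0.0 -> flat area

threshold = 10.0                             # illustrative preset threshold
textured = flat_patch + np.arange(16.0).reshape(4, 4) * 5
is_texture = pixel_variance_total(textured, mask) >= threshold
print(is_texture)  # True -> next compute the texture direction
```

When `is_texture` is false, the flat-area interpolation of steps 0191/0193 applies; otherwise the texture direction selects the interpolation method.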
Referring to fig. 7, in some embodiments, before step 015, the image processing method further includes:
019: and when the total pixel variance value is smaller than a preset threshold value, determining the pixel value of the target color at the position of the current pixel according to all pixels with the colors being the target color in the local image and all pixels with the colors being the same as the color of the current pixel.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the image processing apparatus 100 further includes a fourth determination module. And the fourth determining module is used for determining the pixel value of the target color at the position of the current pixel according to all pixels with the color being the target color and all pixels with the color being the same as that of the current pixel in the local image when the total pixel variance value is smaller than the preset threshold value.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to determine, when the total pixel variance value is smaller than the preset threshold, the pixel value of the target color at the position of the current pixel according to all pixels in the local image whose color is the target color and all pixels whose colors are the same as the color of the current pixel.
Thus, when the local image is characterized as a flat area, the pixel value of the target color at the position of the current pixel can be determined.
Specifically, the local image is characterized as a flat area if the total value of the pixel variance is smaller than a preset threshold.
In one example, if the current pixel is a red pixel and the target color is green, when the total pixel variance value is smaller than the preset threshold, the value of the green pixel at the position of the current pixel is determined according to all the green pixels and all the red pixels in the local image.
Referring to FIG. 8, in some embodiments, step 019 includes:
0191: determining a first color ratio constant according to a first pixel mean value of all pixels with the color of the target color in the local image and a second pixel mean value of all pixels with the color same as that of the current pixel in the local image;
0193: and determining the pixel value of the target color at the position of the current pixel according to the first color ratio constant and the pixel value of the current pixel.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the fourth determination module includes a second determination unit and a third determination unit. The second determining unit is used for determining a first color ratio constant according to a first pixel mean value of all pixels with the color of the target color in the local image and a second pixel mean value of all pixels with the color same as that of the current pixel in the local image. The third determining unit is used for determining the pixel value of the target color at the position of the current pixel according to the first color ratio constant and the pixel value of the current pixel.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to determine a first color ratio constant according to a first pixel average value of all pixels in the local image, the color of which is the target color, and a second pixel average value of all pixels in the local image, the color of which is the same as that of the current pixel, and is configured to determine a pixel value of the target color at the position of the current pixel according to the first color ratio constant and the pixel value of the current pixel.
Therefore, when the local image is characterized as a flat area, the pixel value of the target color at the position of the current pixel can be directly determined according to the color ratio constancy relation of the local pixel.
Specifically, the first pixel mean value is the pixel mean value of all pixels in the local image whose color is the target color. The second pixel mean value is the pixel mean value of all pixels in the local image whose color is the same as that of the current pixel. The first color ratio constant is the ratio of the first pixel mean value to the second pixel mean value. Further, the product of the first color ratio constant and the pixel value of the current pixel may be used as the pixel value of the target color at the position of the current pixel.
In one example, the target color is green, the current pixel is a red pixel, and the coordinates of the current pixel are (5,5). The first color ratio constant can be expressed by the following formula: ratio_RG = mean_G/mean_R, where mean_G represents the first pixel mean value of all green pixels in the local image and mean_R represents the second pixel mean value of all red pixels in the local image, so that the green pixel value at coordinate (5,5) is: G(5,5) = R(5,5) × ratio_RG, where R(5,5) denotes the pixel value of the current pixel.
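As an illustration, the flat-area interpolation of step 019 can be sketched in a few lines of Python. The helper name `interpolate_flat` and the array-based color labels are assumptions for this sketch, not part of the patent:

```python
import numpy as np

def interpolate_flat(local_img, colors, cur_pos, cur_color, target_color):
    """Flat-area interpolation via local color-ratio constancy (step 019).

    local_img : 2D array of raw pixel values of the local image
    colors    : 2D array of per-pixel color labels, e.g. 'R', 'G'
    cur_pos   : (row, col) of the current pixel
    """
    mean_target = local_img[colors == target_color].mean()  # first pixel mean
    mean_cur = local_img[colors == cur_color].mean()        # second pixel mean
    ratio = mean_target / mean_cur                          # first color ratio constant
    return local_img[cur_pos] * ratio
```

With mean_G = 100 and mean_R = 50, a red pixel of value 50 yields a green value of 100, mirroring the G(5,5) = R(5,5) × ratio_RG example above.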
Referring to fig. 9, in some embodiments, before step 015, the image processing method further includes:
021: when the total pixel variance value is greater than or equal to the preset threshold and the texture direction of the local image is the second direction, determining the pixel value of the target color at the position of the current pixel according to the pixels in the second direction whose color is the target color.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the image processing apparatus 100 further includes a fifth determination module. The fifth determining module is configured to determine a pixel value of the target color at the position of the current pixel according to the pixel, of which the color in the second direction is the target color, when the total pixel variance value is greater than or equal to the preset threshold and the texture direction of the local image is the second direction.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to determine a pixel value of the target color at the position of the current pixel according to the pixel whose color in the second direction is the target color when the total pixel variance value is greater than or equal to the preset threshold and the texture direction of the local image is the second direction.
Thus, when the local image is characterized as a texture region and the texture direction is the second direction, the pixel value of the target color at the position of the current pixel can be determined.
Specifically, the total value of the pixel variance is greater than or equal to a preset threshold value, that is, the local image is characterized as a texture area. The second direction may be understood as a direction other than the first direction among the preset directions.
The following description takes as an example the case where the preset directions include a horizontal direction (E, W), a vertical direction (N, S), a 22.5° oblique direction (AD), a reverse 22.5° oblique direction (DU), a 45° oblique direction (A), a reverse 45° oblique direction (D), a 67.5° oblique direction (AU), and a reverse 67.5° oblique direction (DD); the first direction includes the 22.5° oblique direction (AD), the reverse 22.5° oblique direction (DU), the 45° oblique direction (A), the reverse 45° oblique direction (D), the 67.5° oblique direction (AU), and the reverse 67.5° oblique direction (DD); and the second direction includes the horizontal direction and the vertical direction. It is understood that in other embodiments, the second direction may be other preset directions, such as the 45° oblique direction and the reverse 45° oblique direction, which is not limited herein.
It can be understood that, when the local image is characterized as a texture region and the texture direction is the horizontal or vertical direction, interpolating the pixel value of the target color at the position of the current pixel with the preset neural network may produce an undesirable effect. In this case, the pixel value of the target color at the position of the current pixel can instead be determined according to step 021, so that a better interpolation effect is obtained.
In one example, the current pixel is a red pixel, the target color is green, and the texture direction of the local image is the horizontal direction (one of the second directions). When the total pixel variance value is greater than or equal to the preset threshold, the green pixel value at the position of the current pixel is determined according to the green pixels in the horizontal direction in the local image.
Referring to fig. 10, taking the second direction as the horizontal direction or the vertical direction as an example for description, it can be understood that, for a current pixel whose color is not the target color, according to the total pixel variance value of the local image where the current pixel is located and the texture direction of the local image, a corresponding one of the above steps 013, 019, and 021 may be selected to determine the pixel value of the target color at the location where the current pixel is located.
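The three-way selection among steps 019, 021 and 013 described above can be sketched as a small dispatch function; the function and return-value names are illustrative only:

```python
def choose_interpolation(total_variance, threshold, texture_dir,
                         second_directions=('horizontal', 'vertical')):
    """Route a pixel whose color is not the target color to one of the
    three interpolation steps, based on local variance and texture direction."""
    if total_variance < threshold:
        return 'step_019_flat_ratio'           # flat area: color-ratio constancy
    if texture_dir in second_directions:
        return 'step_021_directional_average'  # textured, second direction
    return 'step_013_neural_network'           # textured, first direction
```

The default `second_directions` tuple reflects the embodiment where the second direction is horizontal or vertical; other embodiments would pass a different tuple.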
Referring to FIG. 11, in some embodiments, step 021 includes:
0211: determining two pixels with the color closest to the current pixel in the second direction as the target color by taking the current pixel as the center;
0213: and taking the average value of the pixel values of the two pixels as the pixel value of the target color at the position of the current pixel.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the fifth determination module includes a fourth determination unit and a second calculation unit. The fourth determining unit is configured to determine two pixels, of which a color closest to the current pixel in the second direction is the target color, with the current pixel as a center. The second calculating unit is used for taking the average value of the pixel values of the two pixels as the pixel value of the target color at the position of the current pixel.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to determine, with the current pixel as a center, two pixels having a color closest to the current pixel in the second direction as a target color, and to use an average value of pixel values of the two pixels as a pixel value of the target color at a position where the current pixel is located.
Thus, when the local image is characterized as a texture region and the texture direction is the second direction, the pixel value of the target color at the position of the current pixel can be determined according to the two pixels, of which the color closest to the current pixel in the second direction is the target color.
Referring to fig. 12, in one example, the target color is green, the texture direction of the local image is the horizontal direction (E, W) in the second direction, and the coordinates of the current pixel are (5,5). With the current pixel as the center, the two green pixels closest to the current pixel in the horizontal direction are determined to be G(5,3) and G(5,7), so that the green pixel value at coordinate (5,5) can be expressed as: G(5,5) = (G(5,3) + G(5,7))/2.
With continuing reference to fig. 12, in another example, the target color is green, the texture direction of the local image is the vertical direction (N, S) in the second direction, and the coordinates of the current pixel are (5,5). With the current pixel as the center, the two green pixels closest to the current pixel in the vertical direction are determined to be G(3,5) and G(7,5), so that the green pixel value at coordinate (5,5) can be expressed as: G(5,5) = (G(3,5) + G(7,5))/2.
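As a sketch of step 021, the search for the two nearest target-color pixels along the texture direction and their averaging might look as follows in Python (the helper name and the color-label encoding are assumptions for illustration):

```python
import numpy as np

def directional_average(raw, colors, row, col, target_color, direction):
    """Average the two target-color pixels nearest the current pixel
    along the second direction (horizontal or vertical)."""
    dr, dc = (0, 1) if direction == 'horizontal' else (1, 0)
    vals = []
    for sign in (-1, 1):                 # walk outward on both sides
        r, c = row, col
        while True:
            r += sign * dr
            c += sign * dc
            if not (0 <= r < raw.shape[0] and 0 <= c < raw.shape[1]):
                raise ValueError('no target-color pixel in this direction')
            if colors[r, c] == target_color:
                vals.append(raw[r, c])
                break
    return (vals[0] + vals[1]) / 2
```

For a row laid out as G, R, G this reproduces the G(5,5) = (G(5,3) + G(5,7))/2 form of the example above.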
Referring to fig. 13, in some embodiments, step 013 includes:
0131: determining a third full-size image with a target color according to the image to be processed and a preset neural network;
0133: and determining the pixel value of the target color at the position of the current pixel according to the third full-size image.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the second determination module 13 includes a fifth determination unit and a sixth determination unit. The fifth determining unit is used for determining a third full-size image with the target color according to the image to be processed and a preset neural network. And the sixth determining unit is used for determining the pixel value of the target color at the position of the current pixel according to the third full-size image.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to determine a third full-size image with a target color according to the image to be processed and the preset neural network, and is configured to determine a pixel value of the target color at a position where the current pixel is located according to the third full-size image.
Therefore, when the total pixel variance value is larger than or equal to the preset threshold value and the texture direction of the local image is the first direction, the preset neural network is adopted to determine the pixel value of the target color at the position of the current pixel, and a good effect can be obtained.
Specifically, the size of the third full-size image is the same as the size of the image to be processed, and the number of pixels of the third full-size image is the same as the number of pixels of the image to be processed. In step 0133, the pixel value of the pixel in the third full-size image with the same coordinate as the current pixel is taken as the pixel value of the target color at the position of the current pixel. It can be understood that although the color of each pixel in the third full-size image is the target color, the third full-size image cannot be directly used as the first full-size image, and only when the total pixel variance value of the local image corresponding to the current pixel is greater than or equal to the preset threshold and the texture direction is the first direction, the pixel value of the pixel in the third full-size image with the same coordinate as the current pixel can be used as the pixel value of the pixel of the target color at the position of the current pixel.
In an example, if the coordinate of the current pixel is (5,5), the total pixel variance value of the local image corresponding to the current pixel is equal to the preset threshold, and the texture direction of the local image corresponding to the current pixel is the first direction, the pixel value of the pixel with the coordinate of (5,5) in the third full-size image is taken as the pixel value of the target color at the position where the current pixel is located.
Referring to fig. 14, in some embodiments, the image to be processed includes a plurality of predetermined windows, each of the predetermined windows includes a plurality of pixel blocks, and step 0131 includes:
01311: merging pixels at the same position in each pixel block in a preset window to serve as input sub-images;
01313: and determining a third full-size image with the target color according to the input subgraph and the preset neural network.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the fifth determination unit includes a generation subunit and a determination subunit. The generation subunit is used for merging pixels at the same position in each pixel block in a preset window to serve as an input sub-image. The determining subunit is used for determining a third full-size image with the target color according to the input subgraph and the preset neural network.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to combine pixels at the same position in each pixel block within the preset window to serve as an input subgraph, and is configured to determine a third full-size image with a target color according to the input subgraph and a preset neural network.
In this manner, a third full-size image having the target color is determined based on the preset neural network.
Specifically, the size of the preset window is an integer multiple of the size of the pixel block, for example, when the size of the pixel block is 4 × 4, the size of the preset window may be 8 × 8, 16 × 16, 32 × 32, 64 × 64, and the like. It is understood that when the preset window has a size of 8 × 8, one preset window includes 4 pixel blocks; when the size of the preset window is 16 x 16, one preset window comprises 16 pixel blocks; when the size of the preset window is 32 × 32, one preset window comprises 64 pixel blocks; when the size of the preset window is 64 × 64, one preset window includes 256 pixel blocks. In some embodiments, the size of the predetermined window is not less than 32 × 32, which ensures that a better operation result is obtained.
The input subgraph can be understood as a set of pixels located at the same position in each pixel block within a preset window. Since each pixel block includes a plurality of pixels, a plurality of input subgraphs can be generated from one preset window. The number of input subgraphs is the same as the number of pixels in a pixel block, the number of pixels in one input subgraph is the same as the number of pixel blocks in the preset window, and all pixels of one input subgraph have the same color. All input subgraphs corresponding to one preset window are input into the preset neural network to obtain the part of the third full-size image corresponding to that preset window; performing this processing on all input subgraphs corresponding to every preset window of the image to be processed yields the complete third full-size image corresponding to the image to be processed.
With reference to fig. 15 and fig. 16, the following describes the technical solution of the above embodiment in detail by taking an example that the size of the pixel block is 4 × 4 and the size of the preset window is 12 × 12.
Fig. 15 is a schematic diagram of a pixel block 500, and in the example of fig. 15, one pixel block 500 includes one first pixel unit 502, two second pixel units 504, and one third pixel unit 506, and in the first pixel unit 502, the first pixel is a red pixel, and the second pixel is a magenta pixel; in the second pixel unit 504, the first pixel is a green pixel, and the second pixel is a yellow pixel; in the third pixel unit 506, the first pixel is a blue pixel, and the second pixel is a cyan pixel. The pixel block includes 16 positions, and according to the position, the red pixels may be denoted as R2 and R5, the magenta pixels may be denoted as M1 and M6, the green pixels may be denoted as G4, G7, G10, and G13, the yellow pixels may be denoted as Y3, Y8, Y9, and Y14, the blue pixels may be denoted as B12 and B15, and the cyan pixels may be denoted as C11 and C16.
Fig. 16 is a schematic diagram of a scene for generating an input subgraph according to preset windows, in the example of fig. 16, one preset window includes 9 pixel blocks. The 16 input subgraphs can be obtained by gathering the pixels at the same position in each pixel block into one input subgraph. Each input sub-picture comprises 9 pixels of the same color. Further, the 16 input sub-images are input into a preset neural network as input images, and a part of third full-size images corresponding to the preset window can be obtained.
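The gathering of same-position pixels into input subgraphs can be expressed compactly with NumPy reshape/transpose; this is a hypothetical sketch assuming a 4 × 4 pixel block, not code from the patent:

```python
import numpy as np

def window_to_subgraphs(window, block=4):
    """Gather the pixels at the same position of each block into one sub-image.

    window : (H, W) raw window, H and W multiples of `block`
    returns: (block*block, H//block, W//block) stack of input subgraphs
    """
    h, w = window.shape
    # axes become (block_row, in_block_row, block_col, in_block_col)
    t = window.reshape(h // block, block, w // block, block)
    # bring the in-block position to the front: one subgraph per (i, j)
    return t.transpose(1, 3, 0, 2).reshape(block * block, h // block, w // block)
```

For a 12 × 12 window of 4 × 4 blocks this produces 16 subgraphs of 9 pixels each, matching the fig. 16 example; subgraph k collects position (k // 4, k % 4) of every block.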
Referring to fig. 17, in some embodiments, the predetermined neural network includes a convolutional neural network, the convolutional neural network includes a first network structure, a second network structure, a third network structure, a fourth network structure, a fifth network structure, a sixth network structure and a branch network structure, the first network structure, the third network structure, the fourth network structure, the sixth network structure and the branch network structure each include a convolution kernel with a size of 3 × 3, and the second network structure and the fifth network structure each include a convolution kernel with a size of 1 × 1.
Specifically, the output end of the first network structure is connected to the input end of the second network structure, the output end of the second network structure is connected to the input end of the third network structure, the output end of the third network structure is connected to the input end of the fourth network structure, the output end of the fourth network structure is connected to the input end of the fifth network structure, and the output end of the fifth network structure is connected to the input end of the sixth network structure. The first network structure comprises a convolution kernel of 3 x 3, the input of the first network structure is 16 input subgraphs, and the output of the first network structure is 256 first output subgraphs; the second network structure comprises a convolution kernel of 1 x 1, the input of the second network structure is 256 first output subgraphs, and the output of the second network structure is 128 second output subgraphs; the third network structure comprises a convolution kernel of 3 x 3, the input of the third network structure is 128 second output subgraphs, and the output of the third network structure is 128 third output subgraphs; the fourth network structure comprises a convolution kernel of 3 x 3, the input of the fourth network structure is 128 third output subgraphs, and the output of the fourth network structure is 128 fourth output subgraphs; the fifth network structure comprises a convolution kernel of 1 x 1, the input of the fifth network structure is 128 fourth output subgraphs, and the output of the fifth network structure is 128 fifth output subgraphs; the sixth network structure includes 3 × 3 convolution kernels, the input of the sixth network structure is 128 fifth output subgraphs, and the output of the sixth network structure is 16 sixth output subgraphs.
The branch network structure connects the input of the first network structure and the output of the sixth network structure, and includes a convolution kernel with a size of 3 × 3. The 16 input subgraphs fed into the convolutional neural network are processed by the branch network structure and then added to the 16 sixth output subgraphs output by the sixth network structure to form the output of the convolutional neural network. In this way, based on the residual network structure, the fitting capability of the convolutional neural network is improved.
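To illustrate the residual connection, the following toy sketch runs a single-channel main path and a 3 × 3 branch path and sums them at the output; the real network operates on 16 input subgraphs with 256- and 128-channel intermediate features, which this single-channel sketch deliberately omits:

```python
import numpy as np

def conv3x3_same(x, k):
    """Naive 3x3 'same' convolution (zero padding) on a 2D feature map."""
    h, w = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = (xp[i:i + 3, j:j + 3] * k).sum()
    return out

def residual_forward(x, main_kernels, branch_kernel):
    """Main path (a chain of 3x3 convs standing in for the six network
    structures) plus a 3x3 branch path, summed at the output."""
    y = x
    for k in main_kernels:
        y = conv3x3_same(y, k)
    return y + conv3x3_same(x, branch_kernel)
```

With identity kernels on both paths the output is simply 2 × input, which makes the skip-connection structure easy to verify by hand.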
Referring to FIG. 18, in some embodiments, the target color comprises green, and the predetermined colors comprise red, blue, cyan, magenta, and yellow, and step 017 comprises:
0171: taking the first full-size image as a guide image, and interpolating red pixel values of positions of other pixels except red pixels in the image to be processed in a filtering mode to obtain a red second full-size image;
0173: taking the first full-size image as a guide image, and interpolating blue pixel values of positions of other pixels except the blue pixel in the image to be processed in a filtering mode to obtain a blue second full-size image;
0175: taking the first full-size image as a guide image, and interpolating cyan pixel values of positions of other pixels except the cyan pixel in the image to be processed in a filtering mode to obtain a cyan second full-size image;
0177: taking the first full-size image as a guide image, and interpolating magenta pixel values of positions of other pixels except for magenta pixels in the image to be processed in a filtering mode to obtain a magenta second full-size image;
0179: and taking the first full-size image as a guide image, and interpolating yellow pixel values of positions of other pixels except the yellow pixel in the image to be processed in a filtering mode to obtain a yellow second full-size image.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the third determining module 17 includes a first filtering unit, a second filtering unit, a third filtering unit, a fourth filtering unit, and a fifth filtering unit. The first filtering unit is used for taking the first full-size image as a guide image and interpolating red pixel values of positions of other pixels except red pixels in the image to be processed in a filtering mode to obtain a red second full-size image. The second filtering unit is used for taking the first full-size image as a guide image and interpolating blue pixel values of positions of other pixels except the blue pixel in the image to be processed in a filtering mode to obtain a blue second full-size image. The third filtering unit is used for taking the first full-size image as a guide image and interpolating cyan pixel values of positions of other pixels except the cyan pixel in the image to be processed in a filtering mode to obtain a cyan second full-size image. The fourth filtering unit is used for taking the first full-size image as a guide image and interpolating magenta pixel values of positions of other pixels except the magenta pixel in the image to be processed in a filtering mode to obtain a magenta second full-size image. The fifth filtering unit is used for taking the first full-size image as a guide image, and interpolating yellow pixel values of positions of other pixels except the yellow pixel in the image to be processed in a filtering mode to obtain a yellow second full-size image.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to take the first full-size image as a guide image and interpolate red pixel values at the positions of pixels other than the red pixels in the image to be processed in a filtering manner to obtain a red second full-size image; to take the first full-size image as the guide image and interpolate blue pixel values at the positions of pixels other than the blue pixels in the image to be processed in a filtering manner to obtain a blue second full-size image; to take the first full-size image as the guide image and interpolate cyan pixel values at the positions of pixels other than the cyan pixels in the image to be processed in a filtering manner to obtain a cyan second full-size image; to take the first full-size image as the guide image and interpolate magenta pixel values at the positions of pixels other than the magenta pixels in the image to be processed in a filtering manner to obtain a magenta second full-size image; and to take the first full-size image as the guide image and interpolate yellow pixel values at the positions of pixels other than the yellow pixels in the image to be processed in a filtering manner to obtain a yellow second full-size image.
In this manner, in the case where the first full-size image having the target color has been obtained, a plurality of kinds of second full-size images having preset colors can be obtained by means of filtering.
In particular, the manner of filtering may include guided filtering or joint bilateral filtering. The guiding filtering and the joint bilateral filtering are two different fusion strategies, and both of the guiding filtering and the joint bilateral filtering can be used for guiding a color channel to be interpolated and filling up missing pixel values. The following describes the embodiments of the present application in detail by taking the joint bilateral filtering as an example. It is to be understood that the technical solution of the embodiments of the present application may also be implemented by using guided filtering, and is not limited herein.
With reference to fig. 19, the image corresponding to the color channel of each preset color (for example, magenta) in the image to be processed is respectively used as the image I to be filtered, the first full-size image is used as the guide image I', and the second full-size image (the output image J) is obtained after the joint bilateral filtering. The process of joint bilateral filtering can be represented by the following equation:
Jp = (1/kp) · ∑q∈Ω f(||p-q||) · g(||I′p-I′q||) · Iq

kp = ∑q∈Ω f(||p-q||) · g(||I′p-I′q||)

wherein Jp is the pixel value of the output image, kp is the sum of the weights, Ω is the filtering window (which may be 7 × 7), p is the coordinate of the pixel to be filtered in the image to be filtered, q is the coordinate of a pixel within the filtering window in the image to be filtered, Iq is the pixel value corresponding to point q, I′p is the pixel value in the guide image corresponding to the pixel to be filtered, and I′q is the pixel value in the guide image corresponding to point q. f represents the weight corresponding to each coordinate of the filtering window and is fixed; the closer to the center of the filtering window, the larger the weight. g represents the weight of the difference between the pixel at another position and the central pixel; the larger the difference, the smaller the weight.
Further, taking the magenta color channel as an example, if the coordinate of the pixel to be filtered is (i, j), the interpolated pixel value of the magenta pixel at coordinate (i, j) can be expressed as: M(i, j) = G(i, j) × meanM/meanG, where meanG = sum(sum(HF·I′)) and meanM = sum(sum(HF·I)). I′ denotes the G pixel window (which may be 7 × 7) of the guide image. I denotes the M pixel window (which may be 7 × 7) of the image to be filtered; where there is no M pixel, the matrix value is 0. meanG denotes the weighted sum of the G pixel part, and meanM denotes the weighted sum of the M pixel part. H denotes the distance weight matrix, and F denotes the pixel difference weight matrix. HF denotes the dot product of H and F.
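A minimal joint bilateral filtering sketch in NumPy, assuming Gaussian forms for the weights f and g (the patent does not fix their exact form) and a validity mask marking where the sparse channel holds samples:

```python
import numpy as np

def joint_bilateral(I, Ig, mask, win=7, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral interpolation of a sparse color channel.

    I    : sparse channel to fill (0 where the color is absent)
    Ig   : dense guide image (the first full-size image)
    mask : 1 where I holds a valid sample, 0 elsewhere
    """
    r = win // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    f = np.exp(-(yy**2 + xx**2) / (2 * sigma_s**2))   # spatial weight f
    h, w = I.shape
    Ip, Gp, Mp = np.pad(I, r), np.pad(Ig, r), np.pad(mask, r)
    out = np.zeros_like(I, dtype=float)
    for i in range(h):
        for j in range(w):
            # range weight g compares guide values against the center pixel
            g = np.exp(-(Gp[i:i + win, j:j + win] - Gp[i + r, j + r])**2
                       / (2 * sigma_r**2))
            wgt = f * g * Mp[i:i + win, j:j + win]    # only valid samples count
            out[i, j] = (wgt * Ip[i:i + win, j:j + win]).sum() / (wgt.sum() + 1e-12)
    return out
```

On a constant guide and a constant sparse channel the filter reproduces the constant everywhere, which is a quick sanity check on the weight normalization.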
Referring to fig. 20, in some embodiments, after the step 011, the image processing method further includes:
023: determining a pixel gradient value of a local image in a preset direction;
025: and taking the preset direction with the minimum pixel gradient value as the texture direction of the local image.
The image processing method according to the above embodiment can be realized by the image processing apparatus 100 according to the present embodiment. Specifically, the image processing apparatus 100 further includes a sixth determination module and a seventh determination module. The sixth determining module is used for determining the pixel gradient value of the local image in the preset direction. The seventh determining module is configured to use the preset direction with the smallest pixel gradient value as the texture direction of the local image.
The image processing method according to the above embodiment can be implemented by the electronic device 1000 according to the embodiment of the present application. Specifically, the processor 200 is configured to determine a pixel gradient value of the local image in a preset direction, and to use the preset direction in which the pixel gradient value is minimum as a texture direction of the local image.
In this way, the direction in which the gradient value of the pixel is smallest in the preset direction can be determined.
Specifically, when the filter set is arranged as shown in fig. 4, the preset directions may include a horizontal direction (E, W), a vertical direction (N, S), a 22.5° oblique direction (AD), a reverse 22.5° oblique direction (DU), a 45° oblique direction (A), a reverse 45° oblique direction (D), a 67.5° oblique direction (AU), and a reverse 67.5° oblique direction (DD). The first direction may include the 45° oblique direction and the reverse 45° oblique direction. The second direction is a preset direction other than the first direction; when the first direction includes the 45° oblique direction and the reverse 45° oblique direction, the second direction may include the horizontal direction, the vertical direction, the 22.5° oblique direction, the reverse 22.5° oblique direction, the 67.5° oblique direction, and the reverse 67.5° oblique direction.
The pixel gradient value may be determined by accumulating the absolute values of the differences between the pixels. For example, referring to fig. 5, the pixel gradient value grad_E in the horizontal direction E can be determined by the following formula: grad_E = abs(raw(5,5)-raw(5,9)) + abs(raw(4,5)-raw(4,9)) + abs(raw(6,5)-raw(6,9)) + abs(raw(6,5)-raw(5,6)-raw(5,8)) + abs(raw(4,5)-raw(4,6)+raw(2,6)) + abs(raw(6,5)-raw(6,6)+raw(6,7)). The pixel gradient value grad_N in the vertical direction N can be determined by the following formula: grad_N = abs(raw(5,5)-raw(1,5)) + abs(raw(5,6)-raw(1,6)) + abs(raw(5,4)-raw(1,4)) + abs(raw(5,5)-raw(2,5)-raw(4,5)) + abs(raw(5,6)-raw(3,6)-raw(2,6)) + abs(raw(5,4)-raw(3,4)-raw(2,4)). The pixel gradient value grad_A in the 45° oblique direction A can be determined by the following formula: grad_A = abs(raw(5,5)-raw(1,9)) + abs(raw(5,5)-raw(9,1)) + abs(raw(4,6)-raw(6,4)) + abs(raw(7,3)-raw(3,7)) + abs(raw(7,4)-raw(4,7)) + abs(raw(6,3)-raw(3,6)) + abs(raw(8,4)-raw(4,8)). The pixel gradient value grad_AD in the 22.5° oblique direction AD can be determined by the following formula: grad_AD = abs(raw(5,4)-raw(4,9)) + abs(raw(6,3)-raw(5,8)) + abs(raw(6,4)-raw(5,7)) + abs(raw(5,3)-raw(4,6)) + abs(raw(5,5)-raw(6,2)) + abs(raw(4,4)-raw(3,7)) + abs(raw(4,3)-raw(3,8)).
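The accumulated-absolute-difference pattern above can be sketched generically. A minimal illustration follows; the window contents and the sampled coordinate pairs are made up for demonstration and are not the patent's exact taps.

```python
# Illustrative sketch: a directional pixel gradient as an accumulated sum
# of absolute differences between pixel pairs sampled along one direction
# in a local window `raw` (here a plain 2-D list).

def directional_gradient(raw, pairs):
    """Sum of absolute differences over the given coordinate pairs.

    raw   -- 2-D list of pixel values (local window)
    pairs -- list of ((r1, c1), (r2, c2)) coordinate pairs along a direction
    """
    return sum(abs(raw[r1][c1] - raw[r2][c2]) for (r1, c1), (r2, c2) in pairs)

# Example: a 3x3 window with a vertical edge -> the horizontal gradient
# (differencing across columns) is large, the vertical one is zero, so the
# vertical direction would be chosen as the texture direction.
window = [
    [10, 10, 90],
    [10, 10, 90],
    [10, 10, 90],
]
horizontal_pairs = [((r, 0), (r, 2)) for r in range(3)]
vertical_pairs = [((0, c), (2, c)) for c in range(3)]

grad_h = directional_gradient(window, horizontal_pairs)  # 3 * |10 - 90| = 240
grad_v = directional_gradient(window, vertical_pairs)    # 0
```

The real method accumulates many more pairs per direction, but every term has this abs-of-difference shape.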
In some embodiments, let Grad = [grad_N, grad_S, grad_E, grad_W, grad_AD, grad_A, grad_AU, grad_DD, grad_D, grad_DU], and calculate [MinGrad, Dir] = min(Grad). If Dir = 1 or 2, the texture direction may be determined to be the vertical direction; if Dir = 3 or 4, the horizontal direction; if Dir = 5, the 22.5° oblique direction; if Dir = 6, the 45° oblique direction; if Dir = 7, the 67.5° oblique direction; if Dir = 8, the reverse 67.5° oblique direction; if Dir = 9, the reverse 45° oblique direction; and if Dir = 10, the reverse 22.5° oblique direction.
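The argmin step above can be sketched as follows. The direction labels mirror the ones in the text; the gradient values are invented for illustration.

```python
# Sketch of the [MinGrad, Dir] = min(Grad) step: collect the per-direction
# gradients in the order given in the text and take the direction whose
# gradient is smallest as the texture direction.

DIRECTIONS = ["N", "S", "E", "W", "AD", "A", "AU", "DD", "D", "DU"]

def texture_direction(grads):
    """Return (direction label, minimum gradient) for a list of gradients."""
    min_grad = min(grads)
    dir_index = grads.index(min_grad)  # 0-based analogue of Dir
    return DIRECTIONS[dir_index], min_grad

# Smallest gradient along E -> the texture runs horizontally.
grads = [120, 118, 40, 42, 95, 97, 99, 101, 96, 94]
name, min_grad = texture_direction(grads)  # ("E", 40)
```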
It should be noted that the specific numerical values mentioned above are only intended to illustrate the implementation of the present application in detail and should not be construed as limiting the present application. In other embodiments or examples, other values may be selected as appropriate, and no specific limitation is made here.
The computer-readable storage medium of the embodiments of the present application stores thereon a computer program that, when executed by a processor, implements the steps of the image processing method of any of the embodiments described above.
For example, in the case where the program is executed by a processor, the steps of the following image processing method are implemented:
011: when the color of a current pixel in an image to be processed is not a target color, determining a pixel variance total value of a local image taking the current pixel as a center;
013: when the total pixel variance value is larger than or equal to a preset threshold value and the texture direction of the local image is a first direction, determining the pixel value of the target color at the position of the current pixel according to a preset neural network;
015: respectively taking each pixel in the image to be processed as a current pixel and processing to obtain a first full-size image with a target color;
017: and determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
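The per-pixel branching implied by steps 011-017 (flat regions, first-direction textures, and other textures each handled differently) can be sketched as a dispatcher. The strategy names and the FIRST_DIRECTIONS set below are illustrative assumptions, not terms from the method itself.

```python
# Sketch of the per-pixel dispatch: which interpolation strategy estimates
# the target-colour value at a non-target-colour pixel.

FIRST_DIRECTIONS = {"A", "D"}  # 45° and reverse 45° oblique directions (assumed labels)

def choose_strategy(variance_total, threshold, texture_dir):
    """Pick how to estimate the target-colour value at the current pixel."""
    if variance_total < threshold:
        return "ratio"            # flat region: colour-ratio interpolation
    if texture_dir in FIRST_DIRECTIONS:
        return "neural_network"   # diagonal texture: preset neural network
    return "directional"          # other textures: interpolate along the texture

flat = choose_strategy(5.0, 10.0, "E")    # low variance -> "ratio"
diag = choose_strategy(50.0, 10.0, "A")   # textured, 45° -> "neural_network"
other = choose_strategy(50.0, 10.0, "E")  # textured, horizontal -> "directional"
```

Running this for every pixel yields the first full-size image of the target colour, from which the second full-size images are then derived.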
It will be appreciated that the computer program comprises computer program code. The computer program code may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like. The processor may be a central processing unit, or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," "some examples," or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification, provided that they do not contradict each other.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including in a substantially concurrent manner or in the reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art to which the present application pertains.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (15)

1. An image processing method, characterized in that the image processing method comprises:
when the color of a current pixel in an image to be processed is not a target color, determining a pixel variance total value of a local image taking the current pixel as a center;
when the total pixel variance value is larger than or equal to a preset threshold value and the texture direction of the local image is a first direction, determining the pixel value of the target color at the position of the current pixel according to a preset neural network;
respectively taking each pixel in the image to be processed as the current pixel and processing to obtain a first full-size image with the target color;
and determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
2. The image processing method according to claim 1, wherein the local image comprises a plurality of color channels corresponding to a first color space and a second color space, and wherein the determining a pixel variance total value of a local image taking the current pixel as a center comprises:
determining a pixel variance value of each color channel in the local image according to the pixel mean value of each color channel in the local image and the pixel value of each color channel;
and calculating a sum of the pixel variance values of all the color channels as the pixel variance total value of the local image.
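The variance computation recited in claim 2 (per-channel variance from each channel's mean, then a sum over channels) can be sketched in a few lines; the channel names and values below are invented for illustration.

```python
# Sketch of claim 2: pixel variance per colour channel, computed from that
# channel's mean, then summed over all channels to give the total value.

def channel_variance(values):
    """Population variance of one channel's pixel values in the local image."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def pixel_variance_total(channels):
    """channels: mapping of channel name -> list of that channel's pixel values."""
    return sum(channel_variance(v) for v in channels.values())

# Flat G and B channels contribute 0; R has variance 1 -> total is 1.
channels = {"G": [10, 10, 10, 10], "R": [1, 3, 1, 3], "B": [5, 5, 5, 5]}
total = pixel_variance_total(channels)  # 1.0
```

A small total marks the local image as flat; a large total marks it as textured.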
3. The image processing method according to claim 1, wherein before said individually taking each pixel in the image to be processed as the current pixel and processing it to obtain a first full-size image with the target color, the image processing method further comprises:
and when the total pixel variance value is smaller than the preset threshold value, determining the pixel value of the target color at the position of the current pixel according to all pixels with the colors of the target color and all pixels with the colors identical to the color of the current pixel in the local image.
4. The image processing method according to claim 3, wherein the determining the pixel value of the target color at the position of the current pixel according to all pixels in the local image whose color is the target color and all pixels whose color is the same as the color of the current pixel comprises:
determining a first color ratio constant according to a first pixel mean value of all pixels with the color of the target color in the local image and a second pixel mean value of all pixels with the color same as that of the current pixel in the local image;
and determining the pixel value of the target color at the position of the current pixel according to the first color ratio constant and the pixel value of the current pixel.
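The ratio-based interpolation of claim 4 can be sketched directly; the means and pixel value below are illustrative numbers, and dividing the two means is an assumed reading of "color ratio constant".

```python
# Sketch of claim 4: a first colour-ratio constant from the local mean of
# target-colour pixels and the local mean of pixels sharing the current
# pixel's colour, applied to the current pixel's own value.

def ratio_interpolate(target_mean, same_color_mean, current_value):
    """Estimate the target-colour value at the current pixel's position."""
    k = target_mean / same_color_mean  # first colour-ratio constant (assumed form)
    return k * current_value

# If green averages 100 where red averages 50 locally, a red pixel reading
# 60 is estimated to carry a green value of 120.
estimate = ratio_interpolate(100.0, 50.0, 60.0)  # 120.0
```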
5. The image processing method according to claim 1, wherein the determining the pixel value of the target color at the position of the current pixel according to a preset neural network comprises:
determining a third full-size image with the target color according to the image to be processed and the preset neural network;
and determining the pixel value of the target color at the position of the current pixel according to the third full-size image.
6. The image processing method according to claim 5, wherein the image to be processed comprises a plurality of preset windows, each of the preset windows comprising a plurality of pixel blocks, and wherein the determining a third full-size image having the target color according to the image to be processed and the preset neural network comprises:
merging pixels at the same position in each pixel block in the preset window to serve as input sub-images;
determining the third full-size image with the target color according to the input subgraph and the preset neural network.
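The merging step of claim 6 (gathering the pixel at the same offset inside every pixel block of a window into one input sub-image) resembles a space-to-depth rearrangement. A sketch follows; the 2x2 block size is an assumption.

```python
# Sketch of claim 6: collect the pixel at each offset (dr, dc) of every
# block in the window into its own sub-image, yielding block*block
# sub-images that can be fed to the preset neural network.

def to_subimages(window, block=2):
    rows, cols = len(window), len(window[0])
    subs = {}
    for dr in range(block):
        for dc in range(block):
            subs[(dr, dc)] = [
                [window[r][c] for c in range(dc, cols, block)]
                for r in range(dr, rows, block)
            ]
    return subs

# Every block of this window is [[1, 2], [3, 4]], so each sub-image is
# uniform: all "same position" pixels end up merged together.
window = [
    [1, 2, 1, 2],
    [3, 4, 3, 4],
    [1, 2, 1, 2],
    [3, 4, 3, 4],
]
subs = to_subimages(window)
```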
7. The image processing method of claim 6, wherein the preset neural network comprises a convolutional neural network comprising a first network structure, a second network structure, a third network structure, a fourth network structure, a fifth network structure, a sixth network structure, and a branch network structure, wherein the first network structure, the third network structure, the fourth network structure, the sixth network structure, and the branch network structure each comprise a convolution kernel of size 3 x 3, and wherein the second network structure and the fifth network structure each comprise a convolution kernel of size 1 x 1.
8. The image processing method according to claim 1, wherein before said individually taking each pixel in the image to be processed as the current pixel and processing it to obtain a first full-size image with the target color, the image processing method further comprises:
and when the total pixel variance value is greater than or equal to the preset threshold and the texture direction of the local image is a second direction, determining the pixel value of the target color at the position of the current pixel according to pixels in the second direction whose color is the target color.
9. The image processing method according to claim 8, wherein the determining the pixel value of the target color at the position of the current pixel according to pixels in the second direction whose color is the target color comprises:
determining two pixels of which the color closest to the current pixel in the second direction is the target color by taking the current pixel as a center;
and taking the average value of the pixel values of the two pixels as the pixel value of the target color at the position of the current pixel.
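The two-step rule of claim 9 can be sketched on a 1-D slice of pixels taken along the texture direction; the values and flags below are invented for illustration.

```python
# Sketch of claim 9: along the texture (second) direction, find the
# target-colour pixel nearest the current pixel on each side and average
# the two.

def average_nearest_two(line, center, is_target):
    """line: pixel values along the texture direction; is_target: flags
    marking target-colour positions; returns the mean of the nearest
    target-colour pixel on either side of `center`."""
    left = next(i for i in range(center - 1, -1, -1) if is_target[i])
    right = next(i for i in range(center + 1, len(line)) if is_target[i])
    return (line[left] + line[right]) / 2

line = [80, 10, 90, 20, 100]            # values along the direction
flags = [True, False, True, False, True]  # which positions are target-colour
value = average_nearest_two(line, 3, flags)  # (90 + 100) / 2 = 95.0
```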
10. The image processing method according to claim 1, wherein the target color comprises green, the preset colors comprise red, blue, cyan, magenta, and yellow, and the determining a second full-size image having a preset color from the first full-size image and the image to be processed comprises:
taking the first full-size image as a guide image, and interpolating red pixel values of positions of other pixels except red pixels in the image to be processed in a filtering mode to obtain a red second full-size image;
taking the first full-size image as a guide image, and interpolating blue pixel values of positions of other pixels except for a blue pixel in the image to be processed in a filtering mode to obtain a blue second full-size image;
taking the first full-size image as a guide image, and interpolating cyan pixel values of positions of other pixels except cyan pixels in the image to be processed in a filtering mode to obtain a cyan second full-size image;
taking the first full-size image as a guide image, and interpolating magenta pixel values of positions of other pixels except for magenta pixels in the image to be processed in a filtering manner to obtain a magenta second full-size image;
and taking the first full-size image as a guide image, and interpolating yellow pixel values of positions of other pixels except the yellow pixel in the image to be processed in a filtering mode to obtain a yellow second full-size image.
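The five parallel clauses of claim 10 share one structure: a guided interpolation per preset colour, always guided by the first (target-colour) full-size image. The sketch below captures only that structure; `guided_interpolate` is a hypothetical stand-in for the actual filtering step.

```python
# Sketch of claim 10's structure: one guided interpolation pass per preset
# colour, each guided by the first full-size image.

PRESET_COLORS = ["red", "blue", "cyan", "magenta", "yellow"]

def demosaic_preset_colors(guide, raw, guided_interpolate):
    """Return one second full-size image per preset colour."""
    return {c: guided_interpolate(guide, raw, c) for c in PRESET_COLORS}

# Toy stand-in filter: just records which guide and colour plane were used.
result = demosaic_preset_colors("G_full", "raw", lambda g, r, c: (g, c))
```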
11. The image processing method according to claim 1, wherein after the determining a pixel variance total value of a local image taking the current pixel as a center when the color of the current pixel in the image to be processed is not the target color, the image processing method further comprises:
determining a pixel gradient value of the local image in a preset direction;
and taking the preset direction with the minimum pixel gradient value as the texture direction of the local image.
12. The image processing method according to any one of claims 1 to 11, wherein the image to be processed includes a plurality of pixel blocks, each of the pixel blocks includes a plurality of pixel units, each of the pixel units includes two first pixels corresponding to one color in a first color space and two second pixels corresponding to one color in a second color space, the two first pixels are arranged in a first diagonal direction, the two second pixels are arranged in a second diagonal direction, and the target color is one of the first color space and the second color space.
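The pixel-unit geometry of claim 12 (two first-space pixels on one diagonal, two second-space pixels on the other) can be sketched as a 2x2 cell; the channel labels "G" and "W" are illustrative, not taken from the claim.

```python
# Sketch of the claim-12 pixel unit: a 2x2 cell whose first-space pixels
# sit on the main diagonal and whose second-space pixels sit on the
# anti-diagonal.

def make_pixel_unit(first="G", second="W"):
    return [
        [first, second],
        [second, first],
    ]

unit = make_pixel_unit()
# unit[0][0] and unit[1][1] are the two first-space pixels (first diagonal);
# unit[0][1] and unit[1][0] are the two second-space pixels (second diagonal).
```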
13. An image processing apparatus characterized by comprising:
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining the pixel variance total value of a local image taking a current pixel as a center when the color of the current pixel in an image to be processed is not a target color;
a second determining module, configured to determine, according to a preset neural network, a pixel value of the target color at a position where the current pixel is located when the total pixel variance value is greater than or equal to a preset threshold and a texture direction of the local image is a first direction;
the processing module is used for respectively taking each pixel in the image to be processed as the current pixel and processing the current pixel to obtain a first full-size image with the target color;
and the third determining module is used for determining a second full-size image with a preset color according to the first full-size image and the image to be processed, wherein the preset color is different from the target color.
14. An electronic device, characterized in that the electronic device comprises one or more processors and a memory, the memory storing a computer program which, when executed by the processors, implements the steps of the image processing method of any one of claims 1-12.
15. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 12.
CN202111084966.4A 2021-09-16 2021-09-16 Image processing method, image processing apparatus, electronic device, and storage medium Active CN113781350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111084966.4A CN113781350B (en) 2021-09-16 2021-09-16 Image processing method, image processing apparatus, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN113781350A true CN113781350A (en) 2021-12-10
CN113781350B CN113781350B (en) 2023-11-24

Family

ID=78844449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111084966.4A Active CN113781350B (en) 2021-09-16 2021-09-16 Image processing method, image processing apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113781350B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201322450D0 (en) * 2013-12-18 2014-02-05 Imagination Tech Ltd Defect pixel fixing
WO2015093253A1 (en) * 2013-12-20 2015-06-25 株式会社メガチップス Pixel interpolation apparatus, image capture apparatus, program, and integrated circuit
CN106530252A (en) * 2016-11-08 2017-03-22 北京小米移动软件有限公司 Image processing method and device
CN109636753A (en) * 2018-12-11 2019-04-16 珠海奔图电子有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN110189339A (en) * 2019-06-03 2019-08-30 重庆大学 The active profile of depth map auxiliary scratches drawing method and system
CN112053417A (en) * 2019-06-06 2020-12-08 西安诺瓦星云科技股份有限公司 Image processing method, apparatus and system, and computer-readable storage medium
CN112598758A (en) * 2020-10-22 2021-04-02 努比亚技术有限公司 Image processing method, mobile terminal and computer storage medium
CN112801882A (en) * 2019-11-14 2021-05-14 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112843697A (en) * 2021-02-02 2021-05-28 网易(杭州)网络有限公司 Image processing method and device, storage medium and computer equipment
CN112999654A (en) * 2021-03-04 2021-06-22 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113362261A (en) * 2020-03-04 2021-09-07 杭州海康威视数字技术股份有限公司 Image fusion method

Non-Patent Citations (2)

Title
SUN Guodong et al., "Fabric defect image segmentation based on gray histogram back-projection", Manufacturing Automation *
YANG Lei et al., "Research on a Camshift target tracking algorithm combined with texture features", Electronic Design Engineering *

Also Published As

Publication number Publication date
CN113781350B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
KR101663871B1 (en) Method and associated apparatus for correcting color artifact of image
US8248496B2 (en) Image processing apparatus, image processing method, and image sensor
CN107623844B (en) Determination of color values of pixels at intermediate positions
CN113170061B (en) Image sensor, imaging device, electronic apparatus, image processing system, and signal processing method
JP2008070853A (en) Compensation method of image array data
Chen et al. Effective demosaicking algorithm based on edge property for color filter arrays
CN110557584A (en) image processing method and device, and computer readable storage medium
JP2000134634A (en) Image converting method
US8798398B2 (en) Image processing apparatus
CN108307162B (en) Efficient and flexible color processor
US20190355105A1 (en) Method and device for blind correction of lateral chromatic aberration in color images
US10783608B2 (en) Method for processing signals from a matrix for taking colour images, and corresponding sensor
JP6415094B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
CN109672810B (en) Image processing apparatus, image processing method, and storage medium
CN113781350B (en) Image processing method, image processing apparatus, electronic device, and storage medium
JP2020091910A (en) Image processing device, image processing method and program
CN103259960A (en) Data interpolation method and device and image output method and device
JP5484015B2 (en) Imaging apparatus, imaging method, and program
CN112237002A (en) Image processing method and apparatus
US8068145B1 (en) Method, systems, and computer program product for demosaicing images
CN115187487A (en) Image processing method and device, electronic device and storage medium
CN113781349A (en) Image processing method, image processing apparatus, electronic device, and storage medium
JP2009239772A (en) Imaging device, image processing device, image processing method, and program
JP2014110507A (en) Image processing device and image processing method
Susan et al. Edge strength based fuzzification of colour demosaicking algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant