CN113888560A - Method, apparatus, device and storage medium for processing image - Google Patents
- Publication number
- CN113888560A (Application No. CN202111150932.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- determining
- color
- channel
- channel gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11—Region-based segmentation
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70—Denoising; Smoothing
- G06T7/90—Determination of colour characteristics
- G06T2207/10016—Video; Image sequence
- G06T2207/10024—Color image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20221—Image fusion; Image merging
Abstract
The present disclosure provides a method, an apparatus, a device, and a storage medium for processing an image, relating to the field of artificial intelligence, in particular to the technical fields of computer vision and deep learning, and applicable to image processing scenarios. The specific implementation scheme is as follows: acquire a single-channel grayscale image; acquire at least one reference image of the single-channel grayscale image; perform instance segmentation on the single-channel grayscale image and the at least one reference image respectively, and determine, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image; and determine a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set. This implementation colors the image using the instance segmentation results of the single-channel grayscale image and the reference image, improving the coloring effect.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of computer vision and deep learning, and more particularly to a method, an apparatus, a device, and a storage medium for processing an image, which can be used in image processing scenarios.
Background
The restoration of achromatic video includes colorizing black-and-white or single-channel grayscale video. Colorization enriches the colors of such video, improves its expressiveness, and enhances the viewing experience of the audience. The coloring effect of current coloring schemes still needs improvement.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and storage medium for processing an image.
According to a first aspect, there is provided a method for processing an image, comprising: acquiring a single-channel grayscale image; acquiring at least one reference image of the single-channel grayscale image; performing instance segmentation on the single-channel grayscale image and the at least one reference image respectively, and determining, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image; and determining a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set.
According to a second aspect, there is provided an apparatus for processing an image, comprising: a first acquisition unit configured to acquire a single-channel grayscale image; a second acquisition unit configured to acquire at least one reference image of the single-channel grayscale image; an instance segmentation unit configured to perform instance segmentation on the single-channel grayscale image and the at least one reference image respectively, and to determine, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image; and an image coloring unit configured to determine a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described in the first aspect.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the first aspect.
According to the disclosed technology, coloring can be performed using the instance segmentation results of the single-channel grayscale image and the reference image, improving the coloring effect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for processing an image according to the present disclosure;
FIG. 3 is a schematic illustration of an application scenario of a method for processing an image according to the present disclosure;
FIG. 4 is a flow diagram of another embodiment of a method for processing an image according to the present disclosure;
FIG. 5 is a schematic block diagram of one embodiment of an apparatus for processing images according to the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a method for processing an image of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the method for processing images or the apparatus for processing images of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as an image processing application, etc., may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, e-book readers, car computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server that provides various services, such as a backend server that colors a black-and-white image or a single-channel grayscale image provided by the terminal devices 101, 102, 103. The background server can color the black-and-white image or the single-channel gray image by using a pre-trained model, and feed back the colored image obtained after coloring to the terminal devices 101, 102, and 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for processing an image provided by the embodiment of the present disclosure may be executed by the terminal devices 101, 102, and 103, or may be executed by the server 105. Accordingly, the apparatus for processing images may be provided in the terminal devices 101, 102, 103, or in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing an image in accordance with the present disclosure is shown. The method for processing an image of this embodiment comprises the following steps:
Step 201, acquiring a single-channel grayscale image.

In this embodiment, the executing body of the method for processing an image may acquire a single-channel grayscale image in various ways, for example by local acquisition or network acquisition. The single-channel grayscale image may be a video frame in a black-and-white video, a single black-and-white image, or a video frame or grayscale image in a grayscale video.
Step 202, acquiring at least one reference image of the single-channel grayscale image.

In this embodiment, the executing body may further acquire at least one reference image of the single-channel grayscale image. Here, the reference image may be a color image that includes an object appearing in the single-channel grayscale image. For example, if a dolphin appears in the single-channel grayscale image, a dolphin may appear in the reference image. It will be appreciated that two or more reference images may be used if a single reference image covers only some of the objects in the single-channel grayscale image. The executing body may acquire the reference image from a preset path, for example from the folder in which the single-channel grayscale image is located, or retrieve it from the network via image retrieval. In image retrieval, the executing body may use the color image with the highest degree of matching with the single-channel grayscale image as the reference image.
Step 203, performing instance segmentation on the single-channel grayscale image and the at least one reference image respectively, and determining, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image.

After acquiring the single-channel grayscale image and each reference image, the executing body may perform instance segmentation on the single-channel grayscale image and the at least one reference image respectively, for example using a pre-trained instance segmentation network. The instance segmentation yields an instance segmentation result for the single-channel grayscale image and one for each reference image.

The executing body may further determine a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image based on the instance segmentation results. Specifically, for each segmented instance, the executing body may set the value of pixels outside the instance contour to 0 and the value of pixels inside the contour to 1, thereby obtaining an instance mask image for each instance. In this embodiment, the instance mask images corresponding to the single-channel grayscale image form the first instance mask image set, and the instance mask images corresponding to a reference image form a second instance mask image set. It will be appreciated that if there are multiple reference images, there may be multiple second instance mask image sets.
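For illustration only, a minimal sketch of this mask construction is given below; it assumes the instance segmentation network returns one boolean mask per detected instance (the `instance_masks` input and the array shapes are assumptions, not part of the disclosure):

```python
import numpy as np

def build_mask_set(instance_masks):
    """Build 0/1 instance mask images from boolean segmentation masks.

    instance_masks: iterable of (H, W) boolean arrays, one per instance.
    Pixels inside an instance contour become 1; all others become 0,
    as described above.
    """
    return [m.astype(np.float32) for m in instance_masks]

# first_set  = build_mask_set(masks_of_grayscale_image)   # first instance mask image set
# second_set = build_mask_set(masks_of_reference_image)   # one such set per reference image
```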
Step 204, determining a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set.
After obtaining the first and second instance mask images, the executing body may color the single-channel grayscale image in combination with each reference image and determine the corresponding color image. Specifically, the executing body may convolve the single-channel grayscale image with each first instance mask image, convolve each reference image with its corresponding second instance mask images, and compute the similarity between the convolved image of the single-channel grayscale image and the convolved image of each reference image. The similarity includes the similarity of each instance in the single-channel grayscale image to the instances in the reference images. The executing body then determines the color of each instance in the single-channel grayscale image according to these similarities: for each instance, it takes the color of the reference instance with which the instance has the greatest similarity. Coloring the single-channel grayscale image with these colors yields the color image corresponding to the single-channel grayscale image.
With continued reference to fig. 3, a schematic diagram of one application scenario of the method for processing an image according to the present disclosure is shown. In the application scenario of fig. 3, the server 301 acquires a single-channel grayscale image together with a reference image of that image. After performing instance segmentation on the single-channel grayscale image and the reference image, it obtains their respective instance segmentation results, and further obtains a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the reference image. Finally, based on the single-channel grayscale image, the reference image, the first instance mask image set, and the second instance mask image set, it obtains the color of each pixel in the single-channel grayscale image and colors the image to obtain the corresponding color image.
The method for processing an image provided by the above embodiment of the present disclosure can perform coloring using the instance segmentation results of the single-channel grayscale image and the reference image, thereby improving the coloring effect.
With continued reference to FIG. 4, a flow 400 of another embodiment of a method for processing an image according to the present disclosure is shown. As shown in fig. 4, the method of the present embodiment may include the following steps:
Step 402, performing image retrieval in a preset color image library using the single-channel grayscale image, and determining at least one reference image according to the retrieval results.

In this embodiment, after obtaining the single-channel grayscale image, the executing body may perform image retrieval in a preset color image library using that image, determine the similarity between the single-channel grayscale image and each color image in the library, and then determine at least one reference image according to the similarities. Specifically, the executing body may use the color image with the highest similarity as the reference image.
In some optional implementations of this embodiment, after determining the color image with the highest similarity, the executing body may check the determined color image, that is, judge whether it contains the objects in the single-channel grayscale image. If so, the determined color image is used as the reference image. If not, the executing body may continue the image retrieval to determine other color images. It will be appreciated that these other color images contain objects that appear in the single-channel grayscale image but were not included in the previously determined color image, and that the number of instances across all reference images may be greater than the number of instances in the single-channel grayscale image.
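A minimal sketch of such retrieval is given below; it assumes the library images are represented by precomputed feature vectors and that cosine similarity is the matching measure (both are assumptions, since the disclosure does not fix the feature representation or the similarity measure):

```python
import numpy as np

def retrieve_reference(query_feat, library_feats):
    """Return library indices sorted from most to least similar.

    query_feat: (d,) feature vector of the single-channel grayscale image.
    library_feats: (n, d) feature vectors of the preset color image library.
    """
    q = query_feat / np.linalg.norm(query_feat)
    lib = library_feats / np.linalg.norm(library_feats, axis=1, keepdims=True)
    sims = lib @ q                      # cosine similarity with each color image
    return np.argsort(-sims)            # best match first

# The top-ranked color image would then be checked against the objects in the
# grayscale image; if some objects are missing, retrieval continues down the
# ranking until all objects are covered by the selected reference images.
```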
Step 403, determining, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image.

In this embodiment, for each instance segmented in the single-channel grayscale image, the executing body may determine a corresponding first instance mask image, thereby obtaining the first instance mask image set. It will be appreciated that the number of first instance mask images equals the number of instances in the single-channel grayscale image. Similarly, for each reference image, the executing body may determine a second instance mask image for each instance segmented in that reference image, thereby obtaining a second instance mask image set. It will be appreciated that the number of second instance mask image sets equals the number of reference images.
Step 404, determining a first fused image according to the single-channel grayscale image and the first instance mask image set.

In this embodiment, the executing body may perform at least one image processing operation on the single-channel grayscale image and each first instance mask image in the first instance mask image set to obtain a plurality of images, and then fuse these images to obtain a first fused image.
In some optional implementations of this embodiment, step 404 may be specifically implemented by the following steps, not shown in fig. 4: multiply the single-channel grayscale image pixel-wise by each first instance mask image to determine the first processed images; and fuse the first processed images to obtain the first fused image.

In this implementation, the executing body may multiply the single-channel grayscale image pixel-wise by each first instance mask image to obtain a plurality of first processed images, and then fuse them into the first fused image. The fusion may be performed by concatenation or by adding the first processed images pixel-wise.
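For illustration, a sketch of this step is given below, assuming concatenation as the fusion operation (pixel-wise addition, mentioned above, would be the alternative):

```python
import numpy as np

def first_fused_image(gray, mask_set):
    """Multiply the grayscale image pixel-wise by each instance mask,
    then fuse the resulting processed images by concatenation.

    gray: (H, W) single-channel grayscale image.
    mask_set: list of (H, W) 0/1 instance mask images.
    Returns an (H, W, K) fused image, where K is the number of instances.
    """
    processed = [gray * m for m in mask_set]   # the first processed images
    return np.stack(processed, axis=-1)        # fusion by concatenation
```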
Step 405, determining a second fused image according to the at least one reference image and the second instance mask image set.

Similarly, the executing body may apply the same processing to each reference image, that is, perform at least one image processing operation on each reference image and each second instance mask image in the corresponding second instance mask image set, and then fuse the resulting images to obtain a second fused image.
In some optional implementations of this embodiment, step 405 may be specifically implemented by the following steps, not shown in fig. 4: for each reference image, multiply the reference image pixel-wise by each corresponding second instance mask image to determine the second processed images; and fuse the second processed images to obtain the second fused image.

In this implementation, the executing body may multiply each reference image pixel-wise by each of its corresponding second instance mask images to determine the second processed images, and then fuse them into the second fused image. The fusion operation may be the same as that used for the first processed images.
Step 406, determining a color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and a pre-trained coloring model.
After determining the first fused image and the second fused image, the executing body may input them into a pre-trained coloring model, whose output is the color image corresponding to the single-channel grayscale image. The coloring model may be, for example, a convolutional neural network.
In some optional implementations of this embodiment, the coloring model may include a first sub-model and a second sub-model, and step 406 may be implemented by the following steps:
Step 4061, determining a rough color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and the first sub-model.

In this implementation, the executing body may input the first fused image and the second fused image into the first sub-model, whose output is a rough color image. Here, a rough color image may be understood as an image in which the instance portions have colors while the remaining portions have no determined color; or as an image in which some pixels have colors and some do not; or as an image in which the colors of some regions are uneven, resulting in poor image quality. In some specific applications, the first sub-model may be a Non-Local network (a self-attention model proposed at CVPR 2018).
Step 4062, determining the color image corresponding to the single-channel grayscale image according to the rough color image and the second sub-model.

The executing body may further input the obtained rough color image into the second sub-model, whose output is the color image optimized from the rough color image. The second sub-model may be an encoding-decoding network comprising an encoder that performs convolution and pooling operations on the output of the first sub-model and a decoder that performs deconvolution and upsampling operations on the output of the encoder. The second sub-model can remove noise from the rough color image, thereby optimizing its color rendering.
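A sketch of what such an encoding-decoding network could look like is given below; the layer widths, depth, and channel count are illustrative assumptions, not parameters taken from the disclosure:

```python
import torch
import torch.nn as nn

class RefinementSubModel(nn.Module):
    """Illustrative second sub-model: an encoder (convolution + pooling)
    followed by a decoder (deconvolution + upsampling) that denoises the
    rough color image produced by the first sub-model."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                       # pooling
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2),  # deconvolution / upsampling
            nn.ReLU(),
            nn.Conv2d(64, channels, kernel_size=3, padding=1),     # refined color image
        )

    def forward(self, rough_color_image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(rough_color_image))
```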
In some optional implementations of this embodiment, the first sub-model in step 4061 may process the first fused image and the second fused image as follows: extract the features of the first fused image and the second fused image respectively to obtain a first feature image and a second feature image; determine the matrices corresponding to the first feature image and the second feature image respectively to obtain a first matrix and a second matrix; determine a similarity matrix from the first matrix and the second matrix; determine the color of each pixel of the single-channel grayscale image according to the similarity matrix and the at least one reference image; and determine the rough color image from the colors of the pixels of the single-channel grayscale image.
In this implementation, the first sub-model may include a feature extraction module configured to extract the features of the first fused image and the second fused image to obtain a first feature image and a second feature image. The dimensions of the first feature image may be (h11, w11, c) and the dimensions of the second feature image may be (h22, w22, c).
Then, the executing body may determine the matrices corresponding to the first and second feature images, obtaining a first matrix and a second matrix. Specifically, it may flatten the first feature image into a first matrix of dimensions ((h11 × w11), c), and flatten the second feature image into a second matrix of dimensions ((h22 × w22), c).
The executing body may then determine a similarity matrix from the first matrix and the second matrix. Here, it may transpose the second matrix, obtaining a transposed matrix of dimensions (c, (h22 × w22)), and multiply the first matrix by this transposed matrix, obtaining a similarity matrix of dimensions ((h11 × w11), (h22 × w22)). Each element of the similarity matrix represents the similarity between a pixel of the single-channel grayscale image and a pixel of the reference image.
The executing body may further determine the color of each pixel in the single-channel grayscale image according to the similarity matrix and each reference image. Specifically, it may sample colors from each reference image according to the similarity matrix. It will be appreciated that if an instance in the single-channel grayscale image has a greater similarity to an instance in the reference image, more colors are sampled from that reference instance. That is, instances in the single-channel grayscale image take colors close to those of the most similar instances in the reference image.
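Putting the matrix operations of this implementation together, an illustrative sketch follows; the softmax-weighted sampling at the end is one possible realization of "sampling more colors from more similar instances", and the two-channel `ref_colors` input (e.g. ab chrominance channels) is likewise an assumption:

```python
import numpy as np

def colorize_by_similarity(feat1, feat2, ref_colors):
    """Compute the similarity matrix and sample colors from the reference.

    feat1: (h11, w11, c) features of the first fused image.
    feat2: (h22, w22, c) features of the second fused image.
    ref_colors: (h22, w22, 2) color channels of the reference image.
    """
    h11, w11, c = feat1.shape
    h22, w22, _ = feat2.shape
    m1 = feat1.reshape(h11 * w11, c)           # first matrix ((h11*w11), c)
    m2 = feat2.reshape(h22 * w22, c)           # second matrix ((h22*w22), c)
    sim = m1 @ m2.T                            # similarity matrix ((h11*w11), (h22*w22))

    # softmax over reference positions, then weighted color sampling
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    colors = w @ ref_colors.reshape(h22 * w22, -1)
    return colors.reshape(h11, w11, -1)        # color of each grayscale pixel
```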
In some optional implementations of this embodiment, step 4062 may be specifically implemented as follows: denoise the rough color image and determine the color image corresponding to the single-channel grayscale image.
In this implementation, the executing body may denoise the rough color image using the second sub-model and determine the color image corresponding to the single-channel grayscale image. The finally obtained color image is thus free of noise and of better quality.
The method for processing an image provided by the above embodiment of the present disclosure can color the instances in the single-channel grayscale image using the instances in the reference image, achieving a better coloring effect.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for processing an image, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for processing an image of the present embodiment includes: a first acquisition unit 501, a second acquisition unit 502, an instance segmentation unit 503, and an image coloring unit 504.
A first acquisition unit 501 configured to acquire a single-channel grayscale image.
A second acquisition unit 502 configured to acquire at least one reference image of a single-channel grayscale image.
An instance segmentation unit 503 configured to perform instance segmentation on the single-channel grayscale image and the at least one reference image respectively, and to determine, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image.
An image coloring unit 504 configured to determine a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set.
In some optional implementations of this embodiment, the second acquisition unit 502 may be further configured to: perform image retrieval in a preset color image library using the single-channel grayscale image, determine the similarity between the single-channel grayscale image and each color image in the color image library, and determine at least one reference image according to the similarity.

In some optional implementations of this embodiment, the instance segmentation unit 503 may be further configured to: determine a first instance mask image corresponding to each instance according to the instance segmentation result of the single-channel grayscale image to obtain the first instance mask image set; and determine a second instance mask image corresponding to each instance according to the instance segmentation result of each reference image to obtain a second instance mask image set.

In some optional implementations of this embodiment, the image coloring unit 504 may be further configured to: determine a first fused image according to the single-channel grayscale image and the first instance mask image set; determine a second fused image according to the at least one reference image and the second instance mask image set; and determine a color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and a pre-trained coloring model.

In some optional implementations of this embodiment, the image coloring unit 504 may be further configured to: multiply the single-channel grayscale image pixel-wise by each first instance mask image to determine the first processed images; and fuse the first processed images to obtain the first fused image.

In some optional implementations of this embodiment, the image coloring unit 504 may be further configured to: for each reference image, multiply the reference image pixel-wise by each corresponding second instance mask image to determine the second processed images; and fuse the second processed images to obtain the second fused image.

In some optional implementations of this embodiment, the coloring model includes a first sub-model and a second sub-model, and the image coloring unit 504 may be further configured to: determine a rough color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and the first sub-model; and determine the color image corresponding to the single-channel grayscale image according to the rough color image and the second sub-model.

In some optional implementations of this embodiment, the image coloring unit 504 may be further configured to: extract features of the first fused image and the second fused image respectively to obtain a first feature image and a second feature image; determine matrices corresponding to the first feature image and the second feature image respectively to obtain a first matrix and a second matrix; determine a similarity matrix according to the first matrix and the second matrix; determine the color of each pixel of the single-channel grayscale image according to the similarity matrix and the at least one reference image; and determine the rough color image according to the color of each pixel of the single-channel grayscale image.

In some optional implementations of this embodiment, the image coloring unit 504 may be further configured to: denoise the rough color image and determine the color image corresponding to the single-channel grayscale image.
It should be understood that the units 501 to 504 recited in the apparatus 500 for processing an image correspond to respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for processing an image are equally applicable to the apparatus 500 and the units included therein and will not be described in detail here.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of the users involved comply with the relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device 600 that performs a method for processing an image according to an embodiment of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a processor 601 that may perform various suitable actions and processes in accordance with a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a memory 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 can also be stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a memory 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages, and may be packaged as a computer program product. The program code or computer program products may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor 601, causes the functions/acts specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability of traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Claims (21)
1. A method for processing an image, comprising:
acquiring a single-channel grayscale image;

acquiring at least one reference image of the single-channel grayscale image;

respectively performing instance segmentation on the single-channel grayscale image and the at least one reference image, and determining, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image;

determining a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set.
2. The method of claim 1, wherein said acquiring at least one reference image of said single-channel grayscale image comprises:
performing image retrieval in a preset color image library using the single-channel grayscale image, and determining the similarity between the single-channel grayscale image and each color image in the color image library;
and determining the at least one reference image according to the similarity.
3. The method of claim 1, wherein the determining a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image according to the instance segmentation results comprises:

determining a first instance mask image corresponding to each instance according to the instance segmentation result of the single-channel grayscale image to obtain the first instance mask image set;

and determining a second instance mask image corresponding to each instance according to the instance segmentation result of each reference image to obtain a second instance mask image set.

4. The method of claim 1, wherein the determining a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set comprises:

determining a first fused image according to the single-channel grayscale image and the first instance mask image set;

determining a second fused image according to the at least one reference image and the second instance mask image set;

and determining a color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and a pre-trained coloring model.
5. The method of claim 4, wherein the determining a first fused image according to the single-channel grayscale image and the first instance mask image set comprises:

multiplying the single-channel grayscale image pixel-wise by each first instance mask image to determine first processed images;

and fusing the first processed images to obtain the first fused image.

6. The method of claim 4, wherein the determining a second fused image according to the at least one reference image and the second instance mask image set comprises:

for each reference image, multiplying the reference image pixel-wise by each corresponding second instance mask image to determine second processed images;

and fusing the second processed images to obtain the second fused image.
7. The method of claim 4, wherein the coloring model comprises a first sub-model and a second sub-model; and

the determining a color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and the pre-trained coloring model comprises:

determining a rough color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and the first sub-model;

and determining the color image corresponding to the single-channel grayscale image according to the rough color image and the second sub-model.

8. The method of claim 7, wherein the determining the rough color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and the first sub-model comprises:

respectively extracting features of the first fused image and the second fused image to obtain a first feature image and a second feature image;

respectively determining matrices corresponding to the first feature image and the second feature image to obtain a first matrix and a second matrix;

determining a similarity matrix according to the first matrix and the second matrix;

determining the color of each pixel of the single-channel grayscale image according to the similarity matrix and the at least one reference image;

and determining the rough color image according to the color of each pixel of the single-channel grayscale image.

9. The method of claim 7, wherein the determining the color image corresponding to the single-channel grayscale image according to the rough color image and the second sub-model comprises:

denoising the rough color image, and determining the color image corresponding to the single-channel grayscale image.
10. An apparatus for processing an image, comprising:
a first acquisition unit configured to acquire a single-channel grayscale image;
a second acquisition unit configured to acquire at least one reference image of the single-channel grayscale image;
an instance segmentation unit configured to perform instance segmentation on the single-channel grayscale image and the at least one reference image respectively, and to determine, according to the instance segmentation results, a first instance mask image set corresponding to the single-channel grayscale image and a second instance mask image set corresponding to the at least one reference image;
an image coloring unit configured to determine a color image corresponding to the single-channel grayscale image based on the single-channel grayscale image, the at least one reference image, the first instance mask image set, and the second instance mask image set.
11. The apparatus of claim 10, wherein the second acquisition unit is further configured to:

perform image retrieval in a preset color image library using the single-channel grayscale image, and determine the similarity between the single-channel grayscale image and each color image in the color image library;

and determine the at least one reference image according to the similarity.
12. The apparatus of claim 10, wherein the instance segmentation unit is further configured to:

determine a first instance mask image corresponding to each instance according to the instance segmentation result of the single-channel grayscale image to obtain the first instance mask image set;

and determine a second instance mask image corresponding to each instance according to the instance segmentation result of each reference image to obtain a second instance mask image set.
13. The apparatus of claim 10, wherein the image coloring unit is further configured to:

determine a first fused image according to the single-channel grayscale image and the first instance mask image set;

determine a second fused image according to the at least one reference image and the second instance mask image set;

and determine a color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and a pre-trained coloring model.

14. The apparatus of claim 13, wherein the image coloring unit is further configured to:

multiply the single-channel grayscale image pixel-wise by each first instance mask image to determine first processed images;

and fuse the first processed images to obtain the first fused image.

15. The apparatus of claim 13, wherein the image coloring unit is further configured to:

for each reference image, multiply the reference image pixel-wise by each corresponding second instance mask image to determine second processed images;

and fuse the second processed images to obtain the second fused image.
16. The apparatus of claim 13, wherein the coloring model comprises a first sub-model and a second sub-model; and

the image coloring unit is further configured to:

determine a rough color image corresponding to the single-channel grayscale image according to the first fused image, the second fused image, and the first sub-model;

and determine the color image corresponding to the single-channel grayscale image according to the rough color image and the second sub-model.
17. The apparatus of claim 16, wherein the image coloring unit is further configured to:

respectively extract features of the first fused image and the second fused image to obtain a first feature image and a second feature image;

respectively determine matrices corresponding to the first feature image and the second feature image to obtain a first matrix and a second matrix;

determine a similarity matrix according to the first matrix and the second matrix;

determine the color of each pixel of the single-channel grayscale image according to the similarity matrix and the at least one reference image;

and determine the rough color image according to the color of each pixel of the single-channel grayscale image.

18. The apparatus of claim 16, wherein the image coloring unit is further configured to:

denoise the rough color image and determine the color image corresponding to the single-channel grayscale image.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111150932.0A | 2021-09-29 | 2021-09-29 | Method, apparatus, device and storage medium for processing image
Publications (1)
Publication Number | Publication Date |
---|---|
CN113888560A | 2022-01-04
Family
ID=79007848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111150932.0A (Pending) | Method, apparatus, device and storage medium for processing image | 2021-09-29 | 2021-09-29
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113888560A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1473924A2 (en) * | 2003-04-30 | 2004-11-03 | Canon Kabushiki Kaisha | Image processing apparatus and method therefor |
CN110349165A (en) * | 2018-04-03 | 2019-10-18 | 北京京东尚科信息技术有限公司 | Image processing method, device, electronic equipment and computer-readable medium |
US20210201071A1 (en) * | 2018-06-26 | 2021-07-01 | Microsoft Technology Licensing, Llc | Image colorization based on reference information |
CN109147003A (en) * | 2018-08-01 | 2019-01-04 | 北京东方畅享科技有限公司 | Method, equipment and the storage medium painted to line manuscript base picture |
CN112884866A (en) * | 2021-01-08 | 2021-06-01 | 北京奇艺世纪科技有限公司 | Coloring method, device, equipment and storage medium for black and white video |
Non-Patent Citations (1)
Title |
---|
KONG Dehui; XIAO Xiaofang; XU Zhenhua; GUO Jingwei: "Grayscale image colorization algorithm based on pixel correlation", Journal of Beijing University of Technology, No. 05, 15 May 2009 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115131447A (en) * | 2022-01-14 | 2022-09-30 | 长城汽车股份有限公司 | Image coloring method, device, equipment and storage medium |
CN114511811A (en) * | 2022-01-28 | 2022-05-17 | 北京百度网讯科技有限公司 | Video processing method, video processing device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||