CN108921916B - Method, device and equipment for coloring multi-target area in picture and storage medium
- Publication number
- CN108921916B (application CN201810717335.3A)
- Authority
- CN
- China
- Prior art keywords
- picture
- colored
- coloring
- target
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
Abstract
The application discloses a method, a device, equipment and a storage medium for coloring multiple target areas in a picture. The method comprises the following steps: performing graying processing on a picture to be colored to obtain a grayscale image; selecting a plurality of color standard images for applying different color designs to the picture to be colored; coloring the grayscale image with each of the selected color standard images; identifying and segmenting the plurality of colored target areas; and fusing the segmented target areas to the corresponding positions of the picture to be colored. By coloring multiple target areas in the picture to be colored with existing color standard images whose colors are already coordinated, the method and device change the color information of those target areas, save a large amount of time, keep the picture colors from being monotonous, achieve diversity and flexibility of coloring, and meet different design requirements.
Description
Technical Field
The invention relates to the field of image processing, in particular to a coloring method, a coloring device, coloring equipment and a storage medium for multi-target areas in a picture.
Background
When designing posters, flyers, and the like, it is often necessary to design the colors of multiple objects in a single image. Coloring an object is not a simple matter of substituting a solid color: the colors of the object's various parts must be coordinated with one another, and manually designing the colors of each part costs a designer a great deal of time.
At present, there are techniques for coloring an entire grayscale image. For example, given a color image whose color information is described by a global histogram and a saturation value, the color image is converted to the Lab color model and resized to quarter resolution by bilinear interpolation, each pixel is encoded in a quantized ab space, and spatial averaging yields the global histogram of the color image; the saturation is computed by converting the color image to the HSV color space and spatially averaging the saturation channel; finally, the computed global histogram and saturation are merged into a coloring network, which applies the color information of the color image to the whole grayscale image. Such a method cannot freely determine the color of individual objects in the picture and is not flexible enough for designing the colors of the picture.
Therefore, how to design different colors for multiple targets of a picture is a technical problem that needs to be solved urgently by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a device and a storage medium for coloring multiple target regions in a picture, which can utilize multiple existing color standard pictures with coordinated colors to design different colors for multiple targets of the picture. The specific scheme is as follows:
a coloring method for multi-target areas in a picture comprises the following steps:
carrying out graying processing on a picture to be colored to obtain a grayscale image;
selecting a plurality of color standard diagrams for carrying out different color designs on the picture to be colored;
coloring the gray level images respectively through the selected multiple color standard images;
identifying and segmenting a plurality of colored target areas;
and fusing the divided target areas to the corresponding positions of the picture to be colored.
Preferably, in the method for coloring a multi-target region in a picture provided in an embodiment of the present invention, the step of coloring the grayscale map by using the selected multiple color standard maps includes:
training a convolutional neural network for coloring and a target network for acquiring color information;
inputting the picture to be colored and the gray-scale image into the convolutional neural network for training;
inputting a plurality of selected color standard diagrams into the target network, acquiring color information of the color standard diagrams and inputting the color information into the convolutional neural network for training;
a loss function is calculated until the convolutional neural network outputs a plurality of color maps with the color information.
Preferably, in the method for coloring a multi-target region in a picture provided by the embodiment of the present invention, the convolutional neural network includes ten convolutional layers; the first three of the convolutional layers are used for downsampling; the last three convolutional layers are used for up-sampling;
the target network comprises four convolutional layers; the output of the target network is added to a fourth convolutional layer of the convolutional neural network.
Preferably, in the method for coloring a multi-target region in a picture provided in an embodiment of the present invention, identifying and segmenting the colored target regions specifically includes:
and identifying and segmenting a plurality of colored target areas through a Mask R-CNN network.
Preferably, in the method for coloring multiple target regions in a picture provided in the embodiment of the present invention, identifying and segmenting the colored multiple target regions through a Mask R-CNN network specifically includes:
extracting a characteristic diagram from the picture to be colored by using a ResNet-FPN network;
generating candidate frames on the feature map by using an RPN (Region Proposal Network), and marking the specific position of the target area through the candidate frames;
inputting the candidate frames into RoIAlign to extract features, and acquiring a mask corresponding to each target area;
outputting probabilities by using a softmax function to obtain a plurality of instance classes and 1 background class;
performing linear regression on the mask and the candidate bounding box;
inputting the colored gray level graph into the Mask R-CNN network for training, and segmenting a plurality of colored target areas.
Preferably, in the method for coloring a multi-target region in a picture provided in an embodiment of the present invention, an RPN network is used to generate a candidate frame on the feature map, and a specific position of the target region is marked by the candidate frame, which specifically includes:
sliding a kernel over the feature map;
mapping the center of the kernel back to the picture to be colored, generating candidate frames with various set sizes at the center, and judging whether the candidate frames comprise a target area;
and if so, finely adjusting the candidate frame to mark the specific position of the target area by the candidate frame.
Preferably, in the method for coloring a multi-target region in a picture provided in the embodiment of the present invention, fusing the divided multiple target regions to corresponding positions of the picture to be colored includes:
fusing the divided target areas to corresponding positions of the picture to be colored through a smoothing filter and smoothing edge transition.
The embodiment of the invention also provides a device for coloring multiple target areas in a picture, which comprises:
the graying processing module is used for performing graying processing on the picture to be colored to obtain a grayscale image;
the standard diagram selecting module is used for selecting a plurality of color standard diagrams for carrying out different color designs on the picture to be colored;
the grey-scale image coloring module is used for coloring the grey-scale image through the selected multiple color standard images;
the target area division module is used for identifying and dividing a plurality of colored target areas;
and the target area fusion module is used for fusing the plurality of divided target areas to the corresponding positions of the picture to be colored.
The embodiment of the invention also provides a device for coloring the multi-target areas in the picture, which comprises a processor and a memory, wherein the processor realizes the method for coloring the multi-target areas in the picture provided by the embodiment of the invention when executing the computer program stored in the memory.
The embodiment of the invention also provides a computer-readable storage medium for storing a computer program, wherein the computer program is executed by a processor to implement the coloring method for the multi-target area in the picture provided by the embodiment of the invention.
The invention provides a method, a device, equipment and a storage medium for coloring multiple target areas in a picture. The method comprises the following steps: performing graying processing on a picture to be colored to obtain a grayscale image; selecting a plurality of color standard images for applying different color designs to the picture to be colored; coloring the grayscale image with each of the selected color standard images; identifying and segmenting the plurality of colored target areas; and fusing the segmented target areas to the corresponding positions of the picture to be colored. By coloring multiple target areas in the picture to be colored with existing color standard images whose colors are already coordinated, the invention changes the color information of those target areas, saves a large amount of time, keeps the picture colors from being monotonous, achieves diversity and flexibility of coloring, and meets different design requirements.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for coloring a multi-target area in a picture according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a coloring device for multiple target areas in a picture according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a coloring method for multi-target areas in a picture, which comprises the following steps as shown in figure 1:
s101, carrying out graying processing on a picture to be colored to obtain a grayscale image;
s102, selecting a plurality of color standard diagrams for carrying out different color designs on the picture to be colored;
s103, coloring the gray level images through the selected multiple color standard images respectively;
s104, identifying and dividing a plurality of colored target areas;
and S105, fusing the plurality of divided target areas to corresponding positions of the picture to be colored.
In the method for coloring multiple target areas in a picture provided by the embodiment of the invention, graying processing (for example, with OpenCV) is first performed on the picture to be colored to obtain a grayscale image; then a plurality of color standard images are selected for designing different colors for the multiple target areas of the picture to be colored; the grayscale image is colored with the color standard images; the colored target areas are then identified and segmented; and finally, the segmented target areas are fused back into the original picture at their corresponding positions. Multiple target areas in the picture to be colored are thus colored with existing color standard images whose colors are already coordinated, the color information of those target areas is changed, a large amount of time is saved, the picture colors are kept from being monotonous, diversity and flexibility of coloring are achieved, and different design requirements are met.
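As an illustration of the graying step S101, a minimal sketch using OpenCV is given below; the file names and the choice of cv2.cvtColor are assumptions made for illustration and are not prescribed by the embodiment.

```python
# Minimal sketch of step S101: graying the picture to be colored with OpenCV.
# "poster.jpg" and "poster_gray.png" are placeholder file names.
import cv2


def to_grayscale(path: str):
    """Read the picture to be colored and return it together with its grayscale image."""
    bgr = cv2.imread(path)                         # picture to be colored (BGR)
    if bgr is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)   # graying processing
    return bgr, gray


if __name__ == "__main__":
    picture, gray_map = to_grayscale("poster.jpg")
    cv2.imwrite("poster_gray.png", gray_map)
```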
Further, in a specific implementation, in the method for coloring a multi-target region in a picture provided in the embodiment of the present invention, the step S103 is to color the grayscale images through the selected multiple color standard diagrams, and may specifically include the following steps:
step one, training a convolutional neural network M for coloring and a target network U for acquiring color information; the convolutional neural network M comprises ten convolutional layers, wherein the first three convolutional layers are used for down-sampling, and the last three convolutional layers are used for up-sampling; the target network U comprises four convolutional layers, and the output of the target network U is added into a fourth convolutional layer of the convolutional neural network M; the two trained networks can coordinate the colors of all parts in the picture to be colored and do not change the content information of the picture;
inputting the picture to be colored and the gray level image into a convolutional neural network M for training;
inputting the selected multiple color standard charts into a target network U, acquiring color information of the color standard charts, and inputting the color information into a convolutional neural network M for training;
and step four, calculating a loss function L until the convolutional neural network outputs a plurality of color images (colored gray level images) with color information.
Specifically, the network M is trained to minimize an objective function in which a loss function ℓ represents the distance between the network output and the ground truth, and the parameters θ are updated to minimize this loss. Here X ∈ R^(H×W×1) denotes the grayscale image, Y ∈ R^(H×W×2) denotes the color channels of the picture to be colored (i.e. the color original), C ∈ R^(H×W×2) denotes a selected color standard image, each sample of the training set D comprises a triple (X, C, Y), E denotes the expectation (mean), H and W denote the height and width of the picture respectively, and R denotes the real number domain. The mapping learned by the convolutional neural network M is given by

θ* = argmin_θ E_(X,C,Y)∼D [ ℓ( M(X, C; θ), Y ) ].

Similarly, with E denoting the expectation and D the distribution of the samples, the objective function minimized for the network U in order to obtain the color information is

θ_U* = argmin_(θ_U) E_(X,C,Y)∼D [ ℓ( M(X, C; θ_U), Y ) ].

Both objective functions are defined with the Huber (smooth-L1) loss, which alleviates blurring in the output and allows end-to-end learning. Its general form, for a threshold δ > 0, is

ℓ(x, y) = ½ (x − y)²             if |x − y| ≤ δ,
ℓ(x, y) = δ ( |x − y| − ½ δ )    otherwise,

where x and y are variables; substituting x = M(X, C; θ) or x = M(X, C; θ_U) together with y = Y into this expression yields the objective functions of the networks M and U respectively.
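The following PyTorch sketch illustrates one possible realization of the coloring network M (ten convolutional layers, the first three downsampling and the last three upsampling) and the target network U (four convolutional layers) whose output is added at the fourth convolutional layer of M, trained with the Huber (smooth-L1) loss. Channel widths, kernel sizes, the ReLU nonlinearities and the use of transposed convolutions for upsampling are illustrative assumptions; the embodiment only fixes the layer counts and the fusion point.

```python
# Hypothetical sketch of networks M and U; layer widths and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetNetworkU(nn.Module):
    """Four conv layers extracting color information from a color standard image C."""
    def __init__(self, width=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1),
        )

    def forward(self, c):
        # Pool to a global color code so it can be added onto M's feature maps.
        return self.layers(c).mean(dim=(2, 3), keepdim=True)


class ColoringNetworkM(nn.Module):
    """Ten conv layers: 3 downsampling, 4 at full depth, 3 upsampling; outputs 2 color channels."""
    def __init__(self, width=64):
        super().__init__()
        self.down = nn.ModuleList([
            nn.Conv2d(1, width, 3, stride=2, padding=1),      # conv 1
            nn.Conv2d(width, width, 3, stride=2, padding=1),  # conv 2
            nn.Conv2d(width, width, 3, stride=2, padding=1),  # conv 3
        ])
        self.conv4 = nn.Conv2d(width, width, 3, padding=1)    # conv 4: fused with U's output
        self.mid = nn.ModuleList([
            nn.Conv2d(width, width, 3, padding=1),            # conv 5
            nn.Conv2d(width, width, 3, padding=1),            # conv 6
            nn.Conv2d(width, width, 3, padding=1),            # conv 7
        ])
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1),  # conv 8
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1),  # conv 9
            nn.ConvTranspose2d(width, 2, 4, stride=2, padding=1),      # conv 10
        ])

    def forward(self, x_gray, color_code):
        h = x_gray
        for conv in self.down:
            h = F.relu(conv(h))
        h = F.relu(self.conv4(h) + color_code)                # add U's output at layer 4
        for conv in self.mid:
            h = F.relu(conv(h))
        for conv in self.up[:-1]:
            h = F.relu(conv(h))
        return self.up[-1](h)                                 # predicted color channels


def training_step(m, u, x_gray, c_standard, y_color, optimizer):
    """One optimization step minimizing the Huber (smooth-L1) loss against Y."""
    optimizer.zero_grad()
    prediction = m(x_gray, u(c_standard))
    loss = F.smooth_l1_loss(prediction, y_color)
    loss.backward()
    optimizer.step()
    return loss.item()
```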
Further, in a specific implementation, in the method for coloring a multi-target region in a picture provided by the embodiment of the present invention, the step S104 of identifying and segmenting the colored multiple target regions may specifically include: the colored target areas are identified and segmented by Mask R-CNN (Regions with CNN features).
It should be noted that there are many ways to segment a picture. The invention adopts the Mask R-CNN network, which can be divided into three sub-networks (a feature extraction network, a candidate frame generation network and a classification network) that share the feature map, so that the colored target areas can be accurately identified and classified. Specifically, the network has three losses, namely class, box and mask: class corresponds to classifying the object, box to placing a bounding frame around the object, and mask to labelling the pixels that belong to the same object.
Further, in a specific implementation, in the method for coloring a multi-target region in a picture provided in the embodiment of the present invention, identifying and segmenting a plurality of colored target regions through a Mask R-CNN network may specifically include the following steps:
firstly, extracting a feature map (feature map) from a picture to be colored by utilizing a ResNet-FPN network;
secondly, generating candidate frames (Regions of Interest, RoI) on the feature map by using an RPN (Region Proposal Network), and marking the specific position of each target region through the candidate frames;
thirdly, inputting the candidate frames into RoIAlign to extract features, and obtaining a mask corresponding to each target area;
fourthly, outputting probabilities by using a softmax function to obtain a plurality of instance classes and 1 background class;
fifthly, performing linear regression on the masks and the candidate frames so that they fit the targets more closely;
and sixthly, inputting the colored gray-scale image into a Mask R-CNN network for training, and segmenting a plurality of colored target areas.
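For reference, the sketch below uses the pretrained Mask R-CNN implementation in torchvision (ResNet-50 FPN backbone with RPN, RoIAlign, softmax classification and a mask branch) to obtain masks for the detected target areas. The pretrained COCO weights and the 0.5 score and mask thresholds are illustrative assumptions; in the embodiment the Mask R-CNN network is trained on the colored images.

```python
# Hypothetical sketch of the segmentation step using torchvision's Mask R-CNN.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()


def segment_targets(image_path: str, score_thresh: float = 0.5):
    """Return boolean masks, boxes and labels for the detected target areas."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]                   # dict with boxes, labels, scores, masks
    keep = out["scores"] > score_thresh
    masks = out["masks"][keep, 0] > 0.5         # (N, H, W) boolean masks, one per target area
    return masks, out["boxes"][keep], out["labels"][keep]
```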
Further, in a specific implementation, in the method for coloring multiple target regions in a picture provided by the embodiment of the present invention, in the second step, candidate frames are generated on the feature map by using the RPN network, and the specific position of the target region is marked by the candidate frames, which may specifically include the following steps: sliding a 3 × 3 kernel over the feature map; mapping the center of the kernel back into the picture to be colored and generating, at that center, candidate frames of multiple preset sizes (for example, 9 sizes obtained from 3 areas of 128², 256² and 512² combined with 3 length-width ratios of 1, 0.5 and 2), and judging whether each candidate frame contains a target area; and if so, finely adjusting the candidate frame so that it marks the specific position of the target area.
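The sketch below illustrates this candidate-frame generation for a single kernel position, producing the 9 frames from 3 areas and 3 length-width ratios; the feature stride of 16 used to map a feature-map location back onto the picture is an assumed value, not one stated by the embodiment.

```python
# Hypothetical sketch of candidate-frame (anchor) generation at one kernel center.
import numpy as np


def candidate_frames_at(cx: float, cy: float):
    """Return the 9 candidate frames (x1, y1, x2, y2) centered at (cx, cy)."""
    frames = []
    for area in (128 ** 2, 256 ** 2, 512 ** 2):
        for ratio in (1.0, 0.5, 2.0):                  # length-width (height/width) ratio
            w = np.sqrt(area / ratio)
            h = w * ratio
            frames.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(frames)                            # shape (9, 4)


# Feature-map location (i, j) maps back to picture coordinates via the assumed stride.
stride = 16
boxes = candidate_frames_at((7 + 0.5) * stride, (3 + 0.5) * stride)
print(boxes.shape)                                     # (9, 4)
```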
Further, in a specific implementation, in the method for coloring a multi-target region in an image according to an embodiment of the present invention, the step S105 of fusing the divided multiple target regions to corresponding positions of the image to be colored may specifically include: fusing the divided target areas to corresponding positions of the picture to be colored through a smoothing filter and smoothing the edge transition.
It should be noted that, by using the smoothing filter, the segmented target region (foreground image) can be smoothly fused into the picture to be colored (background image), so that the foreground image and the background image are seamlessly joined, specifically including the following steps:
an edge-directed smoothing filter a is proposed (u,v) Filtering the fused image h, wherein (b), (u) represents the filtered image as follows:
wherein, T s Represents the set of spatial positions of the origin at the center of the smoothing filter, u ═ u x ,u y ] T ,v=[v x ,v y ] T Is a spatial location represented in vector form.
The filter a(u, v) is defined as follows:

where γ is a normalization parameter for the filter coefficients, σ is a diffusion parameter, g(u) and θ(u) are respectively the edge strength and edge direction at the spatial position u, E(g(u)) is a monotonically increasing function satisfying E(0) ≥ 1 (here E(g(u)) = β·g(u) + 1, with β > 0 an edge-strength parameter), and the matrix G(u) is constructed so that the weight in the edge direction is greater than the weight in the direction perpendicular to the edge.
To obtain the coefficients of the edge-directed smoothing filter, the edge strength and edge direction at a given spatial position of the image are first computed; both can be obtained with the Sobel operator. Let h_x and h_y be the results of applying the Sobel operator to the image h in the horizontal and vertical directions. The edge direction θ and the edge strength g are then

g(u) = √( h_x(u)² + h_y(u)² ),   θ(u) = arctan( h_y(u) / h_x(u) ).
the smoothing filter coefficient can be obtained by substituting the above formula into the above formula.
The invention fuses the segmented target areas with the picture to be colored and applies the smoothing filter so that the two are joined more harmoniously. The smoothing step can be divided into two parts: one finds the position in the picture to be colored that corresponds to each colored target area, and the other makes the transition between the two smooth.
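For illustration, a simplified fusion sketch is given below: the binary mask of a segmented target area is feathered and used for alpha blending at the corresponding position. The plain Gaussian feathering stands in for the edge-directed smoothing filter described above, and the kernel size and sigma are placeholder values.

```python
# Hypothetical sketch of fusing a recolored target area back into the picture to be colored.
import cv2
import numpy as np


def fuse_region(background: np.ndarray, recolored: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend the recolored region into the background with a smooth edge transition."""
    soft = cv2.GaussianBlur(mask.astype(np.float32), (15, 15), 5.0)   # feathered mask in [0, 1]
    soft = soft[..., None]                                            # broadcast over color channels
    fused = soft * recolored.astype(np.float32) + (1.0 - soft) * background.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```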
Based on the same inventive concept, the embodiment of the present invention further provides a device for coloring multiple target regions in a picture, and the principle of the device for coloring multiple target regions in a picture to solve the problem is similar to the above method for coloring multiple target regions in a picture, so the implementation of the device for coloring multiple target regions in a picture can refer to the implementation of the method for coloring multiple target regions in a picture, and repeated details are omitted.
In specific implementation, the device for coloring a multi-target area in a picture provided by the embodiment of the present invention, as shown in fig. 2, specifically includes:
the graying processing module 11 is used for performing graying processing on the picture to be colored to obtain a grayscale image;
the standard diagram selecting module 12 is used for selecting a plurality of color standard diagrams for designing different colors of the picture to be colored;
a gray scale image coloring module 13, configured to color the gray scale image through the selected multiple color standard images;
a target area division module 14, configured to identify and divide a plurality of colored target areas;
and the target area fusion module 15 is configured to fuse the plurality of divided target areas to corresponding positions of the picture to be colored.
In the device for coloring multiple target regions in a picture provided by the embodiment of the invention, through the interaction of the above modules, multiple target regions in the picture to be colored can be colored with a plurality of existing color standard images whose colors are already coordinated, which saves a large amount of time, keeps the picture colors from being monotonous, achieves diversity and flexibility of coloring, and meets different design requirements.
For more specific working processes of the modules, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Correspondingly, the embodiment of the invention also discloses coloring equipment for the multi-target area in the picture, which comprises a processor and a memory; when the processor executes the computer program stored in the memory, the coloring method for the multi-target area in the picture disclosed by the embodiment is realized.
For more specific processes of the above method, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Further, the present invention also discloses a computer readable storage medium for storing a computer program; the computer program is executed by a processor to realize the coloring method of the multi-target area in the picture disclosed in the foregoing.
For more specific processes of the above method, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device, the equipment and the storage medium disclosed by the embodiment correspond to the method disclosed by the embodiment, so that the description is relatively simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The embodiment of the invention provides a method, a device, equipment and a storage medium for coloring multiple target areas in a picture. The method comprises the following steps: performing graying processing on a picture to be colored to obtain a grayscale image; selecting a plurality of color standard images for applying different color designs to the picture to be colored; coloring the grayscale image with each of the selected color standard images; identifying and segmenting the plurality of colored target areas; and fusing the segmented target areas to the corresponding positions of the picture to be colored. By coloring multiple target areas in the picture to be colored with existing color standard images whose colors are already coordinated, the invention changes the color information of those target areas, saves a large amount of time, keeps the picture colors from being monotonous, achieves diversity and flexibility of coloring, and meets different design requirements.
Finally, it is further noted that, herein, relational terms are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The method, the apparatus, the device and the storage medium for coloring multi-target regions in a picture provided by the present invention are described in detail above, a specific example is applied in the present document to illustrate the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (9)
1. A coloring method for multi-target areas in a picture is characterized by comprising the following steps:
carrying out graying processing on a picture to be colored to obtain a grayscale image;
selecting a plurality of color standard diagrams for carrying out different color designs on the picture to be colored;
coloring the gray level images respectively through the selected multiple color standard images;
identifying and segmenting a plurality of colored target areas;
fusing the divided target areas to corresponding positions of the picture to be colored;
the coloring of the gray level images through the selected multiple color standard images specifically comprises:
training a convolutional neural network for coloring and a target network for acquiring color information;
inputting the picture to be colored and the gray-scale image into the convolutional neural network for training;
inputting a plurality of selected color standard diagrams into the target network, acquiring color information of the color standard diagrams and inputting the color information into the convolutional neural network for training;
a loss function is calculated until the convolutional neural network outputs a plurality of color maps with the color information.
2. The method for coloring multi-target regions in the picture according to claim 1, wherein the convolutional neural network comprises ten convolutional layers; the first three of the convolutional layers are used for downsampling; the last three convolutional layers are used for up-sampling;
the target network comprises four convolutional layers; the output of the target network is added to a fourth convolutional layer of the convolutional neural network.
3. The method for coloring multiple target regions in a picture according to claim 1, wherein identifying and segmenting the colored multiple target regions specifically comprises:
and identifying and segmenting a plurality of colored target areas through a Mask R-CNN network.
4. The method for coloring multiple target regions in a picture according to claim 3, wherein the method for identifying and segmenting the colored multiple target regions through a Mask R-CNN network specifically comprises:
extracting a feature map from the picture to be colored by using a ResNet-FPN network;
generating candidate frames on the feature map by using an RPN (Region Proposal Network), and marking the specific position of the target area through the candidate frames;
inputting the candidate frame into RoIAlign extraction features, and acquiring a mask corresponding to each target area;
outputting probabilities by using a softmax function to obtain a plurality of instance classes and 1 background class;
performing linear regression on the mask and the candidate bounding box;
inputting the colored gray level image into the Mask R-CNN network for training, and segmenting a plurality of colored target areas.
5. The method for coloring multi-target regions in a picture according to claim 4, wherein an RPN network is used to generate a candidate frame on the feature map, and the specific location of the target region is marked by the candidate frame, which specifically includes:
sliding a kernel over the feature map;
mapping the center of the kernel back to the picture to be colored, generating candidate frames with various set sizes at the center, and judging whether the candidate frames comprise a target area;
and if so, finely adjusting the candidate frame to mark the specific position of the target area by the candidate frame.
6. The method for coloring multi-target regions in a picture according to claim 1, wherein fusing the divided target regions to corresponding positions of the picture to be colored specifically comprises:
fusing the divided target areas to corresponding positions of the picture to be colored through a smoothing filter and smoothing edge transition.
7. A device for coloring multiple target areas in a picture, characterized by comprising:
the graying processing module is used for performing graying processing on the picture to be colored to obtain a grayscale image;
the standard diagram selecting module is used for selecting a plurality of color standard diagrams for designing different colors of the picture to be colored;
the grey-scale image coloring module is used for coloring the grey-scale image through the selected multiple color standard images;
the target area division module is used for identifying and dividing a plurality of colored target areas;
the target area fusion module is used for fusing the plurality of divided target areas to the corresponding positions of the picture to be colored;
the grey-scale map coloring module is specifically used for training a convolutional neural network for coloring and a target network for acquiring color information;
inputting the picture to be colored and the gray-scale image into the convolutional neural network for training;
inputting a plurality of selected color standard diagrams into the target network, acquiring color information of the color standard diagrams and inputting the color information into the convolutional neural network for training;
a loss function is calculated until the convolutional neural network outputs a plurality of color maps with the color information.
8. A coloring device for multi-target areas in pictures, which is characterized by comprising a processor and a memory, wherein the processor executes a computer program stored in the memory to realize the coloring method for multi-target areas in pictures according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method for coloring a multi-target region in a picture according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810717335.3A CN108921916B (en) | 2018-07-03 | 2018-07-03 | Method, device and equipment for coloring multi-target area in picture and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108921916A CN108921916A (en) | 2018-11-30 |
CN108921916B true CN108921916B (en) | 2022-09-16 |
Family
ID=64423458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810717335.3A Active CN108921916B (en) | 2018-07-03 | 2018-07-03 | Method, device and equipment for coloring multi-target area in picture and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108921916B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635740B (en) * | 2018-12-13 | 2020-07-03 | 深圳美图创新科技有限公司 | Video target detection method and device and image processing equipment |
CN109753931A (en) * | 2019-01-04 | 2019-05-14 | 广州广电卓识智能科技有限公司 | Convolutional neural networks training method, system and facial feature points detection method |
CN109816669A (en) * | 2019-01-30 | 2019-05-28 | 云南电网有限责任公司电力科学研究院 | A kind of improvement Mask R-CNN image instance dividing method identifying power equipments defect |
CN110147778B (en) * | 2019-05-27 | 2022-09-30 | 江西理工大学 | Rare earth ore mining identification method, device, equipment and storage medium |
CN110458921B (en) * | 2019-08-05 | 2021-08-03 | 腾讯科技(深圳)有限公司 | Image processing method, device, terminal and storage medium |
CN110852176A (en) * | 2019-10-17 | 2020-02-28 | 陕西师范大学 | High-resolution three-number SAR image road detection method based on Mask-RCNN |
CN111127483B (en) * | 2019-12-24 | 2023-09-15 | 新方正控股发展有限责任公司 | Color picture processing method, device, equipment, storage medium and system |
CN113570678A (en) * | 2021-01-20 | 2021-10-29 | 腾讯科技(深圳)有限公司 | Image coloring method and device based on artificial intelligence and electronic equipment |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101706965A (en) * | 2009-11-03 | 2010-05-12 | 上海大学 | Method for colorizing regional image on basis of Gaussian mixture model |
CN101872473A (en) * | 2010-06-25 | 2010-10-27 | 清华大学 | Multiscale image natural color fusion method and device based on over-segmentation and optimization |
EP3038057A1 (en) * | 2014-12-22 | 2016-06-29 | Thomson Licensing | Methods and systems for color processing of digital images |
CN107330956A (en) * | 2017-07-03 | 2017-11-07 | 广东工业大学 | A kind of unsupervised painting methods of caricature manual draw and device |
CN107862664A (en) * | 2017-11-15 | 2018-03-30 | 广东交通职业技术学院 | A kind of image non-photorealistic rendering method and system |
Non-Patent Citations (6)
Title |
---|
Mask R-CNN; Kaiming He et al.; 2017 IEEE International Conference on Computer Vision; 2017-12-25; pp. 2980-2988 *
Research on a colorization algorithm for black-and-white images using a regression neural network; Yu Zitang; 《计算机应用研究》; 2012-04-30; Vol. 29, No. 4; pp. 1595-1597 *
Colorization method for multi-band fused images based on a convolutional neural network; Han Ze et al.; 《测试技术学报》; 2018-06-30; No. 03; pp. 201-206 *
Grayscale image colorization method based on image segmentation and color extension; Zhu Libo et al.; 《微型电脑应用》; 2009-05-20; No. 05; pp. 4-6 *
Research on night-vision image colorization based on a fused image feature library; He Yongqiang et al.; 《激光与红外》; 2012-12-20; No. 12; pp. 1393-1397 *
Research on image processing methods based on visual perception; Xiang Yao; 《中国优秀博硕士学位论文全文数据库(博士)信息科技辑》; 2011-12-15; pp. 33-43 *
Also Published As
Publication number | Publication date |
---|---|
CN108921916A (en) | 2018-11-30 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |