CN107025457A - A kind of image processing method and device
- Publication number: CN107025457A (application CN201710199165.XA)
- Authority: CN (China)
- Prior art keywords: image, map, preset, color, segmentation
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V 10/56: Extraction of image or video features relating to colour
- G06F 18/253: Fusion techniques of extracted features
Abstract
The embodiment of the invention discloses an image processing method and device. After receiving an image processing request, the embodiment of the invention can obtain a semantic segmentation model corresponding to the element type that needs to be replaced according to the indication of the request, predict the probability that each pixel in the image belongs to the element type according to the model to obtain a probability map, optimize the probability map based on a conditional random field, and fuse the image with preset element materials using the segmentation effect map obtained after optimization, so as to replace a certain element-type part of the image with the preset element materials. The scheme can reduce the probability of false detection and missed detection, greatly improve the accuracy of segmentation, and improve the fusion effect of the image.
Description
Technical Field
The invention relates to the technical field of computers, in particular to an image processing method and device.
Background
With the popularization of intelligent mobile terminals, shooting and recording anytime and anywhere has gradually become a part of people's life; at the same time, image processing, such as beautifying images or applying special effects to them, has become more and more popular.
In special effects processing, element replacement is one of the most common techniques. Taking the replacement of sky elements as an example, in the prior art a threshold determination is generally performed based on information such as the color and position of the sky in an image; sky segmentation is then performed on the image according to the determination result, and the sky area obtained after segmentation is replaced with other elements, such as fireworks, reindeer, or a two-dimensional space, so that the processed image achieves a special effect.
In the process of research and practice of the prior art, the inventor of the present invention found that, because the prior art mainly performs threshold determination based on information such as color and position when segmenting the image into regions, false detection and missed detection are easily caused, which greatly affects the accuracy of segmentation and the fusion effect of the image, for example by producing distortion or insufficient smoothness.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, which can improve segmentation accuracy and improve the fusion effect of images.
The embodiment of the invention provides an image processing method, which comprises the following steps:
receiving an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
obtaining a semantic segmentation model corresponding to the element type, wherein the semantic segmentation model is formed by training a deep neural network;
predicting the probability of each pixel in the image belonging to the element type according to the semantic segmentation model to obtain an initial probability map;
optimizing the initial probability map based on a conditional random field to obtain a segmentation effect map;
and fusing the image with preset element materials according to the segmentation effect map to obtain a processed image.
Correspondingly, an embodiment of the present invention further provides an image processing apparatus, including:
a receiving unit configured to receive an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
the acquisition unit is used for acquiring a semantic segmentation model corresponding to the element type, and the semantic segmentation model is formed by training a deep neural network;
the prediction unit is used for predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map;
the optimization unit is used for optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map;
and the fusion unit is used for fusing the image and preset element materials according to the segmentation effect map to obtain a processed image.
After receiving an image processing request, the embodiment of the invention can obtain a semantic segmentation model corresponding to an element type to be replaced according to the indication of the request, predict the probability of each pixel in an image belonging to the element type according to the model to obtain an initial probability map, then optimize the initial probability map based on a conditional random field, and fuse the image and a preset element material by using a segmentation effect map obtained after optimization, thereby achieving the purpose of replacing a certain element type part in the image with the preset element material; because the semantic segmentation model in the scheme is mainly trained by the deep neural network, and when the model is used for carrying out semantic segmentation on the image, the probability that each pixel belongs to the element type is predicted not only based on information such as color, position and the like, so that the probability of false detection and missed detection can be greatly reduced compared with the existing scheme; in addition, the scheme can also optimize the segmented initial probability map by utilizing the conditional random field, so that a more precise segmentation result can be obtained, the segmentation accuracy is greatly improved, the situation of image distortion is favorably reduced, and the image fusion effect is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative efforts.
FIG. 1a is a schematic view of a scene of an image processing method according to an embodiment of the present invention;
FIG. 1b is a flowchart of an image processing method provided by an embodiment of the invention;
FIG. 2a is another flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2b is a diagram illustrating an example of an interface of an image processing request in the image processing method according to the embodiment of the present invention;
FIG. 2c is a diagram illustrating an example of sky segmentation in an image processing method according to an embodiment of the present invention;
FIG. 2d is a process flow diagram of an image processing method according to an embodiment of the present invention;
FIG. 3a is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of another structure of an image processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image processing method and an image processing device, wherein the image processing device can be specifically integrated in equipment such as a server.
For example, referring to FIG. 1a, when a user needs to process a certain image, an image processing request indicating information such as the image that needs to be processed and the type of element that needs to be replaced may be transmitted to the server through the terminal. After receiving the image processing request, the server may obtain a semantic segmentation model (trained by a deep neural network) corresponding to the element type, and then predict, according to the semantic segmentation model, the probability that each pixel in the image belongs to the element type to obtain an initial probability map. Thereafter, the server may further optimize the initial probability map by using a conditional random field or the like to obtain a finer segmentation result (i.e., a segmentation effect map), and then fuse the image with preset element materials according to that result: for example, a first color portion (e.g., the white portion) of the segmentation effect map may be combined with a replaceable element material by a fusion algorithm, a second color portion (e.g., the black portion) may be combined with the image, the two combination results may then be synthesized, and the processed image provided to the terminal, and so on.
The following are detailed below. The numbers in the following examples are not intended to limit the order of preference of the examples.
First embodiment
The present embodiment will be described from the viewpoint of an image processing apparatus which can be specifically integrated in a server or the like.
An image processing method comprising: the method comprises the steps of receiving an image processing request, wherein the image processing request indicates an image to be processed and an element type to be replaced, obtaining a semantic segmentation model corresponding to the element type, the semantic segmentation model is formed by training a deep neural network, predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map, optimizing the initial probability map based on a conditional random field to obtain a segmentation effect map, and fusing the image and preset element materials according to the segmentation effect map to obtain a processed image.
As shown in fig. 1b, the specific flow of the image processing method may be as follows:
101. an image processing request is received.
For example, an image processing request sent by a terminal or other network-side device may be specifically received, and so on. The image processing request may indicate information such as an image to be processed and an element type to be replaced.
The element type refers to a category of elements, and an element refers to a basic element that can carry visual information, for example, if the image processing request indicates that the type of the element that needs to be replaced is "sky", it indicates that all sky parts in the image need to be replaced; for another example, if the image processing request indicates that the element type that needs to be replaced is "portrait," this indicates that all portrait portions in the image need to be replaced, and so on.
102. And acquiring a semantic segmentation model corresponding to the element type, wherein the semantic segmentation model is formed by training a deep neural network.
For example, if the received image processing request indicates that the element type requiring replacement is "sky" in step 101, a semantic segmentation model corresponding to "sky" may be acquired, and if the received image processing request indicates that the element type requiring replacement is "portrait" in step 101, a semantic segmentation model corresponding to "portrait" may be acquired, and so on.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or other storage devices, and acquired by the image processing apparatus when needed, or the semantic segmentation model may be built by the image processing apparatus, that is, before the step "acquiring the semantic segmentation model corresponding to the element type", the image processing method may further include:
establishing a semantic segmentation model corresponding to the element type, which may specifically be as follows:
and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element type.
For example, taking the establishment of a semantic segmentation model corresponding to "sky" as an example, a certain number (for example, 8000) of pictures containing sky may be collected; then, according to these pictures, a preset semantic segmentation initial model is fine-tuned by using a deep neural network, and the finally obtained model is the semantic segmentation model corresponding to "sky".
It should be noted that the preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
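For illustration only, the following is a minimal fine-tuning sketch assuming PyTorch and torchvision; the embodiment does not mandate any particular framework, architecture, or hyper-parameters. A general-scene pretrained segmentation network serves as the preset semantic segmentation initial model, and its classification head is replaced with a two-class (element/background) head before fine-tuning on the collected pictures:

```python
# A sketch only; framework, architecture, and hyper-parameters are assumptions,
# not specified by the embodiment.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Pretrained general-scene model acts as the "preset semantic segmentation initial model".
model = fcn_resnet50(weights="DEFAULT")
# Replace the final 1x1 classifier so the model predicts 2 classes: element vs. background.
model.classifier[4] = nn.Conv2d(512, 2, kernel_size=1)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """images: (N, 3, H, W) floats; masks: (N, H, W) long tensor with labels in {0, 1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]        # (N, 2, H, W) per-pixel class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```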
103. Predicting the probability of each pixel in the image belonging to the element type according to the semantic segmentation model to obtain an initial probability map; for example, the following may be specifically mentioned:
(1) and importing the image into the semantic segmentation model to predict the probability that each pixel in the image belongs to the element type.
For example, if the element type is "sky", then at this time, the image may be imported into a semantic segmentation model corresponding to "sky" to predict the probability that each pixel in the image belongs to "sky".
For another example, if the element type is "portrait", then at this time, the image may be imported into a semantic segmentation model corresponding to the "portrait", so as to predict the probability that each pixel in the image belongs to the "portrait", and so on.
(2) And setting the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map.
For example, it may be specifically determined whether the probability is greater than a preset threshold; if so, the color of the corresponding pixel on a preset mask is set as a first color, and if not, the color of the corresponding pixel on the preset mask is set as a second color. After the colors of all pixels of the image have been set on the preset mask, the preset mask with the set colors is output to obtain an initial probability map.
That is, a mask including a first color and a second color may be obtained, where the first color in the mask indicates that the probability that the corresponding pixel belongs to the element type is relatively high, and the second color indicates that the probability that the corresponding pixel belongs to the element type is relatively low.
For example, if the probability that a certain pixel a belongs to the "sky" is greater than 80%, the color of the pixel a on the preset mask may be set to be a first color, otherwise, if the probability that the pixel a belongs to the "sky" is less than or equal to 80%, the color of the pixel a on the preset mask may be set to be a second color, and so on.
The first color and the second color may also be determined according to the requirements of practical applications, for example, the first color may be set to white, and the second color may be set to black, or the first color may also be set to pink, and the second color may also be set to green, and so on. For convenience of description, in the embodiments of the present invention, the first color is white, and the second color is black.
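As a minimal illustration of this step (assuming NumPy; the threshold, colors, and function name are examples, not requirements of the embodiment), the per-pixel probabilities can be painted onto a preset mask as follows:

```python
# A sketch under the assumptions stated above.
import numpy as np

def probability_to_initial_map(prob: np.ndarray, threshold: float = 0.8,
                               first_color: int = 255,   # e.g. white
                               second_color: int = 0     # e.g. black
                               ) -> np.ndarray:
    """prob: (H, W) array of per-pixel probabilities of belonging to the element type."""
    mask = np.full(prob.shape, second_color, dtype=np.uint8)  # preset mask in the second color
    mask[prob > threshold] = first_color                      # first color where probability is high
    return mask                                               # the initial probability map
```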
104. The initial probability map is optimized based on a conditional random field (CRF) to obtain a segmentation effect map.
For example, the pixels in the initial probability map may be mapped to nodes in the conditional random field, the similarity of edge constraints between the nodes is determined, and the segmentation result of the pixels in the initial probability map is adjusted according to the similarity of the edge constraints, so as to obtain a segmentation effect map.
A conditional random field is a discriminative probability model and a kind of random field. Like a Markov random field, a conditional random field is an undirected graphical model: nodes (i.e., vertices) in the model represent random variables, and the edges between nodes represent the dependency relationships between the random variables. A conditional random field can express long-distance dependencies and overlapping features, can better handle problems such as labeling (classification) bias, and can perform global normalization over all features to obtain a globally optimal solution; therefore, the initial probability map can be optimized by using a conditional random field to optimize the segmentation result.
It should be noted that, since the segmentation effect map is optimized from the initial probability map, the segmentation effect map is also a mask including the first color and the second color.
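For illustration, the following is a minimal dense-CRF refinement sketch assuming the third-party pydensecrf package; the embodiment only requires a conditional random field, not this particular library or these parameter values. The pairwise terms tie each pixel's label to the colors and positions of its neighbours, which is how the edge-constraint similarity described above refines the segmentation:

```python
# A sketch only; library choice and parameter values are assumptions.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(prob: np.ndarray, rgb: np.ndarray, iters: int = 5) -> np.ndarray:
    """prob: (H, W) element probabilities; rgb: (H, W, 3) uint8 original image."""
    h, w = prob.shape
    softmax = np.stack([1.0 - prob, prob]).astype(np.float32)     # (2, H, W): background, element
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(softmax))                 # unary term from the initial map
    d.addPairwiseGaussian(sxy=3, compat=3)                        # position-based smoothness
    d.addPairwiseBilateral(sxy=80, srgb=13,                       # color- and position-based term
                           rgbim=np.ascontiguousarray(rgb), compat=10)
    q = np.array(d.inference(iters)).reshape(2, h, w)
    return (q[1] > q[0]).astype(np.uint8) * 255                   # segmentation effect map (white/black)
```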
105. Fusing the image with preset element materials according to the segmentation effect map to obtain a processed image; for example, this may specifically be as follows:
(1) and acquiring the replaceable element material according to a preset strategy.
The preset policy may be set according to requirements of actual applications, for example, a material selection instruction triggered by a user may be received, and then, corresponding materials are obtained from a material library according to the material selection instruction, and the corresponding materials are used as replaceable element materials.
Optionally, in order to increase the diversity of the element material, the element material may be obtained in a random interception manner, that is, the step "obtaining the replaceable element material according to the preset policy" may also include:
acquiring a candidate image, randomly intercepting the candidate image, and taking the intercepted image as a replaceable element material, and the like.
The candidate image may be obtained over the network, or may be uploaded by the user, or may even be directly captured on a terminal screen or a web page by the user and then provided to the image processing apparatus, and so on, which are not described herein again.
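A minimal sketch of the random-interception strategy, assuming NumPy (the crop size and helper name are illustrative):

```python
# A sketch only; the candidate image is assumed to be at least as large as the crop.
import numpy as np

def random_crop(candidate: np.ndarray, out_h: int, out_w: int,
                rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """candidate: (H, W, 3) image; returns an (out_h, out_w, 3) element material."""
    h, w = candidate.shape[:2]
    top = int(rng.integers(0, h - out_h + 1))
    left = int(rng.integers(0, w - out_w + 1))
    return candidate[top:top + out_h, left:left + out_w].copy()
```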
(2) And combining the first color part in the segmentation effect map with the acquired element materials through a fusion algorithm to obtain a first combination map.
Because the probability that the pixels of the first color part belong to the element type to be replaced is high, at this time, the part can be combined with the acquired element materials through a fusion algorithm, that is, the pixels of the part can be replaced by the acquired element materials.
(3) And combining the second color part in the segmentation effect map with the image through a fusion algorithm to obtain a second combination map.
Since the probability that the pixels of the second color portion belong to the element type to be replaced is low, the portion can be combined with the original image through a fusion algorithm, that is, the pixels of the portion are retained.
Optionally, in order to improve the fusion effect or implement other special effects, before the second color portion is combined with the image, the image may be further subjected to certain preprocessing, such as color transformation, contrast adjustment, brightness adjustment, saturation adjustment, and/or adding other special-effect masks; the second color portion is then combined with the preprocessed image by a fusion algorithm to obtain a second combination map.
(4) And synthesizing the first combination map and the second combination map to obtain a processed image.
Therefore, the element that needs to be replaced in the image can be replaced by the element material, for example, the "sky" in the image is replaced by the "space", and the like, and details are not repeated here.
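For illustration, steps (1) to (4) above can be sketched as a single masked combination, assuming NumPy and a white/black segmentation effect map in which white marks the element to be replaced; the names and the pixel-wise formulation are illustrative, and the embodiment does not prescribe a particular fusion algorithm:

```python
# A sketch under the assumptions stated above.
import numpy as np

def fuse(image: np.ndarray, material: np.ndarray, effect_map: np.ndarray) -> np.ndarray:
    """image, material: (H, W, 3) uint8; effect_map: (H, W) uint8 in {0, 255}."""
    alpha = (effect_map.astype(np.float32) / 255.0)[..., None]
    first_combination = alpha * material          # element material kept where the map is white
    second_combination = (1.0 - alpha) * image    # original pixels retained where the map is black
    return (first_combination + second_combination).astype(np.uint8)  # synthesized result
```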
Optionally, in order to make the fusion result more real and avoid noise or loss caused by inaccurate probability prediction, the segmentation effect map may be further processed to a certain extent before fusion, so that the segmentation boundary thereof is smoother and color transition at the joint of the replacement region may be more natural; before the step "fusing the image with the preset element material according to the segmentation effect map to obtain the processed image", the image processing method may further include:
and carrying out Appearance Model (Appearance Model) algorithm and/or image morphological operation processing on the segmentation effect map to obtain a processed segmentation effect map.
Then, the step "fusing the image with preset element materials according to the segmentation effect map to obtain a processed image" may include: and according to the processed segmentation effect graph, fusing the image with preset element materials, such as transparency (Alpha) fusion, to obtain a processed image.
The appearance model algorithm is a feature point extraction method widely applied in the field of pattern recognition; it performs statistical modeling on shape and texture and fuses the two statistical models into an appearance model. The image morphological operation processing may include noise reduction processing and/or connected-domain analysis. After processing with the appearance model algorithm or image morphological operations, the segmentation boundary of the segmentation effect map becomes smoother, and the color transition at the joint of the replacement region becomes more natural.
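As a minimal illustration of the morphological part of this processing (assuming OpenCV; the kernel size and area threshold are examples), small holes can be closed, speckle noise removed, and small connected components discarded before fusion:

```python
# A sketch only; parameter values are assumptions, not values from the embodiment.
import cv2
import numpy as np

def clean_effect_map(effect_map: np.ndarray, min_area: int = 500) -> np.ndarray:
    """effect_map: (H, W) uint8 mask in {0, 255}."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(effect_map, cv2.MORPH_CLOSE, kernel)  # fill small holes
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)       # remove speckle noise
    n, labels, stats, _ = cv2.connectedComponentsWithStats(opened)  # connected-domain analysis
    out = np.zeros_like(opened)
    for i in range(1, n):                                           # keep sufficiently large regions
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255
    return out
```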
It should be noted that "Alpha fusion" in the embodiments of the present invention refers to fusion based on Alpha values, where Alpha is mainly used to specify the transparency level of a pixel. In general, 8 bits may be reserved for the alpha portion of each pixel, with the effective value of alpha in the range of [0, 255], with [0, 255] representing opacity [ 0%, 100% ]. Therefore, a pixel alpha of 0 indicates complete transparency, a pixel alpha of 128 indicates 50% transparency, and a pixel alpha of 255 indicates complete opacity.
As can be seen from the above, after receiving an image processing request, the embodiment may obtain, according to an instruction of the request, a semantic segmentation model corresponding to an element type to be replaced, predict, according to the model, a probability that each pixel in an image belongs to the element type, to obtain an initial probability map, optimize the initial probability map based on a conditional random field, and fuse the image and a preset element material by using a segmentation effect map obtained after the optimization, thereby achieving a purpose of replacing a certain element type part in the image with the preset element material; because the semantic segmentation model in the scheme is mainly trained by the deep neural network, and when the model is used for carrying out semantic segmentation on the image, the probability that each pixel belongs to the element type is predicted not only based on information such as color, position and the like, so that the probability of false detection and missed detection can be greatly reduced compared with the existing scheme; in addition, the scheme can also optimize the segmented initial probability map by utilizing the conditional random field, so that a more precise segmentation result can be obtained, the segmentation accuracy is greatly improved, the situation of image distortion is favorably reduced, and the image fusion effect is improved.
Second embodiment
The method described in the first embodiment is further illustrated by way of example.
In this embodiment, the image processing apparatus is specifically integrated in a server, and the element to be replaced is "sky" as an example.
As shown in fig. 2a and 2d, a specific flow of an image processing method may be as follows:
201. the terminal sends an image processing request to the server, wherein the image processing request can indicate information such as images needing to be processed and element types needing to be replaced.
The image processing request may be triggered in various ways, for example, by clicking or sliding a trigger key on a web page or a client interface, or by inputting a preset instruction.
For example, taking triggering by clicking a trigger key as an example, referring to FIG. 2b, when a user needs to replace the sky part in picture A with another element, such as a "space" element, or to add "clouds", the user may upload picture A and click the trigger key "play once" to trigger generation of an image processing request and send it to the server, where the image processing request indicates that the image to be processed is picture A and the type of the element to be replaced is "sky".
It should be noted that, in this embodiment, the element to be replaced is taken as "sky" for example, and it should be understood that the type of the element to be replaced may also be other types, such as "portrait", "eye", or "plant", and the like, and the implementation thereof is similar to this, and is not described herein again.
202. After receiving the image processing request, the server acquires a semantic segmentation model corresponding to the sky, wherein the semantic segmentation model is formed by training a deep neural network.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or other storage devices and acquired by the image processing apparatus when it needs to be used, or the semantic segmentation model may be built by the image processing apparatus. For example, training data containing the element type may be acquired, e.g., a certain number of pictures containing the sky are collected; then, according to the training data (i.e., the pictures containing the sky), a preset semantic segmentation initial model is trained by using a deep neural network to obtain the semantic segmentation model corresponding to "sky".
It should be noted that the preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
203. The server imports the image into the semantic segmentation model to predict the probability that each pixel in the image belongs to the "sky".
For example, if the received image processing request indicates that the image to be processed is picture A, then picture A may be imported into the semantic segmentation model corresponding to "sky" as a three-channel color image to predict the probability that each pixel in picture A belongs to "sky", and then step 204 is executed.
204. And the server sets the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map.
For example, it may be specifically determined whether the probability is greater than a preset threshold; if so, the color of the corresponding pixel on a preset mask is set as a first color, and if not, the color of the corresponding pixel on the preset mask is set as a second color. After the colors of all pixels of the image have been set on the preset mask, the preset mask with the set colors is output to obtain an initial probability map.
For example, if the probability that a certain pixel K belongs to the "sky" is greater than 80%, the color of the pixel K on the preset mask may be set to be a first color, otherwise, if the probability that a certain pixel K belongs to the "sky" is less than or equal to 80%, the color of the pixel K on the preset mask may be set to be a second color, and so on.
The first color and the second color may also be determined according to the requirements of practical applications, for example, the first color may be set to white, and the second color may be set to black, or the first color may also be set to pink, and the second color may also be set to green, and so on.
For example, if the first color is set to white and the second color is set to black, the initial probability map shown in FIG. 2c can be obtained after picture A is imported into the semantic segmentation model.
205. And the server optimizes the initial probability map based on the conditional random field to obtain a segmentation effect map.
For example, the server may map pixels in the initial probability map to nodes in the conditional random field, determine similarity of edge constraints between the nodes, and adjust a segmentation result of the pixels in the initial probability map according to the similarity of the edge constraints to obtain a segmentation effect map.
Because the conditional random field is an undirected graph model, each pixel in the image can correspond to a node in the conditional random field, and prior information including parameters such as color, texture, and position is preset, so that nodes whose edge constraints are similar obtain similar segmentation results. Therefore, the segmentation results of the pixels in the initial probability map can be adjusted according to the similarity of the edge constraints, making the sky segmentation result more precise; for example, referring to FIG. 2c, after the initial probability map is optimized based on the conditional random field, a segmentation effect map with a more precise segmentation result can be obtained.
206. The server performs appearance model algorithm and/or image morphology operation processing on the segmentation effect map to obtain a processed segmentation effect map, and then executes step 207.
The image morphology operation processing may include processing such as noise reduction processing and/or connected component analysis. By the segmentation effect graph after processing such as an appearance model algorithm or image morphology operation, the segmentation boundary can be smoother, and the color transition at the joint of the replacement region can be more natural.
It should be noted that step 206 is optional, and if step 206 is not executed, step 207 may be directly executed after step 205 is executed, and in step 208, the segmentation effect map, the image, and the element material are fused by a fusion algorithm to obtain a processed image.
207. And the server acquires the replaceable element material according to a preset strategy.
The preset policy may be set according to requirements of actual applications, for example, a material selection instruction triggered by a user may be received, and then, corresponding materials are obtained from a material library according to the material selection instruction, and the corresponding materials are used as replaceable element materials.
Optionally, in order to increase the diversity of the element material, the element material may also be obtained by a random interception method, for example, the server may obtain a candidate image, then perform random interception on the candidate image, and use the intercepted image as a replaceable element material, and so on.
The candidate image may be obtained over the network, or may be uploaded by the user, or may even be directly captured on a terminal screen or a web page by the user and then provided to the image processing apparatus, and so on, which are not described herein again.
208. And the server fuses the processed segmentation effect graph, the processed image and the element material through a fusion algorithm to obtain the processed image.
For example, if the first color is white and the second color is black, the server may combine the white portion of the segmentation effect map with the acquired element material through a fusion algorithm to obtain a first combination map, combine the black portion of the segmentation effect map with picture A through the fusion algorithm to obtain a second combination map, and then synthesize the first combination map and the second combination map to obtain the processed image.
Because the probability that the pixels of the white portion belong to the "sky" is high, the pixels of the white portion may be replaced with the acquired element material through the fusion algorithm; and because the probability that the pixels of the black portion belong to the "sky" is low, the black portion may be combined with the original picture A through the fusion algorithm, that is, the pixels of the black portion are retained. Therefore, after the first combination map and the second combination map are synthesized, the "sky" in the original picture A is replaced with the corresponding element material, for example, the "sky" in picture A is replaced with a "Christmas night sky", and so on; see FIG. 2d, which is not described herein again.
It should be noted that, optionally, as shown in FIG. 2d, in order to improve the fusion effect or implement other special effects, before the black portion (i.e., the second color portion) is combined with picture A, certain preprocessing may be performed on picture A, such as color transformation, contrast adjustment, brightness adjustment, saturation adjustment, and/or adding other special-effect masks; the black portion is then combined with the preprocessed picture A through the fusion algorithm to obtain the second combination map, which is not described herein again.
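For illustration only, a minimal preprocessing sketch assuming OpenCV (all adjustment values and the tint color are arbitrary examples): contrast and brightness are adjusted and a simple special-effect tint blended in before the retained region is combined:

```python
# A sketch only; all parameter values are illustrative assumptions.
import cv2
import numpy as np

def preprocess(image: np.ndarray, contrast: float = 1.1, brightness: int = -10,
               tint=(40, 0, 20), tint_weight: float = 0.15) -> np.ndarray:
    """image: (H, W, 3) uint8 (BGR, as OpenCV loads it)."""
    adjusted = cv2.convertScaleAbs(image, alpha=contrast, beta=brightness)
    overlay = np.zeros_like(adjusted)
    overlay[:] = tint                                   # a flat special-effect mask color
    return cv2.addWeighted(adjusted, 1.0 - tint_weight, overlay, tint_weight, 0)
```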
209. And the server sends the processed image to the terminal.
For example, the processed image may be displayed on an interface of the corresponding client. Optionally, the server may further provide a corresponding saving path and/or sharing interface for the user to save and/or share the image; for example, the processed image may be saved in the cloud or locally (i.e., on the terminal), shared to a microblog or a friend circle, and/or inserted into a chat conversation interface of an instant chat tool, and so on, which are not described herein again.
As can be seen from the above, after an image processing request is received, a semantic segmentation model corresponding to "sky" can be obtained according to an instruction of the request, a probability that each pixel in an image belongs to "sky" is predicted according to the model to obtain an initial probability map, then, the initial probability map is optimized based on a conditional random field, and the image and a preset element material are fused by using a segmentation effect map obtained after the optimization, so that the purpose of replacing the "sky" part in the image with the preset element material is achieved; because the semantic segmentation model in the scheme is mainly trained by the deep neural network, and when the model is used for carrying out semantic segmentation on the image, the probability that each pixel belongs to the element type is predicted not only based on information such as color, position and the like, so that the probability of false detection and missed detection can be greatly reduced compared with the existing scheme; in addition, the scheme can also optimize the segmented initial probability map by utilizing the conditional random field, so that a more precise segmentation result can be obtained, the segmentation accuracy is greatly improved, the situation of image distortion is favorably reduced, and the image fusion effect is improved.
Third embodiment
In order to better implement the above method, an embodiment of the present invention further provides an image processing apparatus, which may be specifically integrated in a server or the like.
As shown in fig. 3a, the image processing apparatus includes a receiving unit 301, an obtaining unit 302, a predicting unit 303, an optimizing unit 304, and a fusing unit 305, as follows:
(1) a receiving unit 301;
a receiving unit 301 configured to receive an image processing request indicating information such as an image that needs to be processed and an element type that needs to be replaced.
(2) An acquisition unit 302;
an obtaining unit 302, configured to obtain a semantic segmentation model corresponding to the element type, where the semantic segmentation model is trained by a deep neural network.
For example, if the image processing request received by the receiving unit 301 indicates that the element type requiring replacement is "sky", at this time, the obtaining unit 302 may obtain a semantic segmentation model corresponding to "sky", and if the image processing request received by the receiving unit 301 indicates that the element type requiring replacement is "portrait", at this time, the obtaining unit 302 may obtain a semantic segmentation model corresponding to "portrait", and so on, which are not listed here.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or other storage devices, and acquired by the image processing apparatus when needed, or the semantic segmentation model may be built by the image processing apparatus, that is, as shown in fig. 3b, the image processing apparatus may further include a model building unit 306, as follows:
the model establishing unit 306 may be configured to establish a semantic segmentation model corresponding to the element type, for example, specifically, the following model is established:
and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element type.
The preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
(3) A prediction unit 303;
the predicting unit 303 is configured to predict, according to the semantic segmentation model, a probability that each pixel in the image belongs to the element type, so as to obtain an initial probability map.
For example, the prediction unit 303 may include a prediction subunit and a setting subunit, as follows:
and a prediction subunit, configured to import the image into the semantic segmentation model to predict a probability that each pixel in the image belongs to the element type.
For example, if the element type is "sky", then at this time, the prediction subunit may introduce the image into a semantic segmentation model corresponding to "sky" to predict the probability that each pixel in the image belongs to "sky".
And the setting subunit is used for setting the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map.
For example, the setting subunit may be specifically configured to determine whether the probability is greater than a preset threshold; if so, set the color of the corresponding pixel on a preset mask as a first color; if not, set the color of the corresponding pixel on the preset mask as a second color; and after the colors of all pixels of the image have been set on the preset mask, output the preset mask with the set colors to obtain an initial probability map.
The preset threshold may be set according to the requirement of the actual application, and the first color and the second color may also be determined according to the requirement of the actual application, for example, the first color may be set to white, the second color may be set to black, and so on.
(4) An optimization unit 304;
and the optimizing unit 304 is configured to optimize the initial probability map based on the conditional random field to obtain a segmentation effect map.
For example, the optimization unit 304 may be specifically configured to map pixels in the initial probability map to nodes in the conditional random field, determine similarity of edge constraints between the nodes, and adjust a segmentation result of the pixels in the initial probability map according to the similarity of the edge constraints to obtain a segmentation effect map.
(5) A fusion unit 305;
and a fusion unit 305, configured to fuse the image with a preset element material according to the segmentation effect map, so as to obtain a processed image.
For example, the fusion unit 305 may include a material acquisition subunit, a first fusion subunit, a second fusion subunit, and a composition subunit, as follows:
the material obtaining subunit is configured to obtain a replaceable element material according to a preset policy.
The preset policy may be set according to requirements of actual applications, for example, the material obtaining subunit may be specifically configured to receive a material selection instruction triggered by a user, obtain a corresponding material from a material library according to the material selection instruction, and use the material as a replaceable element material.
Optionally, in order to increase the diversity of the element material, the element material may also be obtained in a random interception manner, that is:
the material obtaining subunit is specifically configured to obtain a candidate image, randomly intercept the candidate image, and use the intercepted image as a replaceable element material.
The candidate image may be obtained over the network, or may be uploaded by the user, or may even be directly captured on a terminal screen or a web page by the user and then provided to the image processing apparatus, and so on, which are not described herein again.
The first fusion subunit may be configured to combine, through a fusion algorithm, the first color part in the segmentation effect map with the acquired element material to obtain a first combination map.
The second fusion subunit may be configured to combine the second color part in the segmentation effect map with the image through a fusion algorithm to obtain a second combination map.
The combining subunit may be configured to combine the first combination map and the second combination map to obtain a processed image.
Optionally, in order to make the fusion result more real and avoid noise or loss caused by inaccurate probability prediction, the segmentation effect map may be further processed to a certain extent before fusion, so that the segmentation boundary thereof is smoother and color transition at the joint of the replacement region may be more natural; that is, as shown in fig. 3b, the image processing apparatus may further include a preprocessing unit 307 as follows:
the preprocessing unit 307 may be configured to perform an appearance model algorithm and/or an image morphology operation on the segmentation effect map to obtain a processed segmentation effect map.
Then, the fusion unit 305 may be specifically configured to fuse the image with the preset element material according to the processed segmentation effect map to obtain a processed image.
The image morphological operation processing may include processing such as noise reduction processing and/or connected domain analysis, which is not described herein again.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
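For illustration, the following minimal sketch shows one way the above units could be wired into a single entity, as the paragraph above allows; the class and parameter names are illustrative, and the per-step callables correspond to the prediction, optimization, and fusion units (for example, the helpers sketched in the first embodiment):

```python
# A sketch only; the embodiment does not prescribe this structure.
class ImageProcessingApparatus:
    def __init__(self, models, predict, refine, fuse):
        self.models = models    # element type -> semantic segmentation model (acquisition unit)
        self.predict = predict  # (model, image) -> (H, W) probabilities      (prediction unit)
        self.refine = refine    # (probabilities, image) -> effect map        (optimization unit)
        self.fuse = fuse        # (image, material, effect map) -> result     (fusion unit)

    def handle_request(self, image, element_type, material):
        """Entry point playing the role of the receiving unit."""
        model = self.models[element_type]
        prob = self.predict(model, image)
        effect_map = self.refine(prob, image)
        return self.fuse(image, material, effect_map)
```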
As can be seen from the above, in this embodiment, after receiving an image processing request, the obtaining unit 302 obtains a semantic segmentation model corresponding to an element type to be replaced according to an instruction of the request, and the prediction unit 303 predicts a probability that each pixel in an image belongs to the element type according to the model to obtain an initial probability map, and then the optimization unit 304 optimizes the initial probability map based on a conditional random field, and the fusion unit 305 fuses the image and a preset element material by using a segmentation effect map obtained after the optimization, so as to achieve a purpose of replacing a certain element type portion in the image with the preset element material; because the semantic segmentation model in the scheme is mainly trained by the deep neural network, and when the model is used for carrying out semantic segmentation on the image, the probability that each pixel belongs to the element type is predicted not only based on information such as color, position and the like, so that the probability of false detection and missed detection can be greatly reduced compared with the existing scheme; in addition, the scheme can also optimize the segmented initial probability map by utilizing the conditional random field, so that a more precise segmentation result can be obtained, the segmentation accuracy is greatly improved, the situation of image distortion is favorably reduced, and the image fusion effect is improved.
Fourth embodiment
An embodiment of the present invention further provides a server, as shown in fig. 4, which shows a schematic structural diagram of the server according to the embodiment of the present invention, specifically:
the server may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the server architecture shown in FIG. 4 is not meant to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the server, connects various parts of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the server. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the server, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The server further includes a power supply 403 for supplying power to each component, and preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The server may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 401 in the server loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
the method comprises the steps of receiving an image processing request, wherein the image processing request indicates an image to be processed and an element type to be replaced, obtaining a semantic segmentation model corresponding to the element type, the semantic segmentation model is formed by training a deep neural network, predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map, optimizing the initial probability map based on a conditional random field to obtain a segmentation effect map, and fusing the image and preset element materials according to the segmentation effect map to obtain a processed image.
For example, a replaceable element material may be obtained according to a preset policy, then a first color portion in the segmentation effect map is combined with the obtained element material through a fusion algorithm to obtain a first combination map, a second color portion in the segmentation effect map is combined with the image through the fusion algorithm to obtain a second combination map, and then the first combination map and the second combination map are combined to obtain a processed image.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or other storage devices, and acquired by the image processing apparatus when needed, or the semantic segmentation model may be built by the image processing apparatus, that is, the processor 401 may further run an application program stored in the memory 402, so as to implement the following functions:
and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element type.
The preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
Optionally, in order to make the fusion result more real and avoid noise or loss caused by inaccurate probability prediction, the segmentation effect map may be further processed to a certain extent before fusion, so that the segmentation boundary thereof is smoother and color transition at the joint of the replacement region may be more natural; that is, the processor 401 may also run an application program stored in the memory 402, thereby implementing the following functions:
the segmentation effect graph is subjected to an appearance model algorithm and/or image morphological operation processing to obtain a processed segmentation effect graph, so that during subsequent fusion, the image and preset element materials can be fused according to the processed segmentation effect graph to obtain a processed image, which is detailed in the foregoing embodiment and is not repeated herein.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
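One plausible instance of this boundary clean-up uses OpenCV morphology plus a Gaussian blur to soften the seam; the specific operations and parameter values below are assumptions, since the patent leaves the appearance model algorithm and the morphological operations unspecified.

```python
import cv2

def smooth_effect_map(effect_map, kernel_size=5, blur_sigma=2.0):
    """Clean up a binary segmentation effect map before fusion: opening
    removes small false-positive specks, closing fills small holes, and
    a Gaussian blur softens the boundary so the colour transition at the
    seam looks more natural. All parameter values are illustrative."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    cleaned = cv2.morphologyEx(effect_map, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    return cv2.GaussianBlur(cleaned, (0, 0), blur_sigma)
```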
As can be seen from the above, after receiving an image processing request, the server in this embodiment obtains, according to the request, a semantic segmentation model corresponding to the element type to be replaced; predicts, according to the model, the probability that each pixel in the image belongs to that element type, to obtain an initial probability map; optimizes the initial probability map based on a conditional random field; and fuses the image with a preset element material using the segmentation effect map obtained after the optimization, thereby replacing a given element type in the image with the preset element material. Because the semantic segmentation model in this scheme is trained with a deep neural network, the prediction of each pixel's probability does not rely only on low-level information such as color and position, so the rates of false detection and missed detection can be greatly reduced compared with existing schemes. In addition, the scheme optimizes the segmented initial probability map with a conditional random field, yielding a more precise segmentation result; this greatly improves segmentation accuracy, helps reduce image distortion, and improves the image fusion effect.
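The conditional-random-field optimization highlighted in this summary is commonly realized with a fully connected CRF; the sketch below uses the open-source pydensecrf package, which the patent does not mention, so treat the library choice and all parameter values as assumptions.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, iters=5):
    """image: (H, W, 3) uint8; probs: (H, W) per-pixel probability of the
    element type. Each pixel becomes a CRF node; the Gaussian and
    bilateral pairwise terms play the role of the edge constraints
    between nodes. Parameter values are illustrative."""
    h, w = probs.shape
    softmax = np.stack([1.0 - probs, probs]).astype(np.float32)   # (2, H, W)
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(softmax))
    d.addPairwiseGaussian(sxy=3, compat=3)                        # spatial smoothness
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = np.array(d.inference(iters))                              # (2, H*W)
    return (q.argmax(axis=0).reshape(h, w) * 255).astype(np.uint8)
```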
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware under the instruction of a program, which may be stored in a computer-readable storage medium; the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The image processing method and apparatus provided by the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (16)
1. An image processing method, comprising:
receiving an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
obtaining a semantic segmentation model corresponding to the element type, wherein the semantic segmentation model is formed by training a deep neural network;
predicting the probability of each pixel in the image belonging to the element type according to the semantic segmentation model to obtain an initial probability map;
optimizing the initial probability map based on a conditional random field to obtain a segmentation effect map;
and fusing the image with preset element materials according to the segmentation effect map to obtain a processed image.
2. The method according to claim 1, wherein the predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map comprises:
importing the image into the semantic segmentation model to predict the probability that each pixel in the image belongs to the element type;
and setting the color of the corresponding pixel on a preset mask according to the probability to obtain an initial probability map.
3. The method of claim 2, wherein the setting the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map comprises:
determining whether the probability is greater than a preset threshold;
if so, setting the color of the corresponding pixel on the preset mask as a first color;
if not, setting the color of the corresponding pixel on the preset mask as a second color;
and after it is determined that the colors of all pixels of the image have been set on the preset mask, outputting the preset mask with the set colors to obtain an initial probability map.
4. The method of claim 1, wherein the optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map comprises:
mapping pixels in the initial probability map to nodes in a conditional random field;
determining the similarity of edge constraints between the nodes;
and adjusting the segmentation result of the pixels in the initial probability map according to the similarity of the edge constraints, to obtain a segmentation effect map.
5. The method according to any one of claims 1 to 4, wherein the fusing the image with preset element materials according to the segmentation effect map to obtain a processed image comprises:
acquiring replaceable element materials according to a preset strategy;
combining a first color part in the segmentation effect map with the obtained element material through a fusion algorithm to obtain a first combination map;
combining a second color part in the segmentation effect map with the image through the fusion algorithm to obtain a second combination map;
and synthesizing the first combination map and the second combination map to obtain a processed image.
6. The method of claim 5, wherein the obtaining replaceable element material according to the preset strategy comprises:
acquiring a candidate image, randomly cropping the candidate image, and taking the cropped image as the replaceable element material; or,
and receiving a material selection instruction triggered by a user, and acquiring corresponding materials from a material library according to the material selection instruction to serve as replaceable element materials.
7. The method according to any one of claims 1 to 4, wherein before the fusing the image with preset element materials according to the segmentation effect map to obtain the processed image, the method further comprises:
performing appearance model algorithm and/or image morphological operation processing on the segmentation effect map to obtain a processed segmentation effect map;
the fusing the image with preset element materials according to the segmentation effect map to obtain a processed image comprises: fusing the image with the preset element materials according to the processed segmentation effect map to obtain the processed image.
8. The method according to any one of claims 1 to 4, wherein before the obtaining the semantic segmentation model corresponding to the element type, the method further comprises:
acquiring training data containing the element types;
and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element type.
9. An image processing apparatus characterized by comprising:
a receiving unit configured to receive an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
the acquisition unit is used for acquiring a semantic segmentation model corresponding to the element type, and the semantic segmentation model is formed by training a deep neural network;
the prediction unit is used for predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map;
the optimization unit is used for optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map;
and the fusion unit is used for fusing the image and preset element materials according to the segmentation effect map to obtain a processed image.
10. The apparatus of claim 9, wherein the prediction unit comprises a prediction subunit and a setting subunit;
the prediction subunit is configured to import the image into the semantic segmentation model to predict a probability that each pixel in the image belongs to the element type;
and the setting subunit is used for setting the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map.
11. The apparatus according to claim 10, wherein the setting subunit is specifically configured to:
determining whether the probability is greater than a preset threshold;
if so, setting the color of the corresponding pixel on the preset mask as a first color;
if not, setting the color of the corresponding pixel on the preset mask as a second color;
and after it is determined that the colors of all pixels of the image have been set on the preset mask, outputting the preset mask with the set colors to obtain an initial probability map.
12. The apparatus of claim 9,
the optimization unit is specifically configured to map pixels in the initial probability map to nodes in the conditional random field, determine similarity of edge constraints between the nodes, and adjust a segmentation result of the pixels in the initial probability map according to the similarity of the edge constraints to obtain a segmentation effect map.
13. The apparatus according to any one of claims 9 to 12, wherein the fusion unit includes a material acquisition subunit, a first fusion subunit, a second fusion subunit, and a synthesis subunit;
the material acquisition subunit is used for acquiring replaceable element materials according to a preset strategy;
the first fusion subunit is configured to combine, by using a fusion algorithm, the first color part in the segmentation effect map with the acquired element material to obtain a first combination map;
the second fusion subunit is configured to combine, by using a fusion algorithm, the second color part in the segmentation effect map with the image to obtain a second combination map;
and the synthesis subunit is used for synthesizing the first combination map and the second combination map to obtain a processed image.
14. The apparatus of claim 13,
the material acquisition subunit is specifically configured to acquire a candidate image, randomly crop the candidate image, and use the cropped image as the replaceable element material; or,
the material acquisition subunit is specifically configured to receive a material selection instruction triggered by a user, and obtain a corresponding material from a material library according to the material selection instruction, to serve as the replaceable element material.
15. The apparatus of any one of claims 9 to 12, further comprising a pre-processing unit;
the preprocessing unit is used for performing appearance model algorithm and/or image morphological operation processing on the segmentation effect map to obtain a processed segmentation effect map;
and the fusion unit is specifically used for fusing the image with the preset element material according to the processed segmentation effect map to obtain a processed image.
16. The apparatus according to any one of claims 9 to 12, further comprising a model building unit;
the model establishing unit is used for acquiring training data containing the element types, and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element types.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710199165.XA CN107025457B (en) | 2017-03-29 | 2017-03-29 | Image processing method and device |
PCT/CN2018/080446 WO2018177237A1 (en) | 2017-03-29 | 2018-03-26 | Image processing method and device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710199165.XA CN107025457B (en) | 2017-03-29 | 2017-03-29 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107025457A (en) | 2017-08-08
CN107025457B CN107025457B (en) | 2022-03-08 |
Family
ID=59525827
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710199165.XA (granted as CN107025457B, Active) | 2017-03-29 | 2017-03-29 | Image processing method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107025457B (en) |
WO (1) | WO2018177237A1 (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598678B (en) * | 2018-12-25 | 2023-12-12 | 维沃移动通信有限公司 | Image processing method and device and terminal equipment |
CN111489359B (en) * | 2019-01-25 | 2023-05-30 | 银河水滴科技(北京)有限公司 | Image segmentation method and device |
CN111832587B (en) * | 2019-04-18 | 2023-11-14 | 北京四维图新科技股份有限公司 | Image semantic annotation method, device and storage medium |
CN110992371B (en) * | 2019-11-20 | 2023-10-27 | 北京奇艺世纪科技有限公司 | Portrait segmentation method and device based on priori information and electronic equipment |
CN111461996B (en) * | 2020-03-06 | 2023-08-29 | 合肥师范学院 | Quick intelligent color matching method for image |
CN113554658B (en) * | 2020-04-23 | 2024-06-14 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111598902B (en) * | 2020-05-20 | 2023-05-30 | 抖音视界有限公司 | Image segmentation method, device, electronic equipment and computer readable medium |
CN111832745B (en) * | 2020-06-12 | 2023-08-01 | 北京百度网讯科技有限公司 | Data augmentation method and device and electronic equipment |
CN112037142B (en) * | 2020-08-24 | 2024-02-13 | 腾讯科技(深圳)有限公司 | Image denoising method, device, computer and readable storage medium |
CN112508964B (en) * | 2020-11-30 | 2024-02-20 | 北京百度网讯科技有限公司 | Image segmentation method, device, electronic equipment and storage medium |
CN112800499B (en) * | 2020-12-02 | 2023-12-26 | 杭州群核信息技术有限公司 | Diatom ooze pattern high-order design method based on image processing and real-time material generation capability |
CN112633142B (en) * | 2020-12-21 | 2024-09-06 | 广东电网有限责任公司电力科学研究院 | Power transmission line violation building identification method and related device |
CN112819741B (en) * | 2021-02-03 | 2024-03-08 | 四川大学 | Image fusion method and device, electronic equipment and storage medium |
CN112861885B (en) * | 2021-03-25 | 2023-09-22 | 北京百度网讯科技有限公司 | Image recognition method, device, electronic equipment and storage medium |
CN113129319B (en) * | 2021-04-29 | 2023-06-23 | 北京市商汤科技开发有限公司 | Image processing method, device, computer equipment and storage medium |
CN113657401B (en) * | 2021-08-24 | 2024-02-06 | 凌云光技术股份有限公司 | Probability map visualization method and device for defect detection |
CN114445313A (en) * | 2022-01-28 | 2022-05-06 | 北京百度网讯科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN117437338B (en) * | 2023-10-08 | 2024-08-20 | 书行科技(北京)有限公司 | Special effect generation method, device and computer readable storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8014590B2 (en) * | 2005-12-07 | 2011-09-06 | Drvision Technologies Llc | Method of directed pattern enhancement for flexible recognition |
CN102486827B (en) * | 2010-12-03 | 2014-11-05 | 中兴通讯股份有限公司 | Extraction method of foreground object in complex background environment and apparatus thereof |
CN103116754B (en) * | 2013-01-24 | 2016-05-18 | 浙江大学 | Batch images dividing method and system based on model of cognition |
CN107025457B (en) * | 2017-03-29 | 2022-03-08 | 腾讯科技(深圳)有限公司 | Image processing method and device |
- 2017-03-29: CN application CN201710199165.XA filed (granted as CN107025457B, status Active)
- 2018-03-26: PCT application PCT/CN2018/080446 filed (published as WO2018177237A1, Application Filing)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101777180A (en) * | 2009-12-23 | 2010-07-14 | 中国科学院自动化研究所 | Complex background real-time alternating method based on background modeling and energy minimization |
CN104133956A (en) * | 2014-07-25 | 2014-11-05 | 小米科技有限责任公司 | Method and device for processing pictures |
EP2996085A1 (en) * | 2014-09-09 | 2016-03-16 | icoMetrix NV | Method and system for analyzing image data |
CN104463843A (en) * | 2014-10-31 | 2015-03-25 | 南京邮电大学 | Interactive image segmentation method of android system |
CN104636761A (en) * | 2015-03-12 | 2015-05-20 | 华东理工大学 | Image semantic annotation method based on hierarchical segmentation |
CN105574513A (en) * | 2015-12-22 | 2016-05-11 | 北京旷视科技有限公司 | Character detection method and device |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018177237A1 (en) * | 2017-03-29 | 2018-10-04 | 腾讯科技(深圳)有限公司 | Image processing method and device, and storage medium |
CN107705334A (en) * | 2017-08-25 | 2018-02-16 | 北京图森未来科技有限公司 | A kind of video camera method for detecting abnormality and device |
CN107705334B (en) * | 2017-08-25 | 2020-08-25 | 北京图森智途科技有限公司 | Camera abnormity detection method and device |
CN107507201A (en) * | 2017-09-22 | 2017-12-22 | 深圳天琴医疗科技有限公司 | A kind of medical image cutting method and device |
CN107993191B (en) * | 2017-11-30 | 2023-03-21 | 腾讯科技(深圳)有限公司 | Image processing method and device |
CN107993191A (en) * | 2017-11-30 | 2018-05-04 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device |
WO2019109524A1 (en) * | 2017-12-07 | 2019-06-13 | 平安科技(深圳)有限公司 | Foreign object detection method, application server, and computer readable storage medium |
CN111052028B (en) * | 2018-01-23 | 2022-04-05 | 深圳市大疆创新科技有限公司 | System and method for automatic surface and sky detection |
CN111052028A (en) * | 2018-01-23 | 2020-04-21 | 深圳市大疆创新科技有限公司 | System and method for automatic surface and sky detection |
CN108305260A (en) * | 2018-03-02 | 2018-07-20 | 苏州大学 | Detection method, device and the equipment of angle point in a kind of image |
CN108305260B (en) * | 2018-03-02 | 2022-04-12 | 苏州大学 | Method, device and equipment for detecting angular points in image |
CN108764143A (en) * | 2018-05-29 | 2018-11-06 | 北京字节跳动网络技术有限公司 | Image processing method, device, computer equipment and storage medium |
CN108764143B (en) * | 2018-05-29 | 2020-11-24 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN110610495A (en) * | 2018-06-15 | 2019-12-24 | 北京京东尚科信息技术有限公司 | Image processing method and system and electronic equipment |
CN110610495B (en) * | 2018-06-15 | 2022-06-07 | 北京京东尚科信息技术有限公司 | Image processing method and system and electronic equipment |
CN110910334A (en) * | 2018-09-15 | 2020-03-24 | 北京市商汤科技开发有限公司 | Instance segmentation method, image processing device and computer readable storage medium |
CN110910334B (en) * | 2018-09-15 | 2023-03-21 | 北京市商汤科技开发有限公司 | Instance segmentation method, image processing device and computer readable storage medium |
CN110163862B (en) * | 2018-10-22 | 2023-08-25 | 腾讯科技(深圳)有限公司 | Image semantic segmentation method and device and computer equipment |
CN110163862A (en) * | 2018-10-22 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Image, semantic dividing method, device and computer equipment |
CN109741347A (en) * | 2018-12-30 | 2019-05-10 | 北京工业大学 | A kind of image partition method of the iterative learning based on convolutional neural networks |
CN109741347B (en) * | 2018-12-30 | 2021-03-16 | 北京工业大学 | Iterative learning image segmentation method based on convolutional neural network |
CN110310222A (en) * | 2019-06-20 | 2019-10-08 | 北京奇艺世纪科技有限公司 | A kind of image Style Transfer method, apparatus, electronic equipment and storage medium |
CN112258380A (en) * | 2019-07-02 | 2021-01-22 | 北京小米移动软件有限公司 | Image processing method, device, equipment and storage medium |
CN110544218A (en) * | 2019-09-03 | 2019-12-06 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium |
CN110544218B (en) * | 2019-09-03 | 2024-02-13 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium |
CN110930296B (en) * | 2019-11-20 | 2023-08-08 | Oppo广东移动通信有限公司 | Image processing method, device, equipment and storage medium |
CN110930296A (en) * | 2019-11-20 | 2020-03-27 | Oppo广东移动通信有限公司 | Image processing method, device, equipment and storage medium |
CN110956221A (en) * | 2019-12-17 | 2020-04-03 | 北京化工大学 | Small sample polarization synthetic aperture radar image classification method based on deep recursive network |
CN111210434A (en) * | 2019-12-19 | 2020-05-29 | 上海艾麒信息科技有限公司 | Image replacement method and system based on sky identification |
CN111354059A (en) * | 2020-02-26 | 2020-06-30 | 北京三快在线科技有限公司 | Image processing method and device |
CN111354059B (en) * | 2020-02-26 | 2023-04-28 | 北京三快在线科技有限公司 | Image processing method and device |
CN111445486A (en) * | 2020-03-25 | 2020-07-24 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN111445486B (en) * | 2020-03-25 | 2023-10-03 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN111507946A (en) * | 2020-04-02 | 2020-08-07 | 浙江工业大学之江学院 | Element data driven flower type pattern rapid generation method based on similarity sample |
CN111915636B (en) * | 2020-07-03 | 2023-10-24 | 闽江学院 | Method and device for positioning and dividing waste targets |
CN111915636A (en) * | 2020-07-03 | 2020-11-10 | 闽江学院 | Method and device for positioning and dividing waste target |
CN111862045A (en) * | 2020-07-21 | 2020-10-30 | 上海杏脉信息科技有限公司 | Method and device for generating blood vessel model |
CN112866573A (en) * | 2021-01-13 | 2021-05-28 | 京东方科技集团股份有限公司 | Display, fusion display system and image processing method |
CN112866573B (en) * | 2021-01-13 | 2022-11-04 | 京东方科技集团股份有限公司 | Display, fusion display system and image processing method |
Also Published As
Publication number | Publication date |
---|---|
CN107025457B (en) | 2022-03-08 |
WO2018177237A1 (en) | 2018-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107025457B (en) | Image processing method and device | |
KR102469295B1 (en) | Remove video background using depth | |
CN111832745B (en) | Data augmentation method and device and electronic equipment | |
CN112132197B (en) | Model training, image processing method, device, computer equipment and storage medium | |
CN110555896B (en) | Image generation method and device and storage medium | |
TWI672638B (en) | Image processing method, non-transitory computer readable medium and image processing system | |
CN112419170A (en) | Method for training occlusion detection model and method for beautifying face image | |
Li et al. | Globally and locally semantic colorization via exemplar-based broad-GAN | |
CN109710255B (en) | Special effect processing method, special effect processing device, electronic device and storage medium | |
CN111080746A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
WO2021106855A1 (en) | Data generation method, data generation device, model generation method, model generation device, and program | |
CN115170390B (en) | File stylization method, device, equipment and storage medium | |
CN114973349A (en) | Face image processing method and training method of face image processing model | |
CN117557708A (en) | Image generation method, device, storage medium and computer equipment | |
CN110211063B (en) | Image processing method, device, electronic equipment and system | |
CN110163049B (en) | Face attribute prediction method, device and storage medium | |
US20230131418A1 (en) | Two-dimensional (2d) feature database generation | |
CN116188914A (en) | Image AI processing method in meta-universe interaction scene and meta-universe interaction system | |
CN115272057A (en) | Training of cartoon sketch image reconstruction network and reconstruction method and equipment thereof | |
CN114764821A (en) | Moving object detection method, moving object detection device, electronic apparatus, and storage medium | |
CN116823869A (en) | Background replacement method and electronic equipment | |
CN113408452A (en) | Expression redirection training method and device, electronic equipment and readable storage medium | |
CN113706399A (en) | Face image beautifying method and device, electronic equipment and storage medium | |
KR102554442B1 (en) | Face synthesis method and system | |
WO2024055676A1 (en) | Image compression method and apparatus, computer device, and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||