CN110930296B - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN110930296B
Authority
CN
China
Prior art keywords
image
target
information
network model
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911141216.9A
Other languages
Chinese (zh)
Other versions
CN110930296A (en)
Inventor
侯允
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911141216.9A priority Critical patent/CN110930296B/en
Publication of CN110930296A publication Critical patent/CN110930296A/en
Application granted granted Critical
Publication of CN110930296B publication Critical patent/CN110930296B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The embodiments of this application disclose an image processing method, apparatus, device, and storage medium, belonging to the field of image processing. The method includes: acquiring an image to be processed; taking the image as the input of a segmentation network model and determining target segmentation information of the image through the segmentation network model, where the target segmentation information includes position information of a foreground region, position information of a background region, and position information of an unknown region; taking the image and its target segmentation information as the input of a matting network model and determining target probability information of the image through the matting network model, where the target probability information includes the probability that each pixel in the image belongs to a specified target; and processing the specified target in the image according to the target probability information of the image. With this method, the specified target can be finely segmented and processed, improving the accuracy of target recognition and the fineness of target processing, and thereby improving the image processing effect.

Description

Image processing method, device, equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to the field of image processing, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
In the field of image processing, it is often necessary to process a specified target in an image, for example, to beautify a face in an image or to change the color of the hair in a portrait. Before such a specified target can be processed, however, it must first be accurately identified in the image.
Currently, target segmentation algorithms are commonly used to identify specified targets in images. Specifically, an image containing a specified target is acquired, a target segmentation algorithm is then used to decide whether each pixel in the image belongs to the specified target or to the background outside it, and the region enclosed by the pixels belonging to the specified target is taken as the contour region of the specified target, thereby separating the target from the background.
However, a target segmentation algorithm can only give an absolute, binary result for each pixel: it either belongs to the specified target or it does not. In practice the shape of the specified target may be irregular, so some pixels belonging to the target may be misclassified as background, and some background pixels may be misclassified as belonging to the target. The recognized contour region is then inaccurate, which degrades the image processing effect.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, image processing equipment and a storage medium. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed, wherein a specified target exists in the image;
determining target segmentation information of the image through a segmentation network model, wherein the target segmentation information comprises position information of a foreground area identified as a specified target, position information of a background area identified as a non-specified target and position information of an unknown area which cannot be identified as a background or a foreground, and the segmentation network model is used for determining the target segmentation information of any image;
taking the image and the target segmentation information of the image as the input of a matting network model, and determining target probability information of the image through the matting network model, wherein the target probability information of the image comprises the probability that each pixel in the image belongs to the specified target, and the matting network model is used for determining the target probability information of any image;
and processing the designated target in the image according to the target probability information of the image.
In another aspect, there is provided an image processing apparatus including:
the first acquisition module is used for acquiring an image to be processed, wherein a specified target exists in the image;
a segmentation module, configured to use the image as an input of a segmentation network model, and determine target segmentation information of the image through the segmentation network model, where the target segmentation information includes location information of a foreground region identified as a specified target, location information of a background region identified as a non-specified target, and location information of an unknown region that cannot be identified as a background or a foreground, and the segmentation network model is configured to determine target segmentation information of any image;
the image matting module is used for taking the image and the target segmentation information of the image as input of a matting network model, determining target probability information of the image through the matting network model, wherein the target probability information of the image comprises the probability that each pixel point in the image belongs to the specified target, and the matting network model is used for determining target probability information of any image;
and the processing module is used for processing the specified target in the image according to the target probability information of the image.
In another aspect, an electronic device is provided that includes a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the image processing method described above.
In another aspect, a computer readable storage medium is provided, the storage medium storing at least one instruction for execution by a processor to implement the above-described image processing method.
In another aspect, a computer program product is provided, storing at least one instruction for execution by a processor to implement the above-described image processing method.
The technical solutions provided by this application can bring at least the following beneficial effects:
in the embodiments of this application, the image to be processed is taken as the input of the segmentation network model, which determines the target segmentation information of the image. The image and its target segmentation information are then taken as the input of the matting network model, which determines the target probability information of the image. Finally, the specified target in the image is processed with varying intensity according to the target probability information, i.e., the probability that each pixel belongs to the specified target. In this way, the specified target can be finely segmented and processed, the accuracy of target recognition and the fineness of target processing are improved, and the image processing effect is improved accordingly.
Drawings
FIG. 1 is a flow chart of a model training method provided in an embodiment of the present application;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a color conversion of a hair region of a close-up portrait according to an embodiment of the present application;
fig. 4 is a block diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates an "or" relationship between the surrounding objects.
Before describing the image processing method provided in the embodiment of the present application in detail, an application scenario of the embodiment of the present application will be described.
The method and the device are applied to scenes for performing special processing on the specified targets in the images, such as performing face beautifying processing on faces in the images or performing color change on hairs of the faces in the images. Of course, the specified target may be set as another target, the special processing manner may be set as another processing manner, and the specific processing manner may be set according to actual needs, which is not limited in the embodiment of the present application.
Next, an implementation environment related to the embodiments of the present application will be described.
The image processing method provided by the embodiment of the application can be applied to an image processing device, the image processing device can be electronic equipment such as a terminal or a server, the terminal can be a mobile phone, a tablet personal computer or a computer, and the server can be a background server of the application. As an example, an image processing application is installed in a terminal, and the terminal may execute the image processing method provided in the embodiment of the present application through the installed image processing application.
The image processing method provided by the embodiments of this application is based on deep learning. Before the specified target in an image is processed, the target must be identified and segmented by a segmentation network model and a matting network model: the segmentation network model determines the target segmentation information of the image, and the matting network model determines the target probability information of the image from the image and its target segmentation information. A method for training the segmentation network model and the matting network model is introduced next.
Fig. 1 is a flowchart of a model training method provided in an embodiment of the present application, where the method is applied to an image processing apparatus, and the apparatus may be an electronic device such as a terminal or a server, as shown in fig. 1, and the method includes the following steps:
step 101: target probability information of a plurality of sample images is acquired, wherein the target probability information of each sample image comprises the probability that each pixel point in each sample image belongs to a specified target.
The plurality of sample images are images with specified targets in the images, and the plurality of sample images accord with model training requirements and are used as training samples for model training. The specified target may be preset, for example, the specified target may be a face or hair, etc., and of course, the specified target may also be set as another target according to actual needs, which is not limited in the embodiment of the present application.
As one example, if the specified target is hair, a plurality of close-range face images may be collected, and the collected close-range face images may be used as a plurality of sample images for model training.
It should be noted that some specified targets have irregular shapes that a target segmentation algorithm alone cannot identify accurately. For example, when the specified target is hair, a target segmentation algorithm can only recover the rough outline of the hair region and cannot segment individual fine hair strands.
As one example, the target probability information of a sample image may be presented in the form of an alpha map, in which the probability that each pixel in the sample image belongs to the specified target is stored. For example, the alpha map may be a bitmap in which each pixel of the sample image is stored in 16 bits: 5 bits for red, 5 bits for green, 5 bits for blue, and the last bit for alpha, which indicates the probability that the corresponding pixel belongs to the specified target.
As an example, the target probability information of the plurality of sample images may be obtained by manually labeling the specified target in the sample images by a technician, or may be obtained by identifying the images by other target identification algorithms. For example, a technician may use image processing software to make fine labeling on a specified target area in the sample image, so as to obtain target probability information of the sample image. By way of example, the image processing software may be PS software or the like.
As one example, the target probability information of a sample image may take the values 0 or 1: 0 indicates that the corresponding pixel does not belong to the specified target, i.e., belongs to the background, and 1 indicates that the corresponding pixel belongs to the specified target. As another example, the target probability information of a sample image may lie within the interval [0, 1].
Step 102: the target segmentation information of each sample image is determined according to the target probability information of each sample image in the plurality of sample images, and the target segmentation information comprises position information of a foreground region identified as a specified target, position information of a background region identified as a non-specified target and position information of an unknown region which cannot be identified as a background or a foreground.
That is, the target segmentation information indicates segmentation information of foreground, background, and unknown regions in the sample image, and may indicate which partial region in the sample image belongs to the foreground, which partial region belongs to the background, and which partial region is a position region where whether it is the foreground or the background cannot be accurately identified.
As an example, the target segmentation information of a sample image may be presented in the form of a trimap, a tri-level map commonly used by static image matting algorithms, in which the target segmentation information of the sample image is stored. For example, in the trimap of a sample image, the foreground region is displayed in a first color, the background region in a second color, and the unknown region in a third color, so that the three regions are distinguished by different colors.
As an example, the target probability information of each sample image may be eroded and/or dilated to obtain the target segmentation information of that sample image; for example, the alpha map of each sample image is eroded and/or dilated to obtain its trimap.
Eroding and/or dilating the target probability information removes noise from each sample image and enables fine separation of foreground and background.
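The erosion/dilation step above can be sketched as follows. A 3x3 min/max filter stands in for `cv2.erode`/`cv2.dilate` so the example needs only NumPy; the kernel size, the 0.5 binarization threshold, and the 255/128/0 trimap encoding are illustrative assumptions, not values from the patent.

```python
import numpy as np

def make_trimap_morph(alpha, radius=1):
    """Trimap from an alpha map: erode a binarized alpha for a confident
    foreground, dilate it for a confident background boundary, and mark
    the band between the two as unknown."""
    fg = (alpha >= 0.5).astype(np.uint8)   # illustrative binarization
    eroded, dilated = fg.copy(), fg.copy()
    h, w = fg.shape
    for _ in range(radius):
        p = np.pad(eroded, 1, mode='constant')
        eroded = np.min([p[i:i + h, j:j + w]
                         for i in range(3) for j in range(3)], axis=0)
        p = np.pad(dilated, 1, mode='constant')
        dilated = np.max([p[i:i + h, j:j + w]
                          for i in range(3) for j in range(3)], axis=0)
    trimap = np.full(fg.shape, 128, dtype=np.uint8)  # unknown by default
    trimap[eroded == 1] = 255                        # confident foreground
    trimap[dilated == 0] = 0                         # confident background
    return trimap
```

With an OpenCV dependency, the two inner filters would normally be replaced by `cv2.erode` and `cv2.dilate` with a structuring element of the desired radius.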
As another example, the target segmentation information of each sample image may be determined from its target probability information by thresholding: for each pixel in the sample image, if the target probability of the pixel is greater than or equal to a first probability threshold, the pixel is assigned to the foreground region; if it is less than the first probability threshold and greater than a second probability threshold, the pixel is assigned to the unknown region; and if it is less than the second probability threshold, the pixel is assigned to the background region. The first probability threshold is greater than the second probability threshold.
The first probability threshold and the second probability threshold may be preset; for example, both lie in the interval [0, 1]. For example, the first probability threshold is 1 and the second probability threshold is 0.
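The thresholding rule above can be sketched in a few lines. Note one small deviation, flagged in the comment: the text says pixels strictly below the second threshold are background, but with the example threshold of 0 that would never fire, so the sketch uses "at or below"; the 255/128/0 encoding is likewise an assumption.

```python
import numpy as np

def alpha_to_trimap(alpha, first=1.0, second=0.0):
    """Trimap by thresholding: >= first -> foreground (255),
    <= second -> background (0), everything else -> unknown (128).
    ("<=" rather than the text's "<" so that second=0 still yields
    a background region.)"""
    trimap = np.full(alpha.shape, 128, dtype=np.uint8)  # unknown
    trimap[alpha >= first] = 255                        # foreground
    trimap[alpha <= second] = 0                         # background
    return trimap
```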
Step 103: and training the segmentation network model to be trained according to the plurality of sample images and the target segmentation information of the plurality of sample images to obtain a segmentation network model.
The to-be-trained segmentation network model and the trained segmentation network model are deep learning network models, for example a CNN (Convolutional Neural Network) model or an RNN (Recurrent Neural Network) model. Illustratively, they are DeepLab models (semantic segmentation models combining a CNN with a probabilistic graphical model), such as the DeepLabV3+ (third version of DeepLab) model.
By training the to-be-trained segmented network model according to the plurality of sample images and the target segmentation information of the plurality of sample images, the to-be-trained segmented network model can continuously learn the relation between the sample images and the target segmentation information of the sample images in the training process, and further the segmented network model capable of determining the target segmentation information of any image is obtained.
As an example, in the model training process, a plurality of sample images may be used as input of the to-be-trained segmented network model, prediction target segmentation information of the plurality of sample images is output through the to-be-trained segmented network model, then prediction errors between the prediction target segmentation information and the target segmentation information of the plurality of sample images are determined, the prediction errors are counter-propagated through a counter-propagation algorithm, so as to adjust model parameters of the to-be-trained segmented network model, and the to-be-trained segmented network model with the adjusted model parameters is determined as a trained segmented network model.
As one example, the back propagation algorithm is a random gradient descent method.
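The forward / prediction-error / gradient-step loop described above can be illustrated on a toy stand-in: a single logistic unit that maps a 3-channel pixel to a foreground probability. The real model is a DeepLabV3+-style network; the synthetic data, learning rate, and one-layer "model" here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 3))                   # synthetic pixel features
y = (X[:, 0] > 0.5).astype(float)          # synthetic per-pixel labels
w, b, lr = np.zeros(3), 0.0, 0.5           # model parameters, step size

losses = []
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))            # forward pass
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y                                      # prediction error signal
    w -= lr * (X.T @ grad) / len(y)                   # gradient descent step
    b -= lr * grad.mean()
```

In a deep network the error signal is propagated through every layer by the chain rule; frameworks such as PyTorch or TensorFlow perform this back propagation automatically.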
Step 104: and training the to-be-trained matting network model according to the plurality of sample images, the target segmentation information and the target probability information of the plurality of sample images to obtain the matting network model.
The to-be-trained matting network model and the trained matting network model are deep learning network models, for example CNN or RNN models. Illustratively, they are mbv_dim network models (a deep learning network model).
According to the multiple sample images, the target segmentation information and the target probability information of the multiple sample images, the to-be-trained matting network model is trained, so that the to-be-trained matting network model can continuously learn the relation among the sample images, the target segmentation information of the sample images and the target probability information of the sample images in the training process, and further the matting network model capable of determining the target probability information of any image according to the any image and the target segmentation information of the any image is obtained.
As an example, in the model training process, a plurality of sample images and target segmentation information of the plurality of sample images may be used as input of a to-be-trained matting network model, prediction target probability information of the plurality of sample images is output through the to-be-trained matting network model, prediction errors between the prediction target probability information and the target probability information of the plurality of sample images are determined, and the prediction errors are counter-propagated through a counter-propagation algorithm to adjust model parameters of the to-be-trained matting network model, and the to-be-trained matting network model with the adjusted model parameters is determined as a trained matting network model.
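The patent does not specify how the sample image and its target segmentation information are combined before being fed to the matting network; one common convention (assumed here) is to normalize both and concatenate the trimap as an extra input channel:

```python
import numpy as np

def matting_input(image, trimap):
    """Stack an HxWx3 uint8 image and its HxW uint8 trimap into an
    HxWx4 float input tensor (an assumed convention, not from the
    patent)."""
    t = (trimap.astype(np.float32) / 255.0)[..., None]   # trimap channel
    rgb = image.astype(np.float32) / 255.0               # normalized image
    return np.concatenate([rgb, t], axis=-1)
```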
After the training of the segmentation network model and the matting network model is completed, the image can be processed according to the trained segmentation network model and the matting network model. Next, the image processing procedure of the embodiment of the present application will be described in detail.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application, where the method is applied to an image processing apparatus, as shown in fig. 2, and includes the following steps:
step 201: and acquiring an image to be processed, wherein a specified target exists in the image.
The image to be processed can be a photo or a portrait, or a video frame from a video. In addition, the image to be processed may be uploaded by the user, obtained from the storage space of the device, sent by another device, or obtained from a network; the embodiments of this application do not limit the method of obtaining the image to be processed.
The specified target may be preset, for example, the specified target may be a face or a hair, and the specified target may, of course, be set as another target according to actual needs, which is not limited in the embodiment of the present application.
Step 202: the image is used as an input of a segmentation network model, and target segmentation information of the image is determined through the segmentation network model, wherein the target segmentation information comprises position information of a foreground region identified as a specified target, position information of a background region identified as a non-specified target and position information of an unknown region which cannot be identified as a background or a foreground.
The segmentation network model is used for determining target segmentation information of any image. The input of the segmentation network model is an image, the output is the target segmentation information of the image, and after the image to be processed is input into the segmentation network model, the segmentation network model can output the target segmentation information of the image.
As an example, the target segmentation information of the image may be presented in the form of a trimap in which that information is stored. For example, in the trimap, the foreground region is shown in a first color, the background region in a second color, and the unknown region in a third color, so that the three regions are distinguished by different colors.
Step 203: and taking the image and the target segmentation information of the image as input of a matting network model, and determining the target probability information of the image through the matting network model, wherein the target probability information of the image comprises the probability that each pixel point in the image belongs to a specified target.
The matting network model is used to determine the target probability information of any image. Its inputs are the image and the target segmentation information of the image, and its output is the target probability information of the image; after the image and its target segmentation information are input into the matting network model, the model outputs the target probability information of the image.
As one example, the target probability information of the image may be presented in an alpha map in which the probability that each pixel in the image belongs to the specified target is stored. For example, the alpha map may be a bitmap in which each pixel of the image is stored in 16 bits: 5 bits for red, 5 bits for green, 5 bits for blue, and the last bit for alpha, which indicates the probability that the corresponding pixel belongs to the specified target.
Step 204: and processing the designated target in the image according to the target probability information of the image.
That is, the specified target in the image may be processed with different intensities according to the target probability information of the image: for each pixel, the greater the probability that it belongs to the specified target, the greater the processing intensity applied to it, and the smaller that probability, the lesser the intensity. In this way, refined processing of the specified target in the image is achieved.
As an example, if the processing manner is color change, the operation of processing the specified target in the image according to the target probability information of the image may include: determining a target color value into which a specified target is to be transformed; determining a second color value of each pixel point in the image according to the first color value of each pixel point in the image, the probability that each pixel point belongs to a specified target and the target color value; the first color value of each pixel point in the image is transformed into a second color value to perform color transformation on a specified target in the image.
As one example, determining the second color value for each pixel in the image based on the first color value for each pixel in the image, the probability that each pixel belongs to a specified target, and the target color value includes: for a target pixel point in the image, adding the first product and the second product to obtain a second color value of the target pixel point; the first product is the product between the probability that the target pixel belongs to the specified target and the target color value, and the second product is the product between the probability that the target pixel belongs to the background and the first color value of the target pixel.
The probability that the target pixel belongs to the background is determined according to the probability that the target pixel belongs to the specified target, for example, a difference value between 1 and the probability that the target pixel belongs to the specified target is determined as the probability that the target pixel belongs to the background.
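The blend described in the two paragraphs above is a standard alpha composite: the second color value of each pixel is alpha times the target color plus (1 - alpha) times the first color value. A minimal NumPy sketch:

```python
import numpy as np

def recolor(image, alpha, target_color):
    """Per-pixel color transform: second_color =
    P(target) * target_color + P(background) * first_color,
    with P(background) taken as 1 - alpha as described above."""
    a = alpha[..., None]                  # broadcast alpha over channels
    return (a * np.asarray(target_color, dtype=float)
            + (1.0 - a) * image.astype(float))
```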
As an example, suppose the image to be processed is a close-range portrait, the specified target is hair, the target segmentation information is presented as a trimap, and the target probability information is presented as an alpha map; the process of changing the color of the hair region may then be as shown in fig. 3. As shown in fig. 3, the close-range portrait is first taken as the input of the segmentation network model, which segments it and outputs its trimap. The portrait and its trimap are then taken as the input of the matting network model, which performs matting on the portrait using the trimap and obtains its alpha map. Finally, a fine color transformation is applied to the hair region of the portrait according to the alpha map.
In this way, a fine matting effect at the level of individual hair strands can be achieved, so that when the user changes the color of the hair region, the color transformation of each strand looks more natural and real, improving the user experience. Moreover, because this embodiment generates the trimap with a segmentation algorithm and then applies a matting algorithm on top of the trimap to achieve strand-level matting, the whole process can run automatically, and a fine hair color transformation can be achieved according to the user's preference.
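Wiring the two stages of fig. 3 together gives the following end-to-end sketch. `seg_model` and `matting_model` are assumed to be callables returning a trimap and an alpha map respectively; the names and signatures are illustrative, not from the patent.

```python
import numpy as np

def change_hair_color(portrait, seg_model, matting_model, target_color):
    """Two-stage pipeline: segmentation -> trimap, matting -> alpha,
    then a per-pixel blend of the target color into the hair region."""
    trimap = seg_model(portrait)                 # stage 1: segmentation
    alpha = matting_model(portrait, trimap)      # stage 2: matting
    a = alpha[..., None]                         # broadcast over channels
    return (a * np.asarray(target_color, dtype=float)
            + (1.0 - a) * portrait.astype(float))
```

In deployment the two callables would wrap the trained DeepLabV3+-style segmentation network and the matting network; here they are left abstract so the wiring itself is the point.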
In the embodiment of the application, the image to be processed can be used as the input of the segmentation network model, which determines the target segmentation information of the image; the image and its target segmentation information are then used as the input of the matting network model, which determines the target probability information of the image; finally, the specified target in the image is processed with varying intensity according to the target probability information of the image, that is, the probability that each pixel point belongs to the specified target. The specified target can thus be finely segmented and processed, improving the accuracy of target identification and the fineness of target processing, and thereby the image processing effect.
Fig. 4 is a block diagram of an image processing apparatus provided in an embodiment of the present application, and as shown in fig. 4, the apparatus includes a first acquisition module 401, a segmentation module 402, a matting module 403, and a processing module 404.
A first acquisition module 401, configured to acquire an image to be processed, where a specified target exists in the image;
a segmentation module 402, configured to take the image as an input of a segmentation network model, and determine target segmentation information of the image through the segmentation network model, where the target segmentation information includes location information of a foreground region identified as a specified target, location information of a background region identified as a non-specified target, and location information of an unknown region that cannot be identified as a background or a foreground, and the segmentation network model is used to determine target segmentation information of any image;
a matting module 403, configured to take the image and the target segmentation information of the image as input of a matting network model, determine target probability information of the image through the matting network model, where the target probability information of the image includes probabilities that each pixel point in the image belongs to the specified target, and the matting network model is used to determine target probability information of any image;
and a processing module 404, configured to process the specified target in the image according to the target probability information of the image.
Optionally, the processing module includes:
a first determining unit configured to determine a target color value into which the specified target is to be transformed;
a second determining unit, configured to determine a second color value of each pixel point in the image according to the first color value of each pixel point in the image, the probability that each pixel point belongs to the specified target, and the target color value;
and a transformation unit, configured to transform the first color value of each pixel point in the image into the second color value, so as to perform a color transformation on the specified target in the image.
Optionally, the second determining unit is configured to:
for a target pixel point in the image, adding a first product and a second product to obtain a second color value of the target pixel point;
the first product is the product of the probability that the target pixel point belongs to the specified target and the target color value, and the second product is the product of the probability that the target pixel point belongs to the background and the first color value of the target pixel point, where the probability that the target pixel point belongs to the background is determined from the probability that it belongs to the specified target.
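The blending rule above amounts to per-pixel alpha compositing: new = p · target_color + (1 − p) · original_color, with the background probability taken as 1 − p. A minimal sketch follows; `transform_pixel` is an illustrative name, not from the patent.

```python
def transform_pixel(first_color, target_color, p):
    """Blend one RGB pixel toward target_color with weight p, where p is the
    probability that the pixel belongs to the specified target.

    first product  = p * target channel
    second product = (1 - p) * original channel   (1 - p: background probability)
    """
    return tuple(
        round(p * t + (1 - p) * c)
        for c, t in zip(first_color, target_color)
    )

original = (200, 150, 100)   # first color value of the target pixel point
target = (40, 30, 120)       # target color value to transform toward
half = transform_pixel(original, target, 0.5)
```

A pixel the matting network is only half-sure about moves halfway toward the target color; pixels with p = 1 take the target color exactly and pixels with p = 0 are left unchanged, which is why strand edges blend smoothly instead of showing a hard color boundary.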
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring target probability information of a plurality of sample images;
the determining module is used for determining target segmentation information of each sample image according to the target probability information of each sample image in the plurality of sample images;
the first training module is used for training the segmentation network model to be trained according to the plurality of sample images and the target segmentation information of the plurality of sample images to obtain the segmentation network model.
Optionally, the determining module is configured to:
and performing erosion and/or dilation processing on the target probability information of each sample image to obtain the target segmentation information of each sample image.
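The erosion/dilation step can be sketched as follows, assuming the sample's target probability information is a ground-truth alpha matte: eroding the thresholded foreground mask gives the definite-foreground region, the complement of the dilated mask gives the definite-background region, and everything in between is marked unknown. The 3×3 structuring element and the 0.5 threshold are illustrative assumptions; the patent does not specify them.

```python
FOREGROUND, UNKNOWN, BACKGROUND = 255, 128, 0

def _neighborhood_all(mask, i, j, want):
    """True if every pixel in the 3x3 neighbourhood of (i, j) equals `want`;
    pixels outside the image are treated as background (0)."""
    h, w = len(mask), len(mask[0])
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            v = mask[ni][nj] if 0 <= ni < h and 0 <= nj < w else 0
            if v != want:
                return False
    return True

def alpha_to_trimap(alpha):
    """Derive a trimap from an alpha matte: erosion of the foreground mask
    yields definite foreground, the complement of its dilation yields definite
    background, and the band between them is unknown."""
    mask = [[1 if a >= 0.5 else 0 for a in row] for row in alpha]
    trimap = []
    for i in range(len(mask)):
        row = []
        for j in range(len(mask[0])):
            if _neighborhood_all(mask, i, j, 1):      # erosion: all fg
                row.append(FOREGROUND)
            elif _neighborhood_all(mask, i, j, 0):    # outside dilation: all bg
                row.append(BACKGROUND)
            else:
                row.append(UNKNOWN)
        trimap.append(row)
    return trimap

# 7x7 matte with a 3x3 solid foreground block in the centre.
alpha = [[1.0 if 2 <= i <= 4 and 2 <= j <= 4 else 0.0 for j in range(7)]
         for i in range(7)]
trimap = alpha_to_trimap(alpha)
```

In practice a library routine (e.g. morphological erosion/dilation from an image-processing package) would replace the hand-rolled neighbourhood scan, but the labeling logic is the same.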
Optionally, the apparatus further comprises:
and the second training module is used for training the to-be-trained matting network model according to the plurality of sample images, the target segmentation information and the target probability information of the plurality of sample images to obtain the matting network model.
Optionally, the specified target is hair or a human face.
In the embodiment of the application, the image to be processed can be used as the input of the segmentation network model, which determines the target segmentation information of the image; the image and its target segmentation information are then used as the input of the matting network model, which determines the target probability information of the image; finally, the specified target in the image is processed with varying intensity according to the target probability information of the image, that is, the probability that each pixel point belongs to the specified target. The specified target can thus be finely segmented and processed, improving the accuracy of target identification and the fineness of target processing, and thereby the image processing effect.
It should be noted that the division into the above functional modules in the image processing apparatus of the above embodiment is only used for illustration; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the image processing apparatus and the image processing method provided in the foregoing embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
Fig. 5 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application, where the electronic device may be a terminal or a server, and the terminal may be a mobile phone, a tablet computer, a computer, or the like. The electronic device may include one or more processors 501 and one or more memories 502, where the memories 502 store at least one instruction, and the at least one instruction is loaded and executed by the processors 501 to implement the image processing method provided in the foregoing method embodiments. Of course, the electronic device may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
Embodiments of the present application also provide a computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the image processing method described in the above embodiments.
Embodiments of the present application also provide a computer program product storing at least one instruction that is loaded and executed by a processor to implement the image processing method described in the above embodiments.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit it; any modification, equivalent substitution, improvement, or the like made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (10)

1. An image processing method, the method comprising:
acquiring an image to be processed, wherein hair exists in the image;
determining target segmentation information of the image through a segmentation network model, wherein the target segmentation information comprises position information of a foreground area identified as hair, position information of a background area identified as non-hair and position information of an unknown area which cannot be identified as background or foreground, and the segmentation network model is used for determining the target segmentation information of any image;
taking the image and the target segmentation information of the image as input of a matting network model, and determining the target probability information of the image through the matting network model, wherein the target probability information of the image comprises the probability that each pixel point in the image belongs to the hair, and the matting network model is used for determining the target probability information of any image;
determining a target color value into which the hair is to be transformed;
for a target pixel point in the image, adding a first product and a second product to obtain a second color value of the target pixel point, wherein the first product is the product of the probability that the target pixel point belongs to the hair and the target color value, the second product is the product of the probability that the target pixel point belongs to the background and the first color value of the target pixel point, and the probability that the target pixel point belongs to the background is determined according to the probability that the target pixel point belongs to the hair;
and transforming the first color value of each pixel point in the image into a second color value so as to perform color transformation on the hair in the image.
2. The method of claim 1, wherein prior to determining target segmentation information for the image by the segmentation network model, further comprising:
acquiring target probability information of a plurality of sample images;
determining target segmentation information of each sample image according to target probability information of each sample image in the plurality of sample images;
and training the segmentation network model to be trained according to the plurality of sample images and the target segmentation information of the plurality of sample images to obtain the segmentation network model.
3. The method of claim 2, wherein determining the target segmentation information for each sample image based on the target probability information for each sample image of the plurality of sample images comprises:
and performing erosion and/or dilation processing on the target probability information of each sample image to obtain the target segmentation information of each sample image.
4. The method of claim 1, wherein before the determining the target probability information of the image through the matting network model, the method further comprises:
and training the to-be-trained matting network model according to the plurality of sample images, the target segmentation information and the target probability information of the plurality of sample images to obtain the matting network model.
5. An image processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring an image to be processed, wherein hair exists in the image;
a segmentation module for taking the image as an input of a segmentation network model, determining target segmentation information of the image through the segmentation network model, wherein the target segmentation information comprises position information of a foreground area identified as hair, position information of a background area identified as non-hair and position information of an unknown area which cannot be identified as background or foreground, and the segmentation network model is used for determining target segmentation information of any image;
the image matting module is used for taking the image and the target segmentation information of the image as input of a matting network model, determining target probability information of the image through the matting network model, wherein the target probability information of the image comprises probabilities that all pixel points in the image belong to the hair, and the matting network model is used for determining target probability information of any image;
a processing module, configured to determine a target color value into which the hair is to be transformed; for a target pixel point in the image, add a first product and a second product to obtain a second color value of the target pixel point, wherein the first product is the product of the probability that the target pixel point belongs to the hair and the target color value, the second product is the product of the probability that the target pixel point belongs to the background and the first color value of the target pixel point, and the probability that the target pixel point belongs to the background is determined according to the probability that the target pixel point belongs to the hair; and transform the first color value of each pixel point in the image into the second color value, so as to perform a color transformation on the hair in the image.
6. The apparatus of claim 5, wherein the apparatus further comprises:
the second acquisition module is used for acquiring target probability information of a plurality of sample images;
the determining module is used for determining target segmentation information of each sample image according to the target probability information of each sample image in the plurality of sample images;
the first training module is used for training the segmentation network model to be trained according to the plurality of sample images and the target segmentation information of the plurality of sample images to obtain the segmentation network model.
7. The apparatus of claim 6, wherein the means for determining is configured to:
and performing erosion and/or dilation processing on the target probability information of each sample image to obtain the target segmentation information of each sample image.
8. The apparatus of claim 5, wherein the apparatus further comprises:
and the second training module is used for training the to-be-trained matting network model according to the plurality of sample images, the target segmentation information and the target probability information of the plurality of sample images to obtain the matting network model.
9. An electronic device comprising a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the image processing method of any one of claims 1 to 4.
10. A computer readable storage medium storing at least one instruction for execution by a processor to implement the image processing method of any one of claims 1 to 4.
CN201911141216.9A 2019-11-20 2019-11-20 Image processing method, device, equipment and storage medium Active CN110930296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911141216.9A CN110930296B (en) 2019-11-20 2019-11-20 Image processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110930296A CN110930296A (en) 2020-03-27
CN110930296B true CN110930296B (en) 2023-08-08

Family

ID=69851170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911141216.9A Active CN110930296B (en) 2019-11-20 2019-11-20 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110930296B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462161B (en) * 2020-03-31 2023-09-26 厦门亿联网络技术股份有限公司 System, method, storage medium and equipment for extracting real-time video image
CN111627098B (en) * 2020-05-21 2023-04-07 广州光锥元信息科技有限公司 Method and device for identifying water flow area in image and generating dynamic water flow video
CN112819848B (en) * 2021-02-04 2024-01-05 Oppo广东移动通信有限公司 Matting method, matting device and electronic equipment
CN112990331A (en) * 2021-03-26 2021-06-18 共达地创新技术(深圳)有限公司 Image processing method, electronic device, and storage medium
CN113191938B (en) * 2021-04-29 2022-11-15 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113362365A (en) * 2021-06-17 2021-09-07 云从科技集团股份有限公司 Video processing method, system, device and medium
CN113657402B (en) * 2021-10-18 2022-02-01 北京市商汤科技开发有限公司 Image matting processing method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139415A (en) * 2015-09-29 2015-12-09 小米科技有限责任公司 Foreground and background segmentation method and apparatus of image, and terminal
CN107025457A (en) * 2017-03-29 2017-08-08 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107256555A (en) * 2017-05-25 2017-10-17 腾讯科技(上海)有限公司 A kind of image processing method, device and storage medium
CN107945202A (en) * 2017-12-19 2018-04-20 北京奇虎科技有限公司 Image partition method, device and computing device based on adaptive threshold
CN108876791A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN108961303A (en) * 2018-07-23 2018-12-07 北京旷视科技有限公司 A kind of image processing method, device, electronic equipment and computer-readable medium
CN109461167A (en) * 2018-11-02 2019-03-12 Oppo广东移动通信有限公司 The training method of image processing model scratches drawing method, device, medium and terminal
CN109658330A (en) * 2018-12-10 2019-04-19 广州市久邦数码科技有限公司 A kind of color development method of adjustment and device


Also Published As

Publication number Publication date
CN110930296A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN110930296B (en) Image processing method, device, equipment and storage medium
US11710293B2 (en) Target detection method and apparatus, computer-readable storage medium, and computer device
CN111950638B (en) Image classification method and device based on model distillation and electronic equipment
CN108205655B (en) Key point prediction method and device, electronic equipment and storage medium
CN108229509B (en) Method and device for identifying object class and electronic equipment
WO2020228446A1 (en) Model training method and apparatus, and terminal and storage medium
US20190279014A1 (en) Method and apparatus for detecting object keypoint, and electronic device
CN111814902A (en) Target detection model training method, target identification method, device and medium
CN109448007B (en) Image processing method, image processing apparatus, and storage medium
KR101955919B1 (en) Method and program for providing tht region-of-interest in image by deep-learing algorithm
CN110348358B (en) Skin color detection system, method, medium and computing device
CN114511041B (en) Model training method, image processing method, device, equipment and storage medium
WO2023284608A1 (en) Character recognition model generating method and apparatus, computer device, and storage medium
CN111935479A (en) Target image determination method and device, computer equipment and storage medium
CN113034514A (en) Sky region segmentation method and device, computer equipment and storage medium
CN113658196A (en) Method and device for detecting ship in infrared image, electronic equipment and medium
CN112270404A (en) Detection structure and method for bulge defect of fastener product based on ResNet64 network
CN112330671A (en) Method and device for analyzing cell distribution state, computer equipment and storage medium
CN111104965A (en) Vehicle target identification method and device
CN111222558A (en) Image processing method and storage medium
CN112464924A (en) Method and device for constructing training set
CN114677578A (en) Method and device for determining training sample data
CN116128044A (en) Model pruning method, image processing method and related devices
CN109726741B (en) Method and device for detecting multiple target objects
CN111783519A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant