CN113469876B: Image style migration model training method, image processing method, device and equipment


Info

Publication number: CN113469876B
Application number: CN202110867587.6A
Authority: CN (China)
Prior art keywords: image, style, area, processed, region
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN113469876A
Inventor: 方慕园
Current and Original Assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110867587.6A
Publication of CN113469876A
Application granted
Publication of CN113469876B


Classifications

    • G06T 3/04 Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
    • G06N 3/04 Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 Neural networks; learning methods
    • G06T 7/11 Image analysis; region-based segmentation
    • G06T 7/194 Image analysis; segmentation involving foreground-background segmentation
    • G06T 2207/20081 Indexing scheme for image analysis or image enhancement; training or learning
    • G06T 2207/30201 Indexing scheme for image analysis or image enhancement; subject of image: human face
    • Y02T 10/40 Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to a training method for an image style migration model, an image processing method, an apparatus, an electronic device and a storage medium. The training method of the image style migration model includes the following steps: acquiring a sample image, a corresponding to-be-processed region image and an initial style image, the initial style image being an image obtained by performing image style migration processing on the region to be processed; contracting the region to be processed in the to-be-processed region image to obtain a contracted region image; obtaining a target style image from the sample image, the initial style image and the contracted region image, the target style image being an image obtained by performing image style migration processing on the contracted region in the sample image; and training a neural network model on the sample image, the contracted region image and the target style image to obtain the image style migration model. The method accurately identifies the region range of the to-be-processed region of a picture, which in turn improves the accuracy of the generated target style image.

Description

Image style migration model training method, image processing method, device and equipment
Technical Field
The disclosure relates to the technical field of image processing, and in particular to a training method for an image style migration model, an image processing method, an image processing apparatus, an electronic device and a storage medium.
Background
With the development of image processing technology, image stylization techniques have emerged. Different regions of an image typically require different stylization; for example, in a portrait dyeing task, the hair region requires color conversion while the other regions must remain unchanged.
Currently, image stylization techniques typically train a neural network on generated pairs of training data. However, because region identification in the paired pictures is inaccurate, pictures generated by the neural network tend to contain flaws, and the accuracy of the generated pictures is low.
Disclosure of Invention
The disclosure provides a training method for an image style migration model, an image processing method, an apparatus, an electronic device and a storage medium, so as to at least solve the problem of the low accuracy of pictures generated in the related art. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a training method for an image style migration model, including:
Acquiring a sample image, a to-be-processed region image corresponding to the sample image, and an initial style image; the to-be-processed region image is an image corresponding to the region to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the region to be processed in the sample image;
performing contraction processing on the region to be processed in the to-be-processed region image to obtain a contracted region image;
obtaining a target style image corresponding to the contracted region image from the sample image, the initial style image and the contracted region image; the target style image is an image obtained by performing image style migration processing on the contracted region, corresponding to the contracted region image, in the sample image;
and training a neural network model on the sample image, the contracted region image and the target style image to obtain the image style migration model.
In an exemplary embodiment, training the neural network model on the sample image, the contracted region image and the target style image to obtain the image style migration model includes: inputting the sample image and the contracted region image into the neural network model, and performing style migration processing on the contracted region in the sample image through the neural network model to obtain a predicted style image; determining a loss value of the neural network model from the difference between the predicted style image and the target style image; and adjusting model parameters of the neural network model according to the loss value to obtain the image style migration model.
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on the contracted region, corresponding to the contracted region image, in the sample image; and training the neural network model on the sample image, the contracted region image and the target style image to obtain the image style migration model includes: training the neural network model on the sample image, the contracted region image and the target style image to obtain a color conversion model.
In an exemplary embodiment, the contracted region image further includes a background region, the background region being the image region of the contracted region image other than the contracted region; and obtaining the target style image corresponding to the contracted region image from the sample image, the initial style image and the contracted region image includes: acquiring a first region image corresponding to the contracted region from the initial style image, and acquiring a second region image corresponding to the background region from the sample image; and combining the first region image and the second region image to obtain the target style image.
In an exemplary embodiment, performing contraction processing on the region to be processed in the to-be-processed region image to obtain the contracted region image includes: determining a contraction amplitude for the region to be processed in the to-be-processed region image; and performing contraction processing on the region edge of the region to be processed in the to-be-processed region image based on the contraction amplitude to obtain the contracted region image.
In an exemplary embodiment, performing contraction processing on the region edge of the region to be processed in the to-be-processed region image based on the contraction amplitude to obtain the contracted region image includes: performing erosion processing on the region edge through an erosion algorithm based on the contraction amplitude to obtain the contracted region image; or setting the pixel values at the region edge to zero based on the contraction amplitude to obtain the contracted region image.
In an exemplary embodiment, the initial style image corresponding to the sample image is obtained by: inputting the sample image into a style conversion neural network, and performing image style migration processing on the region to be processed in the sample image through the style conversion neural network to obtain the initial style image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing method, including:
acquiring an image to be processed and a to-be-processed region image corresponding to the image to be processed;
inputting the image to be processed and the to-be-processed region image into an image style migration model, and performing style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained through the training method of the image style migration model according to any embodiment of the first aspect.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the region to be processed; and performing style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain the style migration image includes: performing color conversion processing on the region to be processed in the image to be processed through the image style migration model to obtain the style migration image.
According to a third aspect of the embodiments of the present disclosure, there is provided a training apparatus for an image style migration model, including:
A sample image acquisition unit configured to acquire a sample image, a to-be-processed region image corresponding to the sample image, and an initial style image; the to-be-processed region image is an image corresponding to the region to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the region to be processed in the sample image;
a contracted image acquisition unit configured to perform contraction processing on the region to be processed in the to-be-processed region image to obtain a contracted region image;
a target image acquisition unit configured to obtain a target style image corresponding to the contracted region image from the sample image, the initial style image and the contracted region image; the target style image is an image obtained by performing image style migration processing on the contracted region, corresponding to the contracted region image, in the sample image;
and a style model training unit configured to train a neural network model on the sample image, the contracted region image and the target style image to obtain an image style migration model.
In an exemplary embodiment, the style model training unit is further configured to input the sample image and the contracted region image into the neural network model, and perform style migration processing on the contracted region in the sample image through the neural network model to obtain a predicted style image; determine a loss value of the neural network model from the difference between the predicted style image and the target style image; and adjust model parameters of the neural network model according to the loss value to obtain the image style migration model.
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on the contracted region; and the style model training unit is further configured to train the neural network model on the sample image, the contracted region image and the target style image to obtain a color conversion model.
In an exemplary embodiment, the contracted region image further includes a background region, the background region being the image region of the contracted region image other than the contracted region; and the target image acquisition unit is further configured to acquire a first region image corresponding to the contracted region from the initial style image, acquire a second region image corresponding to the background region from the sample image, and combine the first region image and the second region image to obtain the target style image.
In an exemplary embodiment, the contracted image acquisition unit is further configured to determine a contraction amplitude for the region to be processed in the to-be-processed region image, and perform contraction processing on the region edge of the region to be processed in the to-be-processed region image based on the contraction amplitude to obtain the contracted region image.
In an exemplary embodiment, the contracted image acquisition unit is further configured to perform erosion processing on the region edge through an erosion algorithm based on the contraction amplitude to obtain the contracted region image, or to set the pixel values at the region edge to zero based on the contraction amplitude to obtain the contracted region image.
In an exemplary embodiment, the sample image acquisition unit is further configured to input the sample image into a style conversion neural network, and perform image style migration processing on the region to be processed in the sample image through the style conversion neural network to obtain the initial style image.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, including:
a to-be-processed image acquisition unit configured to acquire an image to be processed and a to-be-processed region image corresponding to the image to be processed;
An image style migration unit configured to input the image to be processed and the to-be-processed region image into an image style migration model, and perform style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained through the training method of the image style migration model according to any one of the embodiments of the first aspect.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the region to be processed; and the image style migration unit is further configured to perform color conversion processing on the region to be processed in the image to be processed through the image style migration model to obtain the style migration image.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the training method of the image style migration model as described in any one of the embodiments of the first aspect, or the image processing method as described in any one of the embodiments of the second aspect.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the training method of the image style migration model as described in any one of the embodiments of the first aspect, or the image processing method as described in any one of the embodiments of the second aspect.
According to a seventh aspect of the embodiments of the present disclosure, there is provided a computer program product including a computer program which, when executed by a processor, implements the training method of the image style migration model as described in any one of the embodiments of the first aspect, or the image processing method as described in any one of the embodiments of the second aspect.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects:
A sample image, a to-be-processed region image corresponding to the sample image, and an initial style image are acquired, where the to-be-processed region image is an image corresponding to the region to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the region to be processed in the sample image. Contraction processing is performed on the region to be processed in the to-be-processed region image to obtain a contracted region image. A target style image corresponding to the contracted region image is obtained from the sample image, the initial style image and the contracted region image, where the target style image is an image obtained by performing image style migration processing on the contracted region, corresponding to the contracted region image, in the sample image. A neural network model is then trained on the sample image, the contracted region image and the target style image to obtain an image style migration model. Because the target style image is constructed from the contracted region image and the sample image after the region to be processed has been contracted, and the neural network model is trained on the sample image, the contracted region image and the target style image, the resulting image style migration model learns how the processed range follows the region to be processed. The region range of the to-be-processed region of a picture is therefore identified accurately, which improves the accuracy of the generated target style image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flowchart illustrating a method of training an image style migration model, according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating a method of deriving an image style migration model, according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating obtaining a target style image, according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating an image processing method, according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating a portrait dyeing task, according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating a training apparatus for an image style migration model, according to an exemplary embodiment.
FIG. 7 is a block diagram of an image processing apparatus, according to an exemplary embodiment.
FIG. 8 is a block diagram of an electronic device, according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a training method of an image style migration model according to an exemplary embodiment. As shown in fig. 1, the training method is used in a terminal and includes the following steps.
In step S101, a sample image, a to-be-processed region image corresponding to the sample image, and an initial style image are acquired; the to-be-processed region image is an image corresponding to the region to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the region to be processed in the sample image.
The terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer or a portable wearable device. The sample image is an image collected in advance by the terminal for model training that has not undergone image style migration processing; it may be captured by an image acquisition device of the terminal, for example a camera, or downloaded over a network. The sample image carries an image region that needs image style migration, namely the region to be processed of the sample image, so the terminal can perform image style migration processing on the region to be processed of the sample image to obtain the corresponding initial style image; the to-be-processed region image is the image corresponding to the region to be processed in the sample image.
Specifically, after sample images for training the image style migration model are collected in advance, the to-be-processed region image and the initial style image corresponding to each sample image can be obtained. The to-be-processed region image may be obtained by segmenting the different image regions of the sample image with an image segmentation algorithm, which yields the region to be processed that needs image style migration and the corresponding to-be-processed region image. Meanwhile, image style migration processing may be performed on the to-be-processed part of the sample image to obtain the initial style image corresponding to the sample image.
For example, the image style migration model to be trained may be an image stylization model for dyeing the hair in a portrait image. The terminal may collect undyed portrait images in advance as sample images and take the hair region in each image as the region to be processed. The terminal may then obtain the image corresponding to the hair region through an image segmentation algorithm, for example a mask of the hair region, as the to-be-processed region image. The hair region in the sample image may be dyed automatically, or a designer may manually retouch the picture, to obtain a dyed portrait image as the initial style image.
In step S102, contraction processing is performed on the region to be processed in the to-be-processed region image to obtain a contracted region image.
After the terminal obtains the to-be-processed region image in step S101, the region to be processed in that image may be contracted to obtain the contracted region image. The contraction may be determined by the terminal according to the part of the region whose identification is inaccurate. For example, in a dyeing task, inaccurate identification of the edge of the hair region may leave black edges in the generated dyed picture, so the terminal may contract the edge of the hair region of the portrait image to obtain the contracted region image. Similarly, for a whitening task in which the identification of the region to be whitened is inaccurate, that region can be contracted to obtain the corresponding contracted region image.
In step S103, a target style image corresponding to the contracted region image is obtained from the sample image, the initial style image and the contracted region image; the target style image is an image obtained by performing image style migration processing on the contracted region, corresponding to the contracted region image, in the sample image;
in step S104, a neural network model is trained on the sample image, the contracted region image and the target style image to obtain an image style migration model.
The target style image is an image obtained by performing image style migration processing on the contracted region, corresponding to the contracted region image, in the sample image. After the contracted region image is obtained in step S102, the terminal may derive the corresponding target style image from the sample image, the initial style image and the contracted region image, and then train the neural network model on the sample image, the contracted region image and the target style image, thereby obtaining the trained image style migration model.
For example, after obtaining the contracted region image produced by contracting the edge of the hair region of the portrait image, the terminal may use the contracted region image, the sample image and the initial style image to build an image in which only the contracted hair region is dyed, as the target style image, and then train the model on the sample image, the contracted region image and the target style image, obtaining a trained image style migration model for dyeing the hair region of a portrait image.
In the above training method of the image style migration model, the terminal acquires a sample image, a to-be-processed region image corresponding to the sample image, and an initial style image, where the to-be-processed region image is an image corresponding to the region to be processed in the sample image and the initial style image is an image obtained by performing image style migration processing on the region to be processed in the sample image; performs contraction processing on the region to be processed in the to-be-processed region image to obtain a contracted region image; obtains a target style image corresponding to the contracted region image from the sample image, the initial style image and the contracted region image, where the target style image is an image obtained by performing image style migration processing on the contracted region, corresponding to the contracted region image, in the sample image; and trains a neural network model on the sample image, the contracted region image and the target style image to obtain an image style migration model. Because the target style image is built from the contracted region image and the sample image after the region to be processed has been contracted, and the neural network model is trained on the sample image, the contracted region image and the target style image, the resulting image style migration model can learn how the processed range follows the region to be processed, so the region range of the to-be-processed region of a picture is identified accurately and the accuracy of the generated target style image is improved.
In an exemplary embodiment, as shown in fig. 2, step S104 may further include:
in step S201, the sample image and the contracted region image are input into a neural network model, and style migration processing is performed on the contracted region in the sample image through the neural network model, so as to obtain a predicted style image.
The prediction style image is obtained by predicting a neural network model according to a sample image and a shrinkage area image, and the neural network model can carry out image style migration processing on an image area corresponding to the shrinkage area image in the sample image according to the input sample image and the shrinkage area image, so as to output a corresponding prediction style image. For example, the image style migration model to be trained may be an image stylized processing model for dyeing a character image, and after the terminal inputs the sample image and the contracted area image after contraction of the hair area in the sample image into the image stylized processing model, the model may determine a contracted area corresponding to the hair area portion in the sample image according to the contracted area image, and dye the area, thereby obtaining a predicted style image for dyeing the contracted area portion in the hair area of the sample image.
In step S202, a loss value of the neural network model is determined from the difference between the predicted style image and the target style image;
in step S203, the model parameters of the neural network model are adjusted according to the loss value to obtain the image style migration model.
After obtaining the predicted style image in step S201, the terminal may compute the difference between the predicted style image and the target style image obtained in step S103, for example by taking the difference between the pixel values of the predicted style image and the pixel values of the target style image, and use this difference as the loss value of the neural network model. The terminal may then adjust the model parameters of the neural network model based on the loss value until training is complete, yielding the image style migration model. For example, a loss threshold may be set for the neural network model, and training proceeds by comparing each loss value with this threshold: while the loss value is greater than the threshold, the model parameters are adjusted again and the loss value of the adjusted model is recomputed; once the loss value falls below the threshold, adjustment stops and the neural network model at that point is taken as the trained image style migration model.
In this embodiment, the terminal trains the neural network model using, as the loss value, the difference between the target style image and the predicted style image the model produces from the sample image and the contracted region image. The trained image style migration model can therefore learn the range of the contracted region, identify the region range of the region to be processed accurately, and output images with improved accuracy.
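As a concrete illustration of steps S201 to S203, the following is a minimal PyTorch-style sketch of one training iteration. The four-channel input convention, the L1 loss and the optimizer are assumptions made for illustration; the disclosure only specifies that the loss is derived from the difference between the predicted style image and the target style image.

```python
# Minimal sketch of one training iteration (steps S201-S203).
# Assumptions: the model takes the sample image and the contracted region
# mask concatenated along the channel axis, and L1 is used as the
# pixel-difference loss; neither detail is fixed by the disclosure.
import torch
import torch.nn as nn

def train_step(model: nn.Module,
               optimizer: torch.optim.Optimizer,
               sample: torch.Tensor,           # sample image, (N, 3, H, W)
               contracted_mask: torch.Tensor,  # contracted region image, (N, 1, H, W)
               target: torch.Tensor) -> float: # target style image, (N, 3, H, W)
    # S201: style migration on the contracted region of the sample image
    predicted = model(torch.cat([sample, contracted_mask], dim=1))
    # S202: loss value from the difference between prediction and target
    loss = nn.functional.l1_loss(predicted, target)
    # S203: adjust model parameters according to the loss value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```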
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on the contracted region; step S104 may further include: training the neural network model on the sample image, the contracted region image and the target style image to obtain a color conversion model.
In this embodiment, the image style migration model to be trained may be a color conversion model that converts the color of a partial area of an image, namely the region to be processed, and the target style image is the image obtained after color conversion of that region. Specifically, the terminal may train the neural network model on the sample image, the contracted region image and a target style image obtained by color-converting the contracted region of the sample image, thereby obtaining the color conversion model.
For example, the color conversion model may be a neural network model that performs color conversion processing on the hair region of a portrait image. The terminal may collect a sample portrait image, perform color conversion processing on its hair region to obtain an initial style image with converted hair color, and identify the hair region image corresponding to the hair region as the to-be-processed region image. By contracting the hair region image, the terminal obtains the contracted hair region image and a target style image in which only the contracted hair region is color-converted, and finally trains a color conversion model that realizes color conversion of the hair region.
In this embodiment, the image style migration model may thus be a color conversion model. When color conversion of a partial area of an image is needed, the color conversion model can accurately identify the range of the area requiring color conversion, which improves the accuracy of the generated color-converted target style image.
In an exemplary embodiment, the contracted region image further includes a background region, the background region being the image region of the contracted region image other than the contracted region. As shown in fig. 3, step S103 may further include:
In step S301, a first region image corresponding to the contracted region is acquired from the initial style image, and a second region image corresponding to the background region is acquired from the sample image.
The contracted region image may consist of the contracted region and the background region, where the background region is the image region of the contracted region image other than the contracted region. The first region image is the part of the initial style image covered by the contracted region, and the second region image is the part of the sample image covered by the background region. Specifically, the terminal may take from the initial style image the region image covered by the contracted region as the first region image, and take from the sample image the region image outside the contracted region as the second region image.
For example, the terminal may extract the pixels inside the contracted region from the initial style image to form the first region image, and extract the pixels outside the contracted region from the sample image to form the second region image.
In step S302, the first region image and the second region image are combined to obtain the target style image.
After the first region image and the second region image are obtained in step S301, they may be combined; for example, the terminal may superimpose the pixels contained in the first region image and the second region image to obtain the target style image.
For example, the target style image may be calculated by the following formula:
B' = B * M' + A * (1 - M')
where B' is the combined target style image; M' is the contracted region image, consisting of the contracted region (whose pixels are set to 1) and the background region (whose pixels are set to 0); B is the initial style image; and A is the sample image. In other words, wherever M' = 1, that is, in the contracted region, the target style image B' copies pixels from the initial style image B, and wherever M' = 0, that is, in the background region, it copies pixels from the sample image A, thereby generating the target style image B'.
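For illustration, the compositing formula above can be sketched directly in NumPy. The float-valued image convention and the function name are assumptions made for the sake of the example.

```python
# Sketch of the compositing formula B' = B*M' + A*(1-M').
# Assumes images are float arrays in [0, 1] and the mask M' is 1 inside
# the contracted region and 0 in the background region.
import numpy as np

def composite_target(initial_style: np.ndarray,   # B, shape (H, W, 3)
                     sample: np.ndarray,          # A, shape (H, W, 3)
                     contracted_mask: np.ndarray  # M', shape (H, W)
                     ) -> np.ndarray:
    m = contracted_mask.astype(np.float32)[..., None]  # broadcast over channels
    # Contracted region copies pixels from B; background copies from A.
    return initial_style * m + sample * (1.0 - m)
```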
In this embodiment, the first region image corresponding to the contracted region is obtained from the initial style image, the second region image corresponding to the background region outside the contracted region is obtained from the sample image, and the two are combined to generate the target style image. This improves both the accuracy and the efficiency of generating the target style image.
In an exemplary embodiment, step S102 may further include: determining a contraction amplitude for the region to be processed in the to-be-processed region image; and performing contraction processing on the region edge of the region to be processed in the to-be-processed region image based on the contraction amplitude to obtain the contracted region image.
The contraction amplitude is the amplitude by which the region to be processed in the to-be-processed region image is contracted. It may be set by a user as required, or generated randomly by the terminal, with the randomly generated amplitude then used to contract the region edge of the region to be processed and obtain the contracted region image.
For example, for a sample image A whose corresponding to-be-processed region image is region image a, the terminal may contract the region to be processed in region image a based on the determined contraction amplitude. With two amplitudes, amplitude 1 and amplitude 2, both of which may be determined randomly by the terminal, the terminal can contract the region edge of region image a by amplitude 1 and by amplitude 2 respectively, obtaining region image b and region image c as two different contracted region images for the sample image A.
In this embodiment, the terminal obtains the contracted region image through the contraction amplitude, so the generated contracted region image, and hence the target style image derived from it, contracts according to that amplitude. This increases the diversity of the contracted region images and target style images, and thus the diversity of the training samples.
Further, performing contraction processing on the region edge of the region to be processed in the to-be-processed region image based on the contraction amplitude to obtain the contracted region image may further include: performing erosion processing on the region edge through an erosion algorithm based on the contraction amplitude to obtain the contracted region image; or setting the pixel values at the region edge to zero based on the contraction amplitude to obtain the contracted region image.
In this embodiment, contraction of the region edge of the region to be processed may be implemented by an erosion algorithm; for example, the erosion algorithm may erode the region edge by an amount matching the contraction amplitude to obtain the contracted region image. Alternatively, the pixels at the region edge may be set to zero according to the contraction amplitude to obtain the contracted region image.
In this embodiment, the terminal may contract the region edge of the region to be processed with an erosion algorithm, or by zeroing the pixels at the region edge, which improves the efficiency of obtaining the contracted region image.
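Both contraction variants can be sketched with OpenCV and NumPy as below. The elliptical kernel, the random amplitude range and the per-pixel drop rate are illustrative assumptions rather than values from the disclosure.

```python
# Sketch of the two contraction variants: morphological erosion of the
# mask, or randomly zeroing pixels near the region edge. Kernel shape,
# amplitude range and drop rate are assumptions for illustration.
import cv2
import numpy as np

def contract_by_erosion(mask: np.ndarray, amplitude: int) -> np.ndarray:
    """Erode a binary {0, 1} uint8 mask inward by `amplitude` pixels."""
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * amplitude + 1, 2 * amplitude + 1))
    return cv2.erode(mask, kernel)

def contract_by_zeroing(mask: np.ndarray, amplitude: int,
                        rng: np.random.Generator) -> np.ndarray:
    """Randomly set to zero mask pixels within `amplitude` px of the edge."""
    edge_band = (mask - contract_by_erosion(mask, amplitude)) > 0
    out = mask.copy()
    out[edge_band & (rng.random(mask.shape) < 0.5)] = 0  # assumed drop rate
    return out

# A random contraction amplitude per training round, as in the disclosure:
rng = np.random.default_rng()
amplitude = int(rng.integers(1, 8))  # assumed range
```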
In an exemplary embodiment, the initial style image corresponding to the sample image in step S101 may be obtained by: inputting the sample image into a style conversion neural network, and performing image style migration on the sample image through the style conversion neural network to obtain the initial style image.
In this embodiment, the style conversion neural network may be an existing image processing model that performs image style migration on the sample image. The style image it outputs may contain flaws caused by inaccurate region identification, but since the flawed parts are handled by contracting the region to be processed, the network can still be used. To improve the efficiency of generating the initial style image, the sample image is input into this model, which performs image style migration on it to produce the initial style image.
In this embodiment, inputting the sample image into the style conversion neural network yields the initial style image; compared with a designer performing the image style migration by hand, this reduces the designer's workload and improves the efficiency of obtaining the initial style image.
Fig. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment. As shown in fig. 4, the method is used in a terminal and includes the following steps.
In step S401, an image to be processed and a to-be-processed region image corresponding to the image to be processed are acquired.
The image to be processed is an original image that needs image style migration processing. When such an image needs processing, it can be input into the terminal; the terminal obtains the image to be processed and determines the to-be-processed region image corresponding to the region needing image style migration, for example by segmenting the image regions of the image to be processed with an image segmentation algorithm to obtain the to-be-processed region image.
In step S402, the image to be processed and the to-be-processed region image are input into an image style migration model, and style migration processing is performed on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the training method of the image style migration model according to any one of the embodiments above.
The terminal then inputs the obtained image to be processed and to-be-processed region image into the trained image style migration model. Because this model was trained on sample images, contracted region images and target style images obtained by performing image style migration on the contracted regions of the sample images, it has learned the region range of the to-be-processed image region. It can therefore accurately identify, from the to-be-processed region image, the range that requires image style migration, perform image style migration processing on the to-be-processed part of the image to be processed accordingly, and output the resulting style migration image.
In the above image processing method, an image to be processed and the corresponding to-be-processed region image are acquired; the two are input into an image style migration model, which performs style migration processing on the region to be processed in the image to be processed to obtain a style migration image; and the image style migration model is obtained by the training method of the image style migration model according to any one of the embodiments above. Because the image style migration model is trained in advance on the sample image, the contracted region image and the target style image, it can learn how the processed range follows the region to be processed and can accurately identify the region range of the to-be-processed region of a picture, so performing image style migration on the image to be processed through this model improves the accuracy of the obtained style migration image.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the region to be processed; step S402 may further include: performing color conversion processing on the region to be processed in the image to be processed through the image style migration model to obtain the style migration image.
In this embodiment, the image style migration model may be used to convert the color of the region to be processed in the image to be processed, and the terminal performs this color conversion through the model to obtain the style migration image.
For example, if the image style migration model is used to dye the hair in portrait images, the image to be processed may be any portrait image. When the terminal receives a portrait image to be dyed, it takes the hair region of the input image as the region to be processed via an image segmentation algorithm, obtains the corresponding hair-region mask as the to-be-processed region image, and inputs the portrait image together with the mask into the image style migration model. The model performs color conversion, that is, dyeing, on the hair part of the portrait image, and the dyed portrait image is obtained as the style migration image.
In this embodiment, color conversion of the region to be processed is performed by the image style migration model, which reduces flaws in the converted style migration image and improves the accuracy of the color-converted style migration image.
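A hypothetical end-to-end inference sketch for this hair-dyeing example follows. The `segment_hair` helper stands in for any image segmentation algorithm returning a binary hair mask; its name and the four-channel model interface are assumptions, not details from the disclosure.

```python
# Hypothetical inference sketch for the portrait hair-dyeing example.
# `segment_hair` is a stand-in for an image segmentation algorithm that
# returns a {0, 1} hair mask; the model interface is assumed to match
# the training-time convention (image and mask concatenated on channels).
import numpy as np
import torch

def dye_hair(model: torch.nn.Module,
             portrait: np.ndarray,   # (H, W, 3) uint8 portrait image
             segment_hair) -> np.ndarray:
    mask = segment_hair(portrait)    # (H, W) hair mask; NOT contracted at use time
    img = torch.from_numpy(portrait).float().permute(2, 0, 1) / 255.0
    m = torch.from_numpy(mask).float().unsqueeze(0)
    with torch.no_grad():
        dyed = model(torch.cat([img, m], dim=0).unsqueeze(0)).squeeze(0)
    return (dyed.permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
```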
In an exemplary embodiment, a stylized image edge restoration method based on extrapolation is also provided. The method may be applied to a portrait dyeing task, in which the hair region needs color conversion while the other regions must remain unchanged. The principle is shown in fig. 5: during training, the mask is contracted inward by a random amplitude, and the dyed range of the target picture is contracted inward in the same way, so that the neural network learns that the dyeing range follows the mask. This avoids the situation where inaccurate edge detection at the hair boundary leaves the edges of the paired pictures incompletely dyed, producing black edges in the training data and hence in the generated pictures. At inference time, the mask is restored to the full hair range, so the neural network dyes the edges as well. The specific flow is as follows:
Preparing data:
1) A sufficient number of portrait pictures are collected, each containing a person's hair region. The pictures may come from mobile phones, camera capture, publicly available internet pictures and other channels.
2) Each portrait picture Ai in the data set is fed into a style conversion neural network (or retouched manually by a designer) to obtain the corresponding style-converted picture Bi. The edges of Bi may be fully dyed, left undyed, or dyed with flaws.
3) Ai is processed with an image segmentation algorithm to obtain the corresponding hair region segmentation result Mi.
4) Every Ai is paired one by one with its corresponding Bi to form the data set D. That is, D contains many groups of original pictures Ai, corresponding hair segmentations Mi and corresponding dyed pictures Bi.
Training an AI model:
1) A black-hair photo Ai, the corresponding style-converted picture Bi (Bi usually has flaws at the edges) and the hair segmentation picture Mi (hair region 1, elsewhere 0) are randomly selected from the data set.
2) Mi is randomly contracted into Mi', either by an erosion algorithm or by randomly setting part of the pixels of Mi to 0.
3) A picture whose dyed range matches the range of Mi' is then built: a new image Bi' is generated, where every region with Mi' = 1 copies pixels from Bi and the region with Mi' = 0 copies pixels from Ai, i.e. Bi' = Bi * Mi' + Ai * (1 - Mi').
4) The black-hair photo Ai and Mi' are fed into the style conversion neural network G, and the network output is optimized toward the target Bi' by stochastic gradient descent.
5) Steps 2) to 4) are repeated for many rounds until the difference between the output generated by the G network and the target Bi' is smaller than a certain threshold, at which point the iterative process ends (a compact sketch of this loop is given after this list).
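The following compact sketch ties steps 1) to 5) together. Names mirror the text (Ai, Mi, Bi), while the `shrink_mask` helper (for example the erosion sketch above), the L1 loss and the stopping threshold are assumptions for illustration.

```python
# Compact sketch of the training loop in steps 1)-5). `shrink_mask` is a
# stand-in for the random contraction of step 2) (e.g. the erosion sketch
# above); the L1 loss and the threshold value are assumptions.
import torch
import torch.nn.functional as F

def train(G, optimizer, dataset, max_amplitude=8, threshold=0.01):
    for A_i, M_i, B_i in dataset:                # 1) pick (Ai, Mi, Bi)
        amplitude = int(torch.randint(1, max_amplitude + 1, (1,)))
        M_shrunk = shrink_mask(M_i, amplitude)   # 2) Mi -> Mi'
        B_target = B_i * M_shrunk + A_i * (1 - M_shrunk)  # 3) build Bi'
        out = G(torch.cat([A_i, M_shrunk], dim=1))        # 4) network output
        loss = F.l1_loss(out, B_target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if loss.item() < threshold:              # 5) stop when close enough
            break
```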
Deployment model:
6) The parameters of G are fixed, and G is deployed to the devices actually used, such as a terminal device, the cloud or a server.
7) In use, the input picture A and the corresponding hair segmentation M (not contracted) are fed into the neural network G, yielding a dyed picture B' with no black edges.
8) B' replaces the input picture A and is returned to the user or displayed on a terminal such as a mobile phone.
In the above embodiment, the mask and the dyeing range are randomly contracted during training, while the original, uncontracted mask is used at inference time, which yields well-dyed edges. Compared with the dyeing process of the traditional technology, the black edges at the hair boundary are noticeably reduced in the technical solution of the present disclosure, the blending of hair and face is more natural, and, because the whole restoration process is automated by the machine, a great deal of manual labor is saved.
It should be understood that, although the steps in the flowcharts of figs. 1 to 4 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 1 to 4 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily executed in sequence but may be performed in turn or in alternation with at least part of the sub-steps or stages of other steps.
FIG. 6 is a block diagram illustrating a training apparatus for an image style migration model, according to an example embodiment. Referring to fig. 6, the apparatus includes a sample image acquisition unit 601, a contracted image acquisition unit 602, a target image acquisition unit 603, and a style model training unit 604.
A sample image acquisition unit 601 configured to perform acquisition of a sample image, a region image to be processed corresponding to the sample image, and an initial style image; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the area to be processed in the sample image;
a contracted image acquisition unit 602 configured to perform contraction processing of a region to be processed in the region to be processed image, resulting in a contracted region image;
a target image acquisition unit 603 configured to perform acquisition of a target style image corresponding to the contracted area image from the sample image, the initial style image, and the contracted area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
The style model training unit 604 is configured to perform training on the neural network model according to the sample image, the contracted area image and the target style image, so as to obtain an image style migration model.
In an exemplary embodiment, the style model training unit 604 is further configured to perform inputting the sample image and the contracted region image into a neural network model, and perform style migration processing on the contracted region in the sample image through the neural network model to obtain a predicted style image; determining a loss value of the neural network model according to the difference value between the predicted style image and the target style image; and according to the loss value, adjusting model parameters of the neural network model to obtain an image style migration model.
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on the contracted region; the style model training unit 604 is further configured to perform training of the neural network model according to the sample image, the contracted area image and the target style image, resulting in a color conversion model.
In an exemplary embodiment, the contracted region image further includes a background region, the background region being the image area in the contracted region image other than the contracted region. The target image acquisition unit 603 is further configured to perform acquisition of a first region image corresponding to the contracted region from the initial style image and acquisition of a second region image corresponding to the background region from the sample image, and to combine the first region image and the second region image to obtain the target style image.
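The combination of the first region image and the second region image amounts to a masked selection, which might be sketched in Python as follows, assuming NumPy arrays and a binary mask that equals 1 inside the contracted region:

```python
import numpy as np

def compose_target_style_image(sample_image: np.ndarray,
                               initial_style_image: np.ndarray,
                               contracted_mask: np.ndarray) -> np.ndarray:
    """Take the contracted region from the initial style image (first region image)
    and the background from the sample image (second region image), then combine."""
    mask = contracted_mask.astype(bool)
    if mask.ndim == 2:  # broadcast a single-channel mask over the color channels
        mask = mask[..., None]
    return np.where(mask, initial_style_image, sample_image)
```

Here np.where selects the stylized pixels inside the contracted region and falls back to the original sample pixels elsewhere, matching the stated composition of the target style image.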
In an exemplary embodiment, the contracted image acquisition unit 602 is further configured to perform determining a contraction magnitude for the region to be processed in the region to be processed image, and performing contraction processing on the region edge of the region to be processed based on the contraction magnitude to obtain a contracted region image.
In an exemplary embodiment, the contracted image acquisition unit 602 is further configured to perform erosion processing on the region edge through an erosion algorithm based on the contraction magnitude to obtain a contracted region image; or to set the pixel values of the region edge to zero based on the contraction magnitude to obtain a contracted region image.
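Both variants, the erosion algorithm and the zeroing of edge pixel values, might be sketched with OpenCV as follows; the kernel shape derived from the contraction magnitude is an assumption:

```python
import cv2
import numpy as np

def shrink_region(mask: np.ndarray, magnitude: int, use_erosion: bool = True) -> np.ndarray:
    """Shrink the foreground of a binary mask by `magnitude` pixels."""
    kernel = np.ones((2 * magnitude + 1, 2 * magnitude + 1), np.uint8)
    if use_erosion:
        # Morphological erosion eats `magnitude` pixels off the region edge.
        return cv2.erode(mask, kernel, iterations=1)
    # Equivalent alternative: explicitly zero out the band of edge pixels.
    interior = cv2.erode(mask, kernel, iterations=1)
    edge_band = cv2.subtract(mask, interior)
    out = mask.copy()
    out[edge_band > 0] = 0  # set the edge pixel values to zero
    return out
```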
In an exemplary embodiment, the sample image acquisition unit 601 is further configured to perform inputting the sample image into a neural network model, and performing image style migration processing on the region to be processed in the sample image through the neural network model to obtain an initial style image.
Fig. 7 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to fig. 7, the apparatus includes a to-be-processed image acquisition unit 701 and an image style migration unit 702.
A to-be-processed image acquisition unit 701 configured to perform acquisition of a to-be-processed image and a to-be-processed region image corresponding to the to-be-processed image;
An image style migration unit 702 configured to perform inputting the image to be processed and the image of the region to be processed into an image style migration model, and performing style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the training method of the image style migration model according to any one of the above embodiments.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the area to be processed; the image style migration unit 702 is further configured to perform color conversion processing on the to-be-processed region in the to-be-processed image through the image style migration model, so as to obtain a style migration image.
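At inference time this unit reduces to a single forward pass of the trained model; a hypothetical usage sketch (all names illustrative):

```python
import torch

def stylize(model, image_to_process: torch.Tensor, region_mask: torch.Tensor) -> torch.Tensor:
    """Apply color conversion only inside the region to be processed."""
    model.eval()
    with torch.no_grad():
        return model(image_to_process, region_mask)
```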
The specific manner in which the units perform their operations in the apparatuses of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
Fig. 8 is a block diagram illustrating an apparatus 800 for image processing model training or for image processing, according to an example embodiment. For example, device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 8, device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, video, and the like. The memory 804 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power supply component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operational mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor assembly 814 may detect an on/off state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. The sensor assembly 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the device 800 and other devices, either wired or wireless. The device 800 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the device 800 to perform the above-described method. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, including a computer program which, when executed by a processor, implements the training method of the image style migration model described in any one of the above embodiments or the image processing method described in any one of the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. The present disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles, including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. A method for training an image style migration model, comprising:
Acquiring a sample image, a region image to be processed corresponding to the sample image and an initial style image; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the area to be processed in the sample image;
performing contraction processing on the region to be processed in the region to be processed image to obtain a contracted area image, comprising: determining a contraction magnitude for the region to be processed in the region to be processed image; performing erosion processing on the region edge through an erosion algorithm based on the contraction magnitude to obtain the contracted area image; or setting the pixel values of the region edge to zero based on the contraction magnitude to obtain the contracted area image;
obtaining a target style image corresponding to the contracted area image according to the sample image, the initial style image and the contracted area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
and training the neural network model according to the sample image, the contracted area image, and the target style image to obtain an image style migration model.
2. The method of claim 1, wherein training the neural network model based on the sample image, the contracted region image, and the target style image to obtain an image style migration model comprises:
inputting the sample image and the contracted area image into the neural network model, and performing style migration processing on the contracted area in the sample image through the neural network model to obtain a prediction style image;
determining a loss value of the neural network model according to the difference value between the prediction style image and the target style image;
and according to the loss value, adjusting model parameters of the neural network model to obtain the image style migration model.
3. The method according to claim 1, wherein the target style image is an image obtained by performing color conversion processing on a contracted area corresponding to the contracted area image in the sample image;
training the neural network model according to the sample image, the contracted area image and the target style image to obtain an image style migration model, wherein the training comprises the following steps:
training the neural network model according to the sample image, the contracted area image, and the target style image to obtain a color conversion model.
4. The method of claim 1, wherein the shrink region image further comprises a background region; the background area is an image area except the contraction area in the contraction area image;
the obtaining a target style image corresponding to the contracted area image according to the sample image, the initial style image and the contracted area image includes:
acquiring a first area image corresponding to the contracted area from the initial style image, and acquiring a second area image corresponding to the background area from the sample image;
and combining the first area image and the second area image to obtain the target style image.
5. The method according to claim 1, wherein the initial style image corresponding to the sample image is obtained by:
and inputting the sample image into a style conversion neural network, and performing image style migration processing on a region to be processed in the sample image through the style conversion neural network to obtain the initial style image.
6. An image processing method, comprising:
acquiring an image to be processed and an image of a region to be processed corresponding to the image to be processed;
inputting the image to be processed and the image of the region to be processed into an image style migration model, and performing style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by a training method of the image style migration model according to any one of claims 1 to 5.
7. The method according to claim 6, wherein the image style migration model is used for performing color conversion processing on the region to be processed;
performing style migration processing on a to-be-processed region in the to-be-processed image through the image style migration model to obtain a style migration image, including:
and performing color conversion processing on the region to be processed in the image to be processed through the image style migration model to obtain the style migration image.
8. An image style migration model training apparatus, comprising:
a sample image acquisition unit configured to perform acquisition of a sample image, a region image to be processed corresponding to the sample image, and an initial style image; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the area to be processed in the sample image;
A contracted image acquisition unit configured to perform contraction processing on the region to be processed in the region to be processed image to obtain a contracted area image; further configured to perform determining a contraction magnitude for the region to be processed in the region to be processed image; performing erosion processing on the region edge through an erosion algorithm based on the contraction magnitude to obtain the contracted area image; or setting the pixel values of the region edge to zero based on the contraction magnitude to obtain the contracted area image;
a target image acquisition unit configured to perform obtaining a target style image corresponding to the contracted area image from the sample image, the initial style image, and the contracted area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
and the style model training unit is configured to train the neural network model according to the sample image, the contracted area image and the target style image to obtain an image style migration model.
9. The apparatus of claim 8, wherein the style model training unit is further configured to perform inputting the sample image and the contracted region image into the neural network model, and performing style migration processing on the contracted region in the sample image through the neural network model to obtain a predicted style image; determining a loss value of the neural network model according to the difference value between the prediction style image and the target style image; and according to the loss value, adjusting model parameters of the neural network model to obtain the image style migration model.
10. The apparatus according to claim 8, wherein the target style image is an image obtained by performing color conversion processing on a contracted area corresponding to the contracted area image in the sample image; the style model training unit is further configured to perform training on the neural network model according to the sample image, the contracted area image and the target style image to obtain a color conversion model.
11. The apparatus of claim 8, wherein the shrink region image further comprises a background region; the background area is an image area except the contraction area in the contraction area image; the target image acquisition unit is further configured to perform acquisition of a first region image corresponding to the contracted region from the initial style image and acquisition of a second region image corresponding to the background region from the sample image; and combining the first area image and the second area image to obtain the target style image.
12. The apparatus according to claim 8, wherein the sample image acquisition unit is further configured to perform inputting the sample image into a style conversion neural network, and performing image style migration processing on a region to be processed in the sample image through the style conversion neural network, to obtain the initial style image.
13. An image processing apparatus, comprising:
a to-be-processed image acquisition unit configured to perform acquisition of an to-be-processed image and a to-be-processed area image corresponding to the to-be-processed image;
the image style migration unit is configured to input the image to be processed and the image of the region to be processed into an image style migration model, and perform style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by a training method of the image style migration model according to any one of claims 1 to 5.
14. The apparatus of claim 13, wherein the image style migration model is configured to perform color conversion processing on the area to be processed; the image style migration unit is further configured to execute color conversion processing on the to-be-processed area in the to-be-processed image through the image style migration model to obtain the style migration image.
15. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the training method of the image style migration model of any one of claims 1 to 5 or the image processing method of claim 6 or 7.
16. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the training method of the image style migration model of any one of claims 1 to 5, or the image processing method of claim 6 or 7.
CN202110867587.6A 2021-07-28 2021-07-28 Image style migration model training method, image processing method, device and equipment Active CN113469876B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110867587.6A CN113469876B (en) 2021-07-28 2021-07-28 Image style migration model training method, image processing method, device and equipment


Publications (2)

Publication Number Publication Date
CN113469876A CN113469876A (en) 2021-10-01
CN113469876B 2024-01-09

Family

ID=77883277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110867587.6A Active CN113469876B (en) 2021-07-28 2021-07-28 Image style migration model training method, image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN113469876B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989104A (en) * 2021-10-25 2022-01-28 北京达佳互联信息技术有限公司 Training method of image style migration model, and image style migration method and device
CN118096505B (en) * 2024-04-28 2024-08-09 厦门两万里文化传媒有限公司 Commodity display picture generation method and system

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325954A (en) * 2018-09-18 2019-02-12 北京旷视科技有限公司 Image partition method, device and electronic equipment
CN109523460A (en) * 2018-10-29 2019-03-26 北京达佳互联信息技术有限公司 Moving method, moving apparatus and the computer readable storage medium of image style
CN109934895A (en) * 2019-03-18 2019-06-25 北京海益同展信息科技有限公司 Image local feature moving method and device
CN110222722A (en) * 2019-05-14 2019-09-10 华南理工大学 Interactive image stylization processing method, calculates equipment and storage medium at system
CN110830706A (en) * 2018-08-08 2020-02-21 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN111242844A (en) * 2020-01-19 2020-06-05 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, server, and storage medium
CN111598091A (en) * 2020-05-20 2020-08-28 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and computer readable storage medium
WO2020220807A1 (en) * 2019-04-29 2020-11-05 商汤集团有限公司 Image generation method and apparatus, electronic device, and storage medium
CN112348737A (en) * 2020-10-28 2021-02-09 达闼机器人有限公司 Method for generating simulation image, electronic device and storage medium
CN112734627A (en) * 2020-12-24 2021-04-30 北京达佳互联信息技术有限公司 Training method of image style migration model, and image style migration method and device
KR102260628B1 (en) * 2020-02-13 2021-06-03 이인현 Image generating system and method using collaborative style transfer technology
CN113012185A (en) * 2021-03-26 2021-06-22 影石创新科技股份有限公司 Image processing method, image processing device, computer equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant