CN112102149A - Figure hair style replacing method, device, equipment and medium based on neural network - Google Patents
- Publication number: CN112102149A
- Application number: CN201910528062.2A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/04
- G06N3/00 Computing arrangements based on biological models
- G06N3/02 Neural networks
- G06N3/04 Architecture, e.g. interconnection topology
- G06N3/045 Combinations of networks
Abstract
A neural-network-based method, apparatus, device and medium for replacing a person's hairstyle. The method comprises the following steps: extracting a hairstyle contour image and a face pose image of a person; training a preset neural network with the hairstyle contour image and the face pose image to obtain a hairstyle replacement model; generating, using the hairstyle replacement model, a hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced, according to the hairstyle contour image of the replacing person and the face pose image of the person to be replaced; and replacing the hairstyle of the person to be replaced using that generated hairstyle contour image. The method, apparatus, device and medium provided by the embodiments of the invention can replace the hairstyle of a person to be replaced, obtain the replacing person's hairstyle under the pose of the person to be replaced more accurately, and improve the user's visual experience.
Description
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a neural-network-based method, apparatus, device and medium for replacing a person's hairstyle.
Background
With the rise of Artificial Intelligence (AI), AI technology has been widely applied in fields such as medical treatment and communication. Face swapping, as one of its concrete applications, has attracted wide attention.
Currently, most face-swapping techniques can only replace the face itself: for example, the face of person B in a video is replaced with the face of person A, so that person A appears to perform person B's actions in the video. However, other characteristics of person A and person B, such as their hairstyles, may differ. Because only the face is replaced, the hairstyle is left unchanged, and the face-swapped person in the video ends up resembling neither person A nor person B.
Therefore, drawing tools such as Photoshop are currently used to manually edit person B's hairstyle in the video to resemble person A's. However, this processing method cannot accurately reproduce person A's hairstyle under person B's current pose.
Disclosure of Invention
The embodiments of the invention provide a neural-network-based method, apparatus, device and medium for replacing a person's hairstyle. They can replace the hairstyle of a person to be replaced and obtain the replacing person's hairstyle under the pose of the person to be replaced more accurately, so that the person to be replaced looks visually more like the replacing person, improving the user's visual experience.
In one aspect of the embodiments of the present invention, a method for replacing a person's hairstyle based on a neural network is provided, where the method includes:
extracting a hairstyle contour image and a face pose image of a person;
training a preset neural network with the hairstyle contour image and the face pose image to obtain a hairstyle replacement model;
generating, using the hairstyle replacement model, a hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced, according to the hairstyle contour image of the replacing person and the face pose image of the person to be replaced;
and replacing the hairstyle of the person to be replaced using the hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced.
In another aspect of the embodiments of the present invention, there is provided a person hairstyle replacing apparatus based on a neural network, the apparatus including:
an image extraction unit, configured to extract a hairstyle contour image of a person and a face pose image of the person;
an image training unit, configured to train a preset neural network with the hairstyle contour image and the face pose image to obtain a hairstyle replacement model;
an image generation unit, configured to generate, using the hairstyle replacement model, a hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced, according to the hairstyle contour image of the replacing person and the face pose image of the person to be replaced;
and an image replacement unit, configured to replace the hairstyle of the person to be replaced using the hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced.
According to another aspect of the embodiments of the present invention, there is provided a neural network-based human hair style replacement apparatus, including:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the neural network-based person hair style replacement method provided as any one of the aspects of the embodiments of the present invention described above.
According to another aspect of embodiments of the present invention, there is provided a computer storage medium having computer program instructions stored thereon, the computer program instructions when executed by a processor implement the neural network-based person hair style replacement method as provided in any one aspect of the embodiments of the present invention.
The neural-network-based method, apparatus, device and medium for replacing a person's hairstyle provided by the embodiments of the invention can replace the hairstyle of a person to be replaced and obtain the replacing person's hairstyle under the pose of the person to be replaced more accurately, so that the person to be replaced looks visually more like the replacing person, further improving the user's visual experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the invention more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Figure 1 illustrates a flow diagram of a neural network based person hair style replacement method in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a preset neural network according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a figure hair style replacement device based on a neural network according to an embodiment of the present invention;
fig. 4 is a block diagram illustrating an exemplary hardware architecture of a computing device capable of implementing a neural network-based person hair style replacement method and apparatus in accordance with an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but possibly also other elements not expressly listed or inherent to it. Without further limitation, an element preceded by "comprising a" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The following describes in detail a person hairstyle replacement method, apparatus, device and medium based on a neural network according to an embodiment of the present invention, with reference to the accompanying drawings. It should be noted that these examples are not intended to limit the scope of the present disclosure.
A figure hairstyle replacing method based on a neural network according to an embodiment of the present invention is described in detail with reference to fig. 1.
For better understanding of the present invention, the method for replacing a person's hair style based on a neural network according to an embodiment of the present invention is described in detail below with reference to fig. 1, and fig. 1 is a flowchart illustrating the method for replacing a person's hair style based on a neural network according to an embodiment of the present invention.
As shown in fig. 1, the method for replacing a hairstyle of a person based on a neural network in the embodiment of the present invention includes the following steps:
and S110, extracting the hairstyle outline image of the person and the face posture image of the person.
In one embodiment of the invention, the hairstyle contour image is an image representing the contour of a hairstyle, such as the contour of long curly hair or of short hair. The face pose image represents poses such as the face turned left or right, raised or lowered, or tilted within the image plane.
In one embodiment of the invention, feature-extraction key points can be determined from the person's face contour according to the characteristics of the person's face pose. After all feature-extraction key points are obtained, the person's face pose image can be extracted from them. The person's hairstyle contour image can be obtained by processing the person's hair with a hair segmentation model.
In the embodiment of the invention, a person's hairstyle is affected by the person's pose: the hairstyle seen in profile differs from the hairstyle seen from the front. Therefore, the embodiment of the invention extracts both the person's face pose image and the hairstyle contour image corresponding to that pose, which allows the neural network to be trained more accurately.
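To make the extraction step concrete, the following is a minimal sketch of deriving a hairstyle contour from a binary hair segmentation mask in plain NumPy. The patent does not specify the segmentation model or the contour operator; the boundary-by-erosion trick used here is only one common, assumed choice.

```python
import numpy as np

def hair_contour(mask: np.ndarray) -> np.ndarray:
    """Boundary of a binary hair mask: pixels inside the mask whose
    4-neighbourhood leaves the mask (the mask minus its erosion)."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # A pixel survives erosion only if it and its 4 neighbours are all set.
    eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    return (m & ~eroded).astype(np.uint8)

# Toy 5x5 "hair" mask: a filled 3x3 block has an 8-pixel boundary ring.
mask = np.zeros((5, 5), np.uint8)
mask[1:4, 1:4] = 1
contour = hair_contour(mask)
```

In practice the mask would come from the hair segmentation model mentioned above rather than be written by hand.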
S120: training the preset neural network with the hairstyle contour image and the face pose image to obtain a hairstyle replacement model.
In an embodiment of the present invention, a Generative Adversarial Network (GAN) combined with a Visual Geometry Group (VGG) network is taken as an example of the preset neural network, and the training process of the preset neural network is described in detail.
A GAN consists of a generator (G) and a discriminator (D), which can be viewed as a pair of competing network models. G is a network that generates images; D is a judgment network that decides whether an image is a real sample. During training, G aims to generate images realistic enough to fool D, while D aims to distinguish the images generated by G from real samples.
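The adversarial objective just described can be illustrated with a toy numeric example using binary cross-entropy on the discriminator's scores. The patent does not disclose the GAN's exact loss formulation, so this is only an illustrative sketch under that assumption.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

# D's scores on real samples (should approach 1) and on G's fakes (should approach 0).
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.2, 0.1])

d_loss = bce(d_real, np.ones(2)) + bce(d_fake, np.zeros(2))  # D: real -> 1, fake -> 0
g_loss = bce(d_fake, np.ones(2))                             # G: wants its fakes scored as 1
```

With these scores D is currently winning, so G's loss is the larger of the two; gradient steps on G would push `d_fake` upward, and steps on D would push it back down.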
The VGG network is a deep convolutional neural network that stacks uniformly sized 3×3 convolution kernels and 2×2 max-pooling layers throughout, which gives it good classification and recognition performance.
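The two building blocks just mentioned can be shown in miniature with NumPy: a 3×3 "valid" convolution followed by 2×2 max pooling. This is a didactic sketch of the operations, not the actual VGG implementation.

```python
import numpy as np

def conv3x3(x, k):
    """'Valid' 3x3 convolution (implemented as cross-correlation, as in CNNs)."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (x[i:i+3, j:j+3] * k).sum()
    return out

def maxpool2x2(x):
    """Non-overlapping 2x2 max pooling; trailing odd rows/cols are dropped."""
    h, w = x.shape
    return x[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).max(axis=(1, 3))

x = np.arange(36, dtype=float).reshape(6, 6)
k = np.zeros((3, 3)); k[1, 1] = 1.0          # identity kernel: conv returns the centre crop
y = maxpool2x2(conv3x3(x, k))                # 6x6 -> conv -> 4x4 -> pool -> 2x2
```

Stacking many such conv layers (with learned kernels and nonlinearities) between pooling stages is what gives VGG its depth.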
In an embodiment of the present invention, in order to make training of the preset neural network more accurate and obtain a more accurate hairstyle replacement model, multiple hairstyle contour images and multiple face pose images of multiple people may be extracted.
When the preset neural network composed of the GAN and the VGG network is trained, one hairstyle contour image can be selected from the extracted hairstyle contour images of the multiple persons, and one face pose image can be selected from the face pose images of the same person to whom the selected hairstyle contour image belongs. It should be understood that the invention does not limit the order in which the hairstyle contour image and the face pose image are selected.
Further, the selected hairstyle contour image and face pose image serve as the input data of the preset neural network, and the hairstyle contour image corresponding to the selected face pose image serves as the target data of the preset neural network. The input data and the target data together form one training pair, and multiple training pairs can be used when training the preset neural network.
For example, suppose there are M persons whose hairstyle contour images are F_A1, …, F_An, F_B1, …, F_Bn, …, F_M1, …, F_Mn, and whose face pose images are Z_A1, …, Z_An, Z_B1, …, Z_Bn, …, Z_M1, …, Z_Mn. Among these images, suppose the hairstyle contour image extracted for person A is F_A1 and the face pose image extracted for person A is Z_A5. Then F_A1 and Z_A5 serve as the input data of the preset neural network, and the hairstyle contour image F_A5 corresponding to Z_A5 serves as the target data. F_A1, Z_A5 and F_A5 together form one training pair for the preset neural network.
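The construction of such a training pair can be sketched in toy Python, with strings standing in for images; the names follow the F/Z notation of the example above.

```python
def make_training_pair(contours, poses, person, i, j):
    """One training pair: input (F_person_i, Z_person_j), target F_person_j.
    `contours` and `poses` map a person's name to a list of images."""
    inputs = (contours[person][i], poses[person][j])
    target = contours[person][j]
    return inputs, target

# Toy data for person "A": strings stand in for the actual image arrays.
contours = {"A": [f"F_A{n}" for n in range(1, 7)]}
poses    = {"A": [f"Z_A{n}" for n in range(1, 7)]}
(inp, tgt) = make_training_pair(contours, poses, "A", 0, 4)  # (F_A1, Z_A5) -> F_A5
```

Note that input contour and target contour belong to the same person, which is exactly what lets the network learn to re-render one hairstyle under a different pose.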
As shown in fig. 2, fig. 2 is a schematic diagram of the preset neural network structure according to an embodiment of the present invention. When F_A1 and Z_A5 are input into G of the preset neural network composed of the GAN and the VGG network, G generates a corresponding image as the output data. D in the GAN network uses Z_A5, the output data and the target data F_A5, while the VGG network uses the output data and the target data F_A5, to compute the function values of the first-order-norm loss function (L1 Loss), the VGG Loss and the Structural Similarity Index (SSIM) Loss, respectively. The computed function values are weighted and summed to obtain the loss function value of the whole preset neural network.
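The weighted summation of the individual loss terms can be sketched as follows. The weights shown are arbitrary placeholders, since the patent does not disclose concrete values, and the VGG and SSIM terms are stand-in numbers; only the L1 term is actually computed.

```python
import numpy as np

def l1_loss(out, tgt):
    """First-order-norm (L1) loss: mean absolute difference."""
    return float(np.abs(out - tgt).mean())

def total_loss(parts, weights):
    """Weighted sum of individual loss terms, as in the weighted
    combination of L1, VGG and SSIM losses described above."""
    return sum(weights[name] * value for name, value in parts.items())

out = np.array([0.2, 0.6])
tgt = np.array([0.0, 1.0])
parts = {"l1": l1_loss(out, tgt), "vgg": 0.5, "ssim": 0.1}   # vgg/ssim: stand-in values
loss = total_loss(parts, {"l1": 1.0, "vgg": 0.1, "ssim": 0.2})
```

The relative weights control which aspect of the output (pixel fidelity, perceptual similarity, or structure) dominates the gradient signal.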
Further, in an embodiment of the present invention, the VGG network may be used only to calculate the VGG Loss, and the network parameters of the VGG network are not adjusted. Therefore, the network parameters of the VGG network in the preset neural network may be fixed.
When the loss function value of the whole preset neural network obtained by weighted summation does not meet the preset criterion, the network parameters of the GAN in the preset neural network are adjusted, until the loss function value between the output data and the target data obtained through the preset neural network meets the preset criterion. The training of the whole preset neural network is then complete, and the hairstyle replacement model is obtained.
In an embodiment of the present invention, the preset criterion for the loss function value may be that the difference between two consecutive loss function values falls within a preset range at least a preset number of times.
For example, suppose the preset criterion is that the difference between consecutive loss function values falls within [0, 0.1] at least 3 times. During training, the first loss function value is 0.6, the second 0.4, the third 0.35, the fourth 0.33, the fifth 0.32 and the sixth 0.31.
Based on this, it can be found that the difference between the first and second loss function values is 0.2, the difference between the second and third loss function values is 0.05, the difference between the third and fourth loss function values is 0.02, the difference between the fourth and fifth loss function values is 0.01, and the difference between the fifth and sixth loss function values is 0.01.
Therefore, the difference between consecutive loss function values falls within [0, 0.1] four times, which exceeds the required 3 times. At this point, the loss function value between the output data and the target data obtained through the preset neural network can be considered to meet the preset criterion, the training of the whole preset neural network is complete, and the hairstyle replacement model is obtained.
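The stopping criterion illustrated above can be written directly in Python, using the example's loss values; the interval bounds and the required count are parameters.

```python
def meets_stop_criterion(losses, low=0.0, high=0.1, needed=3):
    """True once the absolute difference between consecutive loss values
    has fallen within [low, high] at least `needed` times."""
    diffs = [abs(a - b) for a, b in zip(losses, losses[1:])]
    hits = sum(low <= d <= high for d in diffs)
    return hits >= needed

losses = [0.6, 0.4, 0.35, 0.33, 0.32, 0.31]   # values from the example above
done = meets_stop_criterion(losses)            # 4 of the 5 diffs fall within [0, 0.1]
```

In a real training loop this check would run after every epoch, stopping parameter updates once it returns True.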
It should be understood that the iteration counts mentioned in the embodiments of the present invention are only exemplary; in an actual training process, dozens or even hundreds of training iterations are often required before the loss function meets the preset criterion.
In an embodiment of the present invention, the weight of the function value corresponding to each loss function can be further adjusted, so as to obtain a more accurate hair style replacement model.
In another embodiment of the present invention, the function values of the individual loss functions may each be used as loss function values of the whole preset neural network. For example, the function values of L1 Loss, VGG Loss, the generator loss function (G Loss), the discriminator loss function (D Loss) and SSIM Loss are all used as loss function values of the entire preset neural network.
When the network parameters are adjusted, the parameters of G in the GAN network can be adjusted according to the function values of L1 Loss, VGG Loss, G Loss and SSIM Loss, and the parameters of D in the GAN network can be adjusted according to the function value of D Loss.
It should be noted that the present invention is not limited to what network parameters in the GAN network are adjusted, as long as all learnable parameters in the GAN network are adjusted.
S130: generating, using the hairstyle replacement model, a hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced, according to the hairstyle contour image of the replacing person and the face pose image of the person to be replaced.
In one embodiment of the invention, the hairstyle contour image F_A1 of the replacing person A and the face pose image Z_B6 of the person B to be replaced are input into the hairstyle replacement model, and the hairstyle replacement model generates the hairstyle contour image of the replacing person A corresponding to the face pose Z_B6 of the person B to be replaced.
In the embodiment of the invention, the GAN network is trained using the person's hairstyle contour image and face pose image, and its network parameters are continuously adjusted based on the loss function value until that value meets the preset criterion, yielding a more accurate hairstyle replacement model. Any replacing person's hairstyle contour image and any face pose image of a person to be replaced can then be supplied, and the trained hairstyle replacement model replaces the hairstyle of the person to be replaced, obtaining the replacing person's hairstyle under the pose of the person to be replaced more accurately.
S140: replacing the hairstyle of the person to be replaced using the hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced.
SC-FEGAN is a GAN-based face image editing method that edits face photos according to a user's free-form sketch and selected colors.
In an embodiment of the invention, SC-FEGAN can be used to edit the hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced, producing a hairstyle image with hairstyle texture features and color.
The hairstyle of the person to be replaced is then replaced using the obtained hairstyle image with hairstyle texture features and color.
In another embodiment of the present invention, the obtained hairstyle image with hairstyle texture features and colors may be applied to the person to be replaced after face swapping, so that both the face and the hairstyle of the person are replaced.
In the embodiment of the invention, replacing the hairstyle of the person to be replaced makes the person to be replaced look visually more like the replacing person, which can improve the user's visual experience.
The neural-network-based apparatus for replacing a person's hairstyle according to an embodiment of the present invention, which corresponds to the method described above, is described in detail below with reference to fig. 3.
Fig. 3 is a schematic structural diagram of a figure hair style replacement device based on a neural network according to an embodiment of the present invention.
As shown in fig. 3, the neural network-based person hair style replacing apparatus includes:
an image extraction unit 310, configured to extract a hairstyle contour image of a person and a face pose image of the person.
an image training unit 320, configured to train the preset neural network with the hairstyle contour image and the face pose image to obtain the hairstyle replacement model;
an image generation unit 330, configured to generate, using the hairstyle replacement model, a hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced, according to the hairstyle contour image of the replacing person and the face pose image of the person to be replaced;
and an image replacement unit 340, configured to replace the hairstyle of the person to be replaced using the hairstyle contour image of the replacing person corresponding to the face pose of the person to be replaced.
The neural-network-based apparatus for replacing a person's hairstyle can extract a person's hairstyle contour image and face pose image and use them to train the GAN network, continuously adjusting the GAN network's parameters based on the loss function value until that value meets the preset criterion, thereby obtaining a more accurate hairstyle replacement model. Any replacing person's hairstyle contour image and any face pose image of a person to be replaced can then be supplied, and the trained hairstyle replacement model replaces the hairstyle of the person to be replaced, obtaining the replacing person's hairstyle under that person's pose more accurately.
In one embodiment of the present invention, the image replacing unit 340 further includes:
and the image processing unit is used for editing the hairstyle outline image of the replaced person corresponding to the human face posture of the person to be replaced by utilizing SC-FEGAN to obtain the hairstyle image with hairstyle texture characteristics.
And the image replacing subunit is used for replacing the hairstyle of the person to be replaced by utilizing the hairstyle image.
In one embodiment of the present invention, the image extraction unit 310 further includes:
and the key point determining unit is used for determining the feature extraction key points.
And the image extraction subunit is used for extracting key points based on the features, extracting a human face posture image of the person and processing the hair of the person by using the hair segmentation model to obtain a hair style texture image of the person.
In an embodiment of the present invention, the image training unit 320 is specifically configured to use the hairstyle contour image and the face pose image as input data of a preset neural network, use the hairstyle contour image corresponding to the face pose image as target data of the preset neural network, and train the preset neural network to obtain a hairstyle replacement model.
In one embodiment of the present invention, the image training unit 320 further includes:
and the training subunit is used for inputting the hairstyle contour image and the human face posture image into the preset neural network to obtain output data of the preset neural network.
And the calculating unit is used for calculating the loss function value of the preset neural network by using the output data and the target data.
And the parameter adjusting unit is used for adjusting the network parameters of the preset neural network based on the loss function value if the loss function value does not accord with the preset standard until the loss function value accords with the preset standard.
In an embodiment of the invention, the calculating unit is specifically configured to calculate function values corresponding to the plurality of loss functions based on the output data and the target data; and carrying out weighted summation on the function values corresponding to the plurality of loss functions to obtain the loss function value of the preset neural network.
In one embodiment of the invention, the plurality of loss functions includes at least two of the following: L1 Loss, VGG Loss and SSIM Loss.
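For illustration, a simplified single-window SSIM computed over the whole image: the full SSIM index averages this statistic over local windows, and the stability constants here follow the common defaults for images with values in [0, 1], which is an assumption not stated in the patent.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole image (the full SSIM index
    averages this statistic over local windows)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2*mx*my + c1) * (2*cov + c2))
                 / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

a = np.linspace(0, 1, 16).reshape(4, 4)
same = ssim_global(a, a)            # identical images -> SSIM = 1
ssim_loss = 1.0 - same              # used as a loss: 0 for a perfect match
```

Using 1 minus SSIM as a loss term rewards the generator for preserving the structural layout of the target, which L1 alone does not guarantee.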
In one embodiment of the present invention, the predetermined neural network may be a neural network composed of GAN and VGG networks.
Fig. 4 is a block diagram illustrating an exemplary hardware architecture of a computing device capable of implementing a neural network-based person hair style replacement method and apparatus in accordance with embodiments of the present invention.
As shown in fig. 4, computing device 400 includes an input device 401, an input interface 402, a central processor 403, a memory 404, an output interface 405, and an output device 406. The input interface 402, the central processing unit 403, the memory 404, and the output interface 405 are connected to each other through a bus 410, and the input device 401 and the output device 406 are connected to the bus 410 through the input interface 402 and the output interface 405, respectively, and further connected to other components of the computing device 400.
Specifically, the input device 401 receives input information from the outside and transmits the input information to the central processor 403 through the input interface 402; the central processor 403 processes the input information based on computer-executable instructions stored in the memory 404 to generate output information, stores the output information temporarily or permanently in the memory 404, and then transmits the output information to the output device 406 through the output interface 405; output device 406 outputs the output information outside of computing device 400 for use by a user.
That is, the computing device shown in Fig. 4 may also be implemented as a neural network-based person hairstyle replacement device, which may include: a memory storing computer-executable instructions; and a processor which, when executing those instructions, may implement the neural network-based person hairstyle replacement method and apparatus described in connection with Figs. 1-3.
An embodiment of the present invention further provides a computer-readable storage medium having computer program instructions stored thereon; when the computer program instructions are executed by a processor, they implement the neural network-based person hairstyle replacement method provided by the embodiments of the present invention.
It is to be understood that the invention is not limited to the specific arrangements and instrumentalities described above and shown in the drawings. A detailed description of known methods is omitted here for brevity. In the above embodiments, several specific steps are described and shown as examples; however, the method processes of the present invention are not limited to those steps, and those skilled in the art can make changes, modifications, and additions, or alter the order of the steps, after comprehending the spirit of the invention. The functional blocks shown in the structural block diagrams above may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, Application-Specific Integrated Circuits (ASICs), suitable firmware, plug-ins, or function cards. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information; examples include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, and Radio Frequency (RF) links. The code segments may be downloaded via computer networks such as the Internet or an intranet.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. For example, the algorithms described in the specific embodiments may be modified without departing from the basic spirit of the invention. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (11)
1. A neural network-based person hairstyle replacement method, characterized by comprising:
extracting a hairstyle outline image of a person and a face pose image of the person;
training a preset neural network with the hairstyle outline image and the face pose image to obtain a hairstyle replacement model;
generating, with the hairstyle replacement model, a hairstyle outline image of the replaced person that matches the face pose of the person to be replaced, according to the hairstyle outline image of the replaced person and the face pose image of the person to be replaced;
and replacing the hairstyle of the person to be replaced with the hairstyle outline image of the replaced person that matches the face pose of the person to be replaced.
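The four steps of claim 1 can be sketched as a small pipeline. This is purely illustrative: every name is hypothetical, and the extraction, generation, and blending components are injected as callables rather than implemented.

```python
def replace_hairstyle(target_photo, replacement_photo,
                      extract_outline, extract_pose, model, blend):
    """Illustrative sketch of the four steps of claim 1.

    The callables stand in for the patent's components:
    extract_outline/extract_pose for the extraction step,
    model for the trained hairstyle replacement model,
    and blend for the final replacement step.
    """
    pose = extract_pose(target_photo)             # face pose of the person to be replaced
    outline = extract_outline(replacement_photo)  # hairstyle outline of the replaced person
    adapted = model(outline, pose)                # outline adapted to the target face pose
    return blend(target_photo, adapted)           # apply the adapted hairstyle to the target
```

With real components, `model` would be the trained GAN-based hairstyle replacement model and `blend` the SC-FEGAN-based editing step of claim 2.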
2. The neural network-based person hairstyle replacement method of claim 1, wherein replacing the hairstyle of the person to be replaced with the hairstyle outline image of the replaced person that matches the face pose of the person to be replaced comprises:
editing the hairstyle outline image of the replaced person that matches the face pose of the person to be replaced with the sketch-based face editing network SC-FEGAN to obtain a hairstyle image with hairstyle texture features;
and replacing the hairstyle of the person to be replaced with the hairstyle image.
3. The neural network-based person hairstyle replacement method of claim 1, wherein extracting the hairstyle outline image of the person and the face pose image of the person comprises:
determining feature-extraction key points;
extracting the face pose image of the person based on the feature-extraction key points;
and processing the hair of the person with a hair segmentation model to obtain the hairstyle outline image of the person.
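One plausible (but assumed, not patent-specified) way to turn facial key points into a pose image is to rasterize them into a sparse map that the network can consume alongside the hairstyle outline:

```python
import numpy as np

def pose_image_from_keypoints(keypoints, size=(128, 128)):
    """Rasterize (x, y) facial key points into a sparse pose image.

    A hypothetical stand-in for the pose-image construction of claim 3;
    real systems often also draw connecting contours or Gaussian blobs.
    """
    img = np.zeros(size, dtype=np.float32)
    for x, y in keypoints:
        # Skip key points that fall outside the image bounds.
        if 0 <= y < size[0] and 0 <= x < size[1]:
            img[y, x] = 1.0
    return img
```

The key points themselves would come from a face landmark detector, and the hairstyle outline from the hair segmentation model named in the claim.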
4. The neural network-based person hairstyle replacement method of claim 1, wherein training the preset neural network with the hairstyle outline image and the face pose image to obtain the hairstyle replacement model comprises:
taking the hairstyle outline image and the face pose image as input data of the preset neural network, taking the hairstyle outline image corresponding to the face pose image as target data of the preset neural network, and training the preset neural network to obtain the hairstyle replacement model.
5. The neural network-based person hairstyle replacement method of claim 4, wherein training the preset neural network comprises:
inputting the hairstyle outline image and the face pose image into the preset neural network to obtain output data of the preset neural network;
calculating a loss function value of the preset neural network using the output data and the target data;
and, if the loss function value does not meet a preset standard, adjusting the network parameters of the preset neural network based on the loss function value and again inputting the hairstyle outline image and the face pose image into the preset neural network to obtain new output data, until the loss function value meets the preset standard.
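The iterate-until-the-loss-meets-the-standard procedure of claim 5 can be sketched generically. The function and parameter names are illustrative; in practice `update_fn` would be a gradient-based optimizer step rather than the injected callable used here.

```python
def train_until_converged(model, inputs, targets, loss_fn, update_fn,
                          threshold=0.01, max_iters=1000):
    """Sketch of claim 5's training loop.

    Repeats: forward pass -> loss evaluation -> parameter adjustment,
    stopping once the loss meets the preset standard (threshold).
    """
    loss = float("inf")
    for _ in range(max_iters):
        outputs = model(inputs)           # forward pass through the network
        loss = loss_fn(outputs, targets)  # e.g. the weighted multi-loss of claim 6
        if loss < threshold:              # the "preset standard"
            break
        model = update_fn(model, loss)    # adjust network parameters
    return model, loss
```

The `max_iters` guard is an assumption added so the loop always terminates; the claim itself only specifies the loss criterion.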
6. The neural network-based person hairstyle replacement method of claim 5, wherein calculating the loss function value of the preset neural network using the output data and the target data comprises:
calculating function values corresponding to a plurality of loss functions based on the output data and the target data;
and performing a weighted summation of the function values corresponding to the plurality of loss functions to obtain the loss function value of the preset neural network.
7. The neural network-based person hairstyle replacement method of claim 6, wherein the plurality of loss functions comprises at least two of the following:
a first-order norm loss function (L1 Loss), a Visual Geometry Group loss function (VGG Loss), and a structural similarity loss function (SSIM Loss).
8. The neural network-based person hairstyle replacement method of claim 1, wherein the preset neural network comprises: a generative adversarial network (GAN) and a Visual Geometry Group (VGG) network.
9. A neural network-based person hairstyle replacement apparatus, characterized by comprising:
an image extraction unit configured to extract a hairstyle outline image of a person and a face pose image of the person;
an image training unit configured to train a preset neural network with the hairstyle outline image and the face pose image to obtain a hairstyle replacement model;
an image generation unit configured to generate, with the hairstyle replacement model, a hairstyle outline image of the replaced person that matches the face pose of the person to be replaced, according to the hairstyle outline image of the replaced person and the face pose image of the person to be replaced;
and an image replacement unit configured to replace the hairstyle of the person to be replaced with the hairstyle outline image of the replaced person that matches the face pose of the person to be replaced.
10. A neural network-based person hair style replacement device, the device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the neural network-based person hair style replacement method of any one of claims 1-8.
11. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the neural network-based person hairstyle replacement method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910528062.2A CN112102149A (en) | 2019-06-18 | 2019-06-18 | Figure hair style replacing method, device, equipment and medium based on neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112102149A true CN112102149A (en) | 2020-12-18 |
Family
ID=73748892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910528062.2A Pending CN112102149A (en) | 2019-06-18 | 2019-06-18 | Figure hair style replacing method, device, equipment and medium based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112102149A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113709370A (en) * | 2021-08-26 | 2021-11-26 | 维沃移动通信有限公司 | Image generation method and device, electronic equipment and readable storage medium |
CN114187633A (en) * | 2021-12-07 | 2022-03-15 | 北京百度网讯科技有限公司 | Image processing method and device, and training method and device of image generation model |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107451950A (en) * | 2016-05-30 | 2017-12-08 | 北京旷视科技有限公司 | Face image synthesis method, human face recognition model training method and related device |
CN107527318A (en) * | 2017-07-17 | 2017-12-29 | 复旦大学 | A kind of hair style replacing options based on generation confrontation type network model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||