CN108564127A - Image conversion method, device, computer equipment and storage medium - Google Patents

Image conversion method, device, computer equipment and storage medium

Info

Publication number
CN108564127A
CN108564127A (application CN201810354082.8A; granted as CN108564127B)
Authority
CN
China
Prior art keywords
image
style
face
transformation model
model
Prior art date
Legal status (an assumption, not a legal conclusion)
Granted
Application number
CN201810354082.8A
Other languages
Chinese (zh)
Other versions
CN108564127B (en)
Inventor
李旻骏 (Li Minjun)
黄浩智 (Huang Haozhi)
马林 (Ma Lin)
刘威 (Liu Wei)
Current Assignee (listing not verified by Google)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810354082.8A priority Critical patent/CN108564127B/en
Publication of CN108564127A publication Critical patent/CN108564127A/en
Application granted granted Critical
Publication of CN108564127B publication Critical patent/CN108564127B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Abstract

This application relates to an image conversion method. The method includes: obtaining a current image to be processed, the image containing a face; inputting the image into a trained target face image transformation model, which converts the face in an input image from a first style to a second style and is obtained by training an original face image transformation model together with a discriminator network model; and obtaining the target-style face image output by the model. The method requires no labeling of training samples, so training cost is low and training accuracy is high. An image conversion apparatus, a computer device, and a storage medium are also provided.

Description

Image conversion method, device, computer equipment and storage medium
Technical field
This application relates to the field of computer processing technology, and in particular to an image conversion method, an image conversion apparatus, a computer device, and a storage medium.
Background
Image conversion converts an image from one style to another. Traditionally, a model that converts a face image from one style to another must be obtained through supervised training, which requires corresponding labels to be set for the training samples. Such labels are often very difficult to obtain, and the training result depends on them: if the labels are not accurate enough, or the labeled training samples are insufficient, the final result is affected. Training is therefore not only costly, but its result is often not accurate enough.
Summary of the invention
Accordingly, in view of the above problems, it is necessary to provide an image conversion method, an image conversion apparatus, a computer device, and a storage medium that are low in cost and high in accuracy.
An image conversion method, the method comprising:
obtaining a current image to be processed, the image containing a face;
inputting the current image to be processed into a trained target face image transformation model, the trained model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model; and
obtaining the target-style face image output by the target face image transformation model.
An image conversion apparatus, the apparatus comprising:
an acquisition module, configured to obtain a current image to be processed, the image containing a face;
an input module, configured to input the current image to be processed into a trained target face image transformation model, the trained model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model; and
an output module, configured to obtain the target-style face image output by the target face image transformation model.
A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining a current image to be processed, the image containing a face;
inputting the current image to be processed into a trained target face image transformation model, the trained model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model; and
obtaining the target-style face image output by the target face image transformation model.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
obtaining a current image to be processed, the image containing a face;
inputting the current image to be processed into a trained target face image transformation model, the trained model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model; and
obtaining the target-style face image output by the target face image transformation model.
With the above image conversion method, apparatus, computer device, and storage medium, the current image to be processed, which contains a face, is input into a trained target face image transformation model that converts the face of the input image from a first style to a second style; the model is obtained by training an original face image transformation model together with a discriminator network model, and the target-style face image it outputs is then obtained. Because a discriminator model is used as a sub-model when training the original face image transformation model, training can be completed without labels: manual annotation is no longer needed, which greatly reduces training cost and improves training accuracy.
Description of the drawings
Fig. 1 is a diagram of the application environment of the image conversion method in one embodiment;
Fig. 2 is a flowchart of the image conversion method in one embodiment;
Fig. 3 is a schematic diagram of converting a face image in one embodiment;
Fig. 4 is a schematic diagram of the training process of the image transformation model in one embodiment;
Fig. 5 is a flowchart of the training of the target face image transformation model in one embodiment;
Fig. 6 is a schematic diagram of the training process of the image transformation model in another embodiment;
Fig. 7 is a flowchart of obtaining an image in one embodiment;
Fig. 8 is a flowchart of inputting the image to be processed into the target face image transformation model in one embodiment;
Fig. 9A is a structural diagram of image conversion in one embodiment;
Fig. 9B is a schematic diagram of image conversion in one embodiment;
Fig. 10 is a flowchart of the image conversion method in another embodiment;
Fig. 11 is a structural block diagram of the image conversion apparatus in one embodiment;
Fig. 12 is a structural block diagram of the training module in one embodiment;
Fig. 13 is a structural block diagram of the training module in another embodiment;
Fig. 14 is a structural block diagram of the image conversion apparatus in another embodiment;
Fig. 15 is a structural block diagram of the image conversion apparatus in yet another embodiment;
Fig. 16 is a structural block diagram of a computer device in one embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of this application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the application and are not intended to limit it.
Fig. 1 is a diagram of the application environment of the image conversion method in one embodiment. Referring to Fig. 1, the image conversion method is applied to an image conversion system. The system includes a terminal 110 and a server 120 connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as an independent server or as a cluster of multiple servers. The terminal 110 sends the current image to be processed to the server 120. The server 120 obtains the image, which contains a face, and inputs it into a trained target face image transformation model; the trained model converts the face in an input image from a first style to a second style and is obtained by training an original face image transformation model together with a discriminator network model. The server then obtains the target-style face image output by the model and returns it to the corresponding terminal 110.
In another embodiment, the image conversion method may be applied directly on the terminal 110. The terminal 110 obtains the current image to be processed, which contains a face, and inputs it into the trained target face image transformation model, which converts the face in an input image from a first style to a second style and is obtained by training an original face image transformation model together with a discriminator network model. Finally, the terminal 110 obtains the target-style face image output by the model.
As shown in Fig. 2, in one embodiment, an image conversion method is provided. The method can be applied to either a server or a terminal; this embodiment is described as applied to a terminal. The image conversion method specifically includes the following steps.
Step S202: obtain the current image to be processed, the image containing a face.
Here, the current image to be processed is the image currently awaiting conversion, and it contains a face, which may be a human face or an animal face. The image may be one containing a face captured directly by the terminal through its camera, or one containing a face selected from the stored photo album.
Step S204: input the current image to be processed into the trained target face image transformation model; the trained model converts the face in an input image from a first style to a second style and is obtained by training an original face image transformation model together with a discriminator network model.
Here, the target face image transformation model converts the face in an input image from the first style to the second style. Images of different styles have different features, and the first style and the second style are different image styles; image styles include, for example, cartoon style, realistic style, sketch style, comic style, two-dimensional anime ("2D") style, and so on. In other words, the target face image transformation model converts an image from one style to another. It is obtained by training an original face image transformation model together with a discriminator network model. In one embodiment, the two models are trained through adversarial learning with an unsupervised training algorithm. An unsupervised training algorithm is one that does not require labels to be set for the training samples. Adversarial learning means two models learn by competing against each other: an adversarial loss value is defined, the training goal of one model is to minimize it, and the training goal of the other is to maximize it. Through this adversarial process the two models continually adjust their respective parameters until a convergence condition is reached. By letting the original face image transformation model and the discriminator network model learn adversarially, the target face image transformation model is trained without any labels.
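The opposing objectives described above can be sketched with a toy one-parameter discriminator and a single generator output (all names and numbers here are illustrative, not taken from the patent): the discriminator takes a gradient ascent step to increase the adversarial loss, while the generator takes a descent step to decrease it.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def adv_loss(d_w, g_out, real=1.0):
    """Adversarial loss for a toy discriminator D(z) = sigmoid(d_w * z):
    L = log D(real) + log(1 - D(g_out)), with g_out the generator's output."""
    return math.log(sigmoid(d_w * real)) + math.log(1.0 - sigmoid(d_w * g_out))

def num_grad(f, x, eps=1e-6):
    # Central-difference gradient; enough for a one-dimensional sketch.
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

d_w, g_out, lr = 1.0, 2.0, 0.1

# Discriminator step: gradient ASCENT, pushing the adversarial loss up.
d_w_new = d_w + lr * num_grad(lambda w: adv_loss(w, g_out), d_w)

# Generator step: gradient DESCENT, pushing the adversarial loss down
# (which drives D toward judging the generated output as real).
g_new = g_out - lr * num_grad(lambda g: adv_loss(d_w, g), g_out)
```

Repeating these two steps alternately, as the patent describes, adjusts both sets of parameters until convergence, with no label ever needed.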
In one embodiment, the training setup includes one image transformation model and one discriminator network model. The training set contains two image collections, a first-style image set and a second-style image set, and every image in both sets contains a face. Training may proceed as follows. First, a first-style image is obtained from the first-style image set as the current first-style image, and a second-style image from the second-style image set as the current second-style image. Next, the current first-style image is input into the image transformation model to obtain a first output image; the first output image and the second-style image are then used as inputs of the discriminator network model, which outputs a first probability corresponding to the first output image and a second probability corresponding to the second-style image. Finally, an adversarial loss value is computed from the first and second probabilities, and the weight parameters of the image transformation model and the discriminator network model are adjusted according to it: the parameters of the image transformation model are adjusted to minimize the adversarial loss value, while the parameters of the discriminator network model are adjusted to maximize it. The current first-style image and current second-style image are then refreshed and the above process is repeated, cycling until the convergence condition is reached; the image transformation model obtained at the end of training is taken as the target image transformation model.
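The sampling loop just described can be written as a small skeleton. This is a sketch only: `G`, `D`, and `update` are placeholders for the transformation model, the discriminator, and the parameter-adjustment rule, and the function name is an invention of this note, not the patent's.

```python
import random

def train_one_pair_loop(first_style, second_style, G, D, update, steps, seed=0):
    """Each iteration samples one image per style set, pushes the first-style
    image through the transformation model G, scores both images with the
    discriminator D, and hands the two probabilities to an update rule that
    adjusts G (to minimize the adversarial loss) and D (to maximize it)."""
    rng = random.Random(seed)
    for _ in range(steps):
        x = rng.choice(first_style)    # current first-style image
        y = rng.choice(second_style)   # current second-style image
        first_output = G(x)            # first output image
        p_first = D(first_output)      # first probability
        p_second = D(y)                # second probability
        update(p_first, p_second)      # adjust weights from the two probabilities
```

The loop itself carries no labels: the only supervision signal is which set an image was drawn from, which is what makes the training unsupervised.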
Step S206: obtain the target-style face image output by the target face image transformation model.
Here, the target-style face image is the image with the target-style face produced by the target face image transformation model. For example, if the target style is the two-dimensional ("2D") anime style, the result is a face image in that style. Fig. 3 is a schematic diagram, in one embodiment, of converting an image containing a face into a face image with the 2D anime style.
With the above image conversion method, the current image to be processed, which contains a face, is input into the trained target face image transformation model, which converts the face of the input image from the first style to the second style and is obtained by training the original face image transformation model together with the discriminator network model; the target-style face image output by the model is then obtained. Because a discriminator model is used as a sub-model when training the original face image transformation model, training can be completed without labels: manual annotation is no longer needed, which greatly reduces training cost and improves training accuracy.
As shown in Fig. 4, in one embodiment, the original face image transformation model includes a forward face image transformation model and a reverse face image transformation model, and the discriminator network model includes a first discriminator network model connected to the output of the forward face image transformation model and a second discriminator network model connected to the output of the reverse face image transformation model. The forward face image transformation model converts the face in an input image from the first style to the second style; the reverse face image transformation model converts it from the second style to the first style. The first discriminator network model computes the state information indicating that an input image belongs to the second style; the second discriminator network model computes the state information indicating that an input image belongs to the first style.
Here, the role of the forward face image transformation model is to convert the face of an input image from the first style to the second style, and the role of the reverse face image transformation model is to convert it from the second style to the first style. To train the forward face image transformation model into the target face image transformation model, three sub-models are needed: one is the reverse face image transformation model, and the other two are discriminator network models. The first discriminator network model is connected to the output of the forward face image transformation model, and the second discriminator network model to the output of the reverse face image transformation model; that is, the output of the forward model serves as input to the first discriminator, and the output of the reverse model serves as input to the second discriminator. In addition, second-style images are also fed to the first discriminator network model as input, and first-style images to the second. In one embodiment, as shown in Fig. 4, which illustrates the training-time relationship among the forward face image transformation model, the reverse face image transformation model, and the first and second discriminator network models: x denotes a first-style image, y denotes a second-style image, G(x) denotes the output image produced by the forward face image transformation model, and F(y) denotes the output image produced by the reverse face image transformation model.
The first discriminator network model computes the state information indicating that the face of an input image belongs to the second style, and the second discriminator network model computes the state information indicating that it belongs to the first style; the state information may be a probability, a weight score, or the like. The first discriminator network model distinguishes which input face images are genuine second-style images and which are images output by the forward face image transformation model. The goal of training the first discriminator is to judge genuine second-style inputs as real and images output by the forward model as fake. If the images output by the forward face image transformation model are good enough to fool the first discriminator, they possess the features of second-style images, so that the first discriminator can no longer tell real from fake. Similarly, the second discriminator network model distinguishes genuine first-style images from images output by the reverse face image transformation model; the goal of training it is to judge genuine first-style inputs as real and the reverse model's outputs as fake. If the images output by the reverse face image transformation model can fool the second discriminator, they possess the features of first-style images, so that the second discriminator can no longer tell real from fake. In one embodiment, the forward face image transformation model, the reverse face image transformation model, and the first and second discriminator network models are all obtained by training convolutional neural network models.
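The wiring of the four sub-models can be sketched as follows. The toy stand-ins for G, F, Dx, and Dy are purely illustrative (real models would be convolutional networks, as the text notes); what the sketch shows is only which output feeds which discriminator.

```python
def discriminator_scores(x, y, G, F, Dx, Dy):
    """Wire the four sub-models as in Fig. 4: G maps a first-style image x to
    y' (second style), F maps a second-style image y to x' (first style);
    Dy scores y and y' as second style, Dx scores x and x' as first style.
    Returns the four scores the adversarial loss is built from."""
    y_prime = G(x)
    x_prime = F(y)
    return Dy(y), Dy(y_prime), Dx(x), Dx(x_prime)

# Toy stand-ins (illustrative only): styles are offsets on a number line.
G = lambda x: x + 1.0                          # forward: first -> second style
F = lambda y: y - 1.0                          # reverse: second -> first style
Dx = lambda img: 1.0 if img < 0.5 else 0.0     # "first style" detector
Dy = lambda img: 1.0 if img >= 0.5 else 0.0    # "second style" detector

scores = discriminator_scores(0.0, 1.0, G, F, Dx, Dy)
```

With these toys the generators already fool both detectors, so all four scores come back 1.0; during real training the discriminators would be adjusted to drive the scores of y' and x' back down.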
As shown in Fig. 5, in one embodiment, the training of the target face image transformation model includes the following steps.
Step S502: obtain a training data set, the set including a first-style image set and a second-style image set, every image in both sets containing a face.
Here, the training data set is the collection of data needed for model training. Since the purpose of training the target face image transformation model is to convert faces from the first style to the second style, the training data set includes two collections: a first-style image set and a second-style image set, every image in both containing a face.
In one embodiment, after the training data set is obtained, its images are preprocessed. Preprocessing includes data cleaning and data enhancement of the first-style and second-style images. Data cleaning removes samples that could interfere with training, such as images with incomplete faces (for example, only a side profile) or insufficient lighting. Data enhancement makes the face in a training image more prominent, for example by replacing the background with a solid color or by centering the face in the picture.
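The two preprocessing operations can be sketched together. This is a minimal sketch under stated assumptions: images are 2-D lists of grayscale values, and each sample carries `complete` and `bright` flags standing in for whatever face-completeness and lighting checks a real pipeline would run; none of these names come from the patent.

```python
def clean_and_enhance(samples, canvas_size, bg=255):
    """Sketch of the preprocessing above: drop samples flagged as incomplete
    or too dark (data cleaning), then centre each surviving face crop on a
    solid-colour canvas so the face dominates the image (data enhancement).
    A sample is (face_rows, complete, bright), face_rows a 2-D list."""
    kept = []
    for face, complete, bright in samples:
        if not (complete and bright):          # data cleaning: discard sample
            continue
        h, w = len(face), len(face[0])
        top, left = (canvas_size - h) // 2, (canvas_size - w) // 2
        canvas = [[bg] * canvas_size for _ in range(canvas_size)]
        for i in range(h):                     # data enhancement: centre face
            for j in range(w):
                canvas[top + i][left + j] = face[i][j]
        kept.append(canvas)
    return kept
```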
Step S504 obtains current first style image according to the first style image collection, is obtained according to the second style image collection Take current second style image.
Wherein, before to model training, first, the first style image is obtained as current according to the first style image collection First style image obtains the second style image as current second style image according to the second style image collection.In a reality It applies in example, will directly can concentrate the first style image that get as current first style image from the first style image, Concentrate the second style image got as current second style image using from the second style image.In another embodiment In, the first style image being directly obtained will be concentrated to be further processed from the first style image, including identify the first wind The position of face in table images extracts face's figure of the first style according to the face location recognized from the first style image Picture, then using the face image of the first style as current first style image.Similarly, it will be concentrated from the second style image straight It obtains the second style image got to be further processed, using the face image for the second style extracted as current second Style image.
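The face-extraction step in the second embodiment can be sketched as a simple crop. The bounding-box format (top, left, height, width) is an assumption for illustration; the patent does not specify how the face position is represented or which detector produces it.

```python
def extract_face(image, box):
    """Cut the face region out of a style image, given a detected bounding
    box (top, left, height, width), so the crop rather than the full image
    becomes the current training image."""
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]
```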
Step S506: pass the current first-style image through the forward face image transformation model and then the first discriminator model to obtain the corresponding first probability.
Here, the current first-style image is used as the input of the forward face image transformation model, whose output image is called the "first output image"; the first output image is then used as the input of the first discriminator model, which outputs the first probability. The first probability is the judged probability that the first output image belongs to the second-style images.
Step S508: pass the current second-style image through the reverse face image transformation model and then the second discriminator model to obtain the corresponding second probability.
Here, the current second-style image is used as the input of the reverse face image transformation model, whose output image is called, for distinction, the "second output image"; the second output image is then used as the input of the second discriminator model, which outputs the second probability. The second probability is the judged probability that the second output image belongs to the first-style images.
Step S510: input the current first-style image into the second discriminator model to obtain the corresponding third probability, and input the current second-style image into the first discriminator model to obtain the corresponding fourth probability.
Here, the first discriminator model judges the probability that an input image belongs to the second style, and the second discriminator model judges the probability that an input image belongs to the first style. Inputting the current first-style image into the second discriminator model yields the third probability, that of belonging to the first style; inputting the current second-style image into the first discriminator model yields the fourth probability, that of belonging to the second style.
Step S512: compute the adversarial loss value from the first, second, third, and fourth probabilities.
Here, the first and fourth probabilities are computed probabilities of belonging to the second style, and the second and third probabilities are computed probabilities of belonging to the first style. A first adversarial loss value is computed from the first and fourth probabilities, and a second adversarial loss value from the second and third probabilities. The total adversarial loss value is then computed from the first and second adversarial loss values; in one embodiment, the sum of the first and second adversarial loss values is taken directly as the total adversarial loss value.
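The combination step can be written directly from the probability roles above. The log form used here follows the L_adv formula given later in this section; the function name is illustrative.

```python
import math

def total_adversarial_loss(p1, p2, p3, p4):
    """First adversarial loss from the fourth and first probabilities (the
    second-style side: real y vs. generated y'), second from the third and
    second probabilities (the first-style side: real x vs. generated x');
    the total is their sum."""
    first_loss = math.log(p4) + math.log(1.0 - p1)
    second_loss = math.log(p3) + math.log(1.0 - p2)
    return first_loss + second_loss
```

At p1 = p2 = p3 = p4 = 0.5 (a discriminator that cannot tell real from fake) the total is 4·log(0.5), the equilibrium value the adversarial game pushes toward.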
Step S514: adjust the parameters of the forward face image transformation model, the reverse face image transformation model, the first discriminator model, and the second discriminator model according to the adversarial loss value.
Here, the goal of training the first discriminator model is to recognize the first output image, produced by the forward face image transformation model, as fake, and the current second-style image as real; that is, for the first discriminator, the smaller the first probability and the larger the fourth probability, the better. The goal of training the forward face image transformation model, by contrast, is to convert first-style images into second-style images; for the forward model, the larger the first probability the better, since a larger first probability means the image obtained through the forward model better matches the features of second-style images. For the second discriminator model, the smaller the second probability and the larger the third probability, the better; for the reverse face image transformation model, the larger the second probability the better.
The adversarial loss value is positively correlated with the third and fourth probabilities, and negatively correlated with the first and second probabilities. Therefore, for the forward and reverse face image transformation models, the weight parameters are adjusted in the direction that reduces the adversarial loss value; for the first and second discriminator models, the weight parameters are adjusted in the direction that increases it.
In one embodiment, the adversarial loss value may be calculated as:
L_adv = log D_y(y) + log(1 − D_y(y′)) + log D_x(x) + log(1 − D_x(x′))
Here x denotes the current first style image and y the current second style image; x′ is the image output when y passes through the reverse face image transformation model (the second output image), and y′ is the image output when x passes through the forward face image transformation model (the first output image). D_x(x) is the probability output by the second discrimination model for the first style image, and D_x(x′) is the probability it outputs for the second output image. D_y(y) is the probability output by the first discrimination model for the current second style image, and D_y(y′) is the probability it outputs for the first output image. L_adv is the resulting adversarial loss value. The parameters in the forward face image transformation model, the reverse face image transformation model, the first discrimination model and the second discrimination model are adjusted according to L_adv: the two transformation models adjust their parameters to minimize L_adv, while the two discrimination models adjust theirs to maximize it. Through this adversarial learning between the transformation models and the discrimination models, the forward face image transformation model can be trained without any labels. In one embodiment, after the adversarial loss value is obtained, the parameters in each model are adjusted by gradient descent; to improve training stability, a gradient penalty term may be added when adjusting the model parameters.
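The adversarial loss formula above is a scalar expression in the four discriminator outputs, so it can be sketched directly; the function and argument names below are illustrative and do not appear in the patent.

```python
import math

def adversarial_loss(d_y_real, d_y_fake, d_x_real, d_x_fake):
    """L_adv = log D_y(y) + log(1 - D_y(y')) + log D_x(x) + log(1 - D_x(x')).

    d_y_real: D_y(y),  first discriminator on a real second-style image
    d_y_fake: D_y(y'), first discriminator on the forward model's output
    d_x_real: D_x(x),  second discriminator on a real first-style image
    d_x_fake: D_x(x'), second discriminator on the reverse model's output
    The discriminators adjust their parameters to maximize this value;
    the two transformation models adjust theirs to minimize it.
    """
    return (math.log(d_y_real) + math.log(1.0 - d_y_fake)
            + math.log(d_x_real) + math.log(1.0 - d_x_fake))
```

Note how a confident discriminator (real images scored near 1, generated images near 0) drives the value up, matching the stated correlations.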
Step S516: update the current first style image and the current second style image, return to the step of passing the current first style image through the forward face image transformation model and the first discrimination model in turn to obtain the corresponding first probability, and repeat the cycle until the convergence condition is met; the forward face image transformation model obtained when training completes is used as the target face image transformation model.
After the parameters in the four models have been adjusted according to the adversarial loss value, the current first style image and the current second style image are updated, i.e. the current training sample data is refreshed, and the new training samples are used to continue training the forward face image transformation model, the reverse face image transformation model, the first discrimination model and the second discrimination model in the same manner as above. Training returns to the steps of passing the current first style image through the forward face image transformation model and the first discrimination model in turn to obtain the corresponding first probability, and passing the current second style image through the reverse face image transformation model and the second discrimination model in turn to obtain the corresponding second probability; this cycle repeats until the convergence condition is met, at which point each model has finished training. The convergence condition can be defined as needed: for example, one may check whether the output images have essentially stopped changing, or whether the first probability output by the first discrimination model hovers around chance, i.e. the discriminator can no longer tell whether the first output image is real or fake because it fully exhibits the features of the second style. The trained forward face image transformation model is then used as the target face image transformation model, which converts face images of the first style into face images of the second style.
In one embodiment, after the parameters in the forward face image transformation model, the reverse face image transformation model, the first discrimination model and the second discrimination model have been adjusted according to the adversarial loss value, the method further includes: feeding the first output image, produced from the current first style image by the forward face image transformation model, into the adjusted reverse face image transformation model to obtain a third output image; feeding the second output image, produced from the current second style image by the reverse face image transformation model, into the adjusted forward face image transformation model to obtain a fourth output image; calculating a first difference value between the current first style image and the third output image, and a second difference value between the current second style image and the fourth output image; calculating a cycle loss value from the first difference value and the second difference value; and adjusting the parameters in the forward and reverse face image transformation models again according to the cycle loss value.
The image output when the current first style image passes through the forward face image transformation model is called the "first output image"; it is fed into the adjusted reverse face image transformation model to obtain the third output image. The image output when the current second style image passes through the reverse face image transformation model is called the "second output image"; it is fed into the adjusted forward face image transformation model to obtain the fourth output image. The first difference value is the loss between the current first style image and the third output image obtained by passing it through the forward and then the reverse face image transformation model; the second difference value is the loss between the current second style image and the fourth output image obtained by passing it through the reverse and then the forward face image transformation model. The cycle loss value is calculated from the first and second difference values, with both of which it is positively correlated. In one embodiment, the sum of the first and second difference values is used directly as the cycle loss value. In one embodiment, the cycle loss value is calculated as follows:
L_cyc = ||F(G(x)) − x||₁ + ||G(F(y)) − y||₁
Here x denotes the current first style image, F(G(x)) is the result of passing it through the forward face transformation model and then the reverse face transformation model, and G(F(y)) is the result of passing the current second style image through the reverse face transformation model and then the forward face transformation model. ||·||₁ denotes the L1 norm. Fig. 6 shows, for one embodiment, a schematic diagram of the forward face image transformation model, the reverse face image transformation model, the first discrimination network model and the second discrimination network model during training, including feeding G(x) into the reverse face transformation model to obtain F(G(x)), and feeding F(y) into the forward face transformation model to obtain G(F(y)).
The purpose of training the forward face image transformation model is to convert the face in the input image from the first style to the second style, and the purpose of training the reverse face image transformation model is to convert it from the second style back to the first style. To ensure that the original content is preserved during conversion, converting a face from the first style to the second style and then back to the first style should recover the original image; after such a cycle, the smaller the difference between the images, the better. The parameters of the forward and reverse face image transformation models are therefore adjusted according to the cycle loss value with the goal of minimizing it. Adjusting the parameters of the two models with both the adversarial loss value and the cycle loss value ensures that only the style of the original content is changed while the content itself is preserved, so that the target style face image produced by the trained target face image transformation model retains the characteristics of the face in the original image.
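The cycle-consistency computation above reduces to an L1 comparison between each image and its round-trip reconstruction. A minimal sketch on flattened pixel lists, with illustrative names:

```python
def l1_norm(a, b):
    """Sum of absolute differences between two flattened images."""
    return sum(abs(p - q) for p, q in zip(a, b))

def cycle_loss(x, fgx, y, gfy):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1.

    x, y:   current first / second style images (flat pixel lists)
    fgx:    F(G(x)), the round trip forward -> reverse
    gfy:    G(F(y)), the round trip reverse -> forward
    """
    return l1_norm(fgx, x) + l1_norm(gfy, y)
```

A perfect round trip gives a loss of zero; any content lost in conversion shows up as a positive value to be minimized.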
As shown in Fig. 7, in one embodiment, obtaining the current first style image from the first style image set and the current second style image from the second style image set includes:
Step S504A: obtain an original first style image from the first style image set, and an original second style image from the second style image set.
An image taken directly from the first style image set is called an "original first style image", and an image taken directly from the second style image set is called an "original second style image".
Step S504B: apply random perturbation processing to the original first style image and the original second style image respectively, obtaining an adjusted first style image and an adjusted second style image, where the random perturbation processing includes at least one of translation, scaling and brightness adjustment.
Random perturbation processing translates, scales and/or adjusts the brightness of an image. To make the samples in the training data set more diverse, after the original first style image and original second style image are obtained, random perturbation processing is applied to each of them. The adjusted first style image is the image obtained by randomly perturbing the original first style image, and the adjusted second style image is the image obtained by randomly perturbing the original second style image.
Step S504C: use the adjusted first style image as the current first style image, and the adjusted second style image as the current second style image.
The randomly perturbed adjusted first style image serves as the current first style image, and the adjusted second style image serves as the current second style image. The adjusted first style image is used as the input of the forward face image transformation model, and the adjusted second style image as the input of the reverse face image transformation model.
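The random perturbation of steps S504A-S504C can be illustrated in miniature. Only brightness adjustment is shown here, applied to a flat list of grayscale values; translation and scaling would act on image coordinates in the same spirit, and all names are illustrative.

```python
import random

def perturb_brightness(pixels, rng):
    """Randomly scale the brightness of a flat list of 0-255 grayscale
    pixels by a factor drawn from [0.8, 1.2], clamping to the valid
    range. `rng` is a random.Random instance so augmentation can be
    reproduced by seeding."""
    factor = rng.uniform(0.8, 1.2)
    return [min(255, max(0, round(p * factor))) for p in pixels]
```

Drawing a fresh factor per sample is what gives the training set its diversity; seeding the generator keeps an experiment reproducible.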
In one embodiment, after obtaining the currently pending image, which contains a face, the method further includes: recognizing the face in the currently pending image to obtain facial feature points; and extracting a face image from the currently pending image according to the facial feature points, the extracted face image then serving as the currently pending image.
A face recognition model, which may be obtained by training a convolutional neural network model, can be used to recognize the face. The facial feature points include points marking the contour of the face and the facial features, so a face image can be extracted from the currently pending image according to them; the extracted face image is then used as the currently pending image, i.e. as the input of the target face image transformation model.
In one embodiment, extracting the face image from the currently pending image according to the facial feature points includes: determining the positions of the two eyes from the eye feature points among the facial feature points; when the two eyes are not on one horizontal line, rotating the currently pending image until they are, to obtain an intermediate image; determining a cropping frame corresponding to the face image according to the facial feature points in the intermediate image; and extracting the face image from the intermediate image according to the cropping frame.
The eye positions are calculated from the coordinates of the eye feature points, and it is judged whether the two eyes lie on the same horizontal line. If not, the face is tilted, and the currently pending image is rotated until the two eyes lie on one horizontal line; the rotated image is called the "intermediate image". A cropping frame corresponding to the face image is then determined in the intermediate image according to the facial contour feature points, and the face image is extracted from the intermediate image according to the cropping frame.
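The rotation step rests on one small calculation: the angle between the line joining the two eye centres and the horizontal, which is also the angle by which the image must be rotated back. A sketch assuming (x, y) pixel coordinates, with illustrative names:

```python
import math

def alignment_angle(left_eye, right_eye):
    """Angle in degrees of the line through the two eye centres,
    measured from the horizontal. A non-zero result means the face is
    tilted and the image should be rotated by this angle (about the
    midpoint between the eyes) so both eyes land on one horizontal line."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

When the angle is zero the two eyes are already level, matching the branch in which the pending image is used as the intermediate image directly.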
In one embodiment, the first style is the realistic style of images captured directly by a camera, and the second style is a virtual animation style.
A first style image may be a realistic-style image captured directly by a camera; for example, the face image obtained by photographing a face with a camera is an image of the first style. A second style image is an image in a virtual animation style, for example the style of a two-dimensional (anime) character.
As shown in Fig. 8, in one embodiment, the target face transformation model includes a down-sampling convolutional layer, a residual convolutional layer and an up-sampling convolutional layer;
inputting the pending image into the trained target face image transformation model includes:
Step S204A: use the pending image as the input of the down-sampling convolutional layer, which performs down-sampling convolution on the face image in the pending image to obtain a first facial feature matrix.
The down-sampling convolutional layer performs down-sampling convolution on the face image in the pending image, converting a high-resolution image into a low-resolution one. The first facial feature matrix produced by the down-sampling convolution is thus a low-resolution representation, which helps reduce the amount of computation in the subsequent convolutions.
Step S204B: use the first facial feature matrix as the input of the residual convolutional layer, which performs residual convolution on it and converts it into a second facial feature matrix with the target style features.
The first facial feature matrix is fed into the residual convolutional layer, which performs residual convolution and converts it into a second facial feature matrix with the target style features. The main function of the residual convolutional layer is to convert the facial features in the input pending image from the first style to the second style. A residual convolutional layer can be built from multiple convolutional layers; for example, a neural network layer composed of two convolutional layers may serve as one residual convolutional layer, with the input of each residual convolutional layer added directly to its output. This ensures that the input of an earlier network layer acts directly on the later layers, reducing the deviation between the output and the original input.
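The skip connection described above — adding a residual layer's input directly to its output — can be sketched in miniature, with the two-convolution transform stubbed out as an arbitrary function (all names illustrative):

```python
def residual_block(x, transform):
    """Residual connection: output = x + transform(x).

    x:         input feature vector (flat list of floats)
    transform: the layer's learned mapping (here any element-wise
               function returning a list of the same length)
    """
    fx = transform(x)
    return [a + b for a, b in zip(x, fx)]
```

If the transform outputs zeros, the block is an identity, which is what lets the input information act directly on later layers.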
Step S204C: use the second facial feature matrix as the input of the up-sampling convolutional layer, which performs up-sampling convolution on it to obtain the target style face image.
The up-sampling convolutional layer performs up-sampling convolution on the second facial feature matrix to obtain the target style face image, converting the image from low resolution back to high resolution. Because the earlier down-sampling convolution reduced the image resolution, up-sampling from low resolution to high resolution is needed to obtain a clearer output image.
In one embodiment, the target face image transformation model includes 2 down-sampling (Downsample) convolutional layers, 6 residual (Residual) convolutional layers and 2 up-sampling (Upsample) convolutional layers. To improve conversion quality, Dropout layers may also be added to the residual convolutional layers; a Dropout layer randomly removes some neurons — for example, removing 50% of the neurons with probability 0.5 — which helps prevent over-fitting. In one embodiment, the network structure of the target face image transformation model is shown in Table 1:
Table 1
Layer type | Kernel size | Padding | Stride | Number of kernels | Activation function
Convolutional layer | 7×7 | 3 | 1 | 128 | ReLU
Down-sampling convolutional layer | 3×3 | 1 | 2 | 256 | ReLU
Down-sampling convolutional layer | 3×3 | 1 | 2 | 256 | ReLU
Residual convolutional layer ×6 | 3×3 | 1 | 1 | 256 | ReLU
Up-sampling convolutional layer | 3×3 | 1 | 1 | 512 | ReLU
Up-sampling convolutional layer | 3×3 | 1 | 1 | 256 | ReLU
Up-sampling convolutional layer | 7×7 | 3 | 1 | 3 | tanh
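The spatial resolution flow implied by Table 1 can be checked with the standard convolution output-size formula, floor((size + 2·padding − kernel) / stride) + 1. A sketch assuming a 256×256 input (the input size is not stated in the patent; names are illustrative):

```python
def conv_out_size(size, kernel, padding, stride):
    """Spatial output size of a convolution along one dimension:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Assumed 256x256 input. The 7x7/pad-3/stride-1 layer preserves size;
# each 3x3/pad-1/stride-2 down-sampling layer halves it; the six
# residual layers (3x3, pad 1, stride 1) keep it unchanged, so the
# up-sampling layers must restore the factor of 4.
size = conv_out_size(256, 7, 3, 1)   # 256, size-preserving
size = conv_out_size(size, 3, 1, 2)  # 128, first down-sampling layer
size = conv_out_size(size, 3, 1, 2)  # 64,  second down-sampling layer
```

This is why the two stride-2 layers are paired with two up-sampling layers: the residual stack runs at one quarter of the input resolution.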
In one embodiment, the discrimination network model is also obtained by training a convolutional neural network model. Its network structure, shown in Table 2, includes 6 convolutional layers, the last of which has no activation function:
Table 2
Layer type | Kernel size | Padding | Stride | Number of kernels | Activation function
Convolutional layer | 3×3 | 1 | 2 | 64 | Leaky ReLU
Convolutional layer | 3×3 | 1 | 2 | 128 | Leaky ReLU
Convolutional layer | 3×3 | 1 | 2 | 256 | Leaky ReLU
Convolutional layer | 3×3 | 1 | 2 | 512 | Leaky ReLU
Convolutional layer | 3×3 | 1 | 1 | 512 | Leaky ReLU
Convolutional layer | 3×3 | 1 | 1 | 1 | —
In one embodiment, the up-sampling convolutional layer is a sub-pixel convolutional layer, which converts a low-resolution image into a high-resolution image.
To obtain a high-resolution image, the up-sampling convolutional layer is implemented as a sub-pixel (subpixel) convolutional layer, which can effectively convert a low-resolution image into a high-resolution image of relatively high clarity.
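Sub-pixel convolution upsamples by rearranging r² low-resolution channels into one high-resolution map (the "pixel shuffle"). A minimal sketch on nested lists — a real implementation would operate on tensors, and the names here are illustrative:

```python
def pixel_shuffle(feature_maps, r):
    """Rearrange r*r low-resolution feature maps of size h x w into one
    high-resolution map of size (h*r) x (w*r). Channel c fills the
    sub-pixel offset (c // r, c % r) of each r x r output block."""
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, fmap in enumerate(feature_maps):
        dy, dx = divmod(c, r)            # sub-pixel offset for this channel
        for y in range(h):
            for x in range(w):
                out[y * r + dy][x * r + dx] = fmap[y][x]
    return out
```

Because the upscaling is a pure rearrangement, the convolution that produces the r² channels does all the learning, which is what makes the layer an effective learned alternative to plain interpolation.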
In one embodiment, the image conversion method further includes: using the target style face image as the input of a trained image super-resolution processing model, and obtaining the processed high-resolution target style face image as output, where the image super-resolution processing model is trained using a convolutional neural network algorithm.
To obtain a clearer target style face image, super-resolution processing is applied to it, producing a target style face image of high clarity. Specifically, the target style face image is used as the input of the trained image super-resolution processing model, and the processed high-resolution target style face image is obtained as output; the image super-resolution model may be trained using a convolutional neural network algorithm.
As shown in Fig. 9A, one embodiment includes three models: a face recognition model, the target face image transformation model and a super-resolution processing model. The face recognition model recognizes the facial feature points in the input pending image; a face image is then extracted from the pending image according to the recognized facial feature points and fed into the target face image transformation model, which converts the face from the first style to the second style. The target style face image output by the target face transformation model is then fed into the super-resolution processing model, which converts it into a high-resolution image. Fig. 9B is a schematic diagram of one embodiment in which these three models are used to recognize, convert and super-resolve an image containing a face.
In one embodiment, obtaining the currently pending image, which contains a face, includes: obtaining a target video; obtaining a target video frame in the target video that contains a face; and using that target video frame as the currently pending image.
The image conversion method described above can be applied to convert faces in a video in real time. Specifically, a target video is obtained, and the target video frames containing faces are obtained from it. A video consists of a sequence of video frames, each corresponding to a video image, so by treating each target video frame containing a face as the currently pending image, the faces in the video can be converted into target style face images.
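The per-frame video processing just described can be sketched with the face detector and transformation model stubbed out as callables; frames without a face pass through unchanged. All names are illustrative:

```python
def convert_video_frames(frames, has_face, convert):
    """Treat each frame that contains a face as a 'currently pending
    image' and run it through the (stubbed) target face image
    transformation model; leave other frames untouched.

    frames:   iterable of video frames (any representation)
    has_face: predicate standing in for the face recognition model
    convert:  callable standing in for the target transformation model
    """
    return [convert(f) if has_face(f) else f for f in frames]
```

In a real-time pipeline the same loop would run per decoded frame rather than over a pre-collected list.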
As shown in Fig. 10, one embodiment proposes an image conversion method including the following steps:
Step S1001: obtain the currently pending image, which contains a face;
Step S1002: recognize the face in the currently pending image to obtain facial feature points;
Step S1003: determine the positions of the two eyes according to the eye feature points among the facial feature points;
Step S1004: judge whether the two eyes lie on one horizontal line; if so, use the currently pending image as the intermediate image and proceed to step S1006; if not, proceed to step S1005.
Step S1005: rotate the currently pending image until the two eyes lie on one horizontal line, obtaining the intermediate image;
Step S1006: determine a cropping frame corresponding to the face image in the intermediate image according to the facial feature points;
Step S1007: extract the face image from the intermediate image according to the cropping frame;
Step S1008: input the face image into the trained target face image transformation model, which includes a down-sampling convolutional layer, a residual convolutional layer and an up-sampling convolutional layer; the down-sampling convolutional layer performs down-sampling convolution on the face image to obtain a first facial feature matrix, the residual convolutional layer performs residual convolution on the first facial feature matrix and converts it into a second facial feature matrix with the target style features, and the up-sampling convolutional layer performs up-sampling convolution on the second facial feature matrix to obtain the target style face image.
Step S1009: obtain the target style face image output by the target face image transformation model.
Step S1010: use the target style face image as the input of the trained image super-resolution processing model and obtain the processed high-resolution target style face image as output, the image super-resolution processing model being trained using a convolutional neural network algorithm.
It should be understood that although the steps in the flowcharts of Figs. 2 to 10 are displayed in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2 to 10 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different times, and their execution order is likewise not necessarily sequential — they may be executed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
As shown in Fig. 11, one embodiment proposes an image conversion apparatus including:
an acquisition module 1102 for obtaining the currently pending image, which contains a face;
an input module 1104 for inputting the currently pending image into the trained target face image transformation model, which converts the face in the input image from the first style to the second style and is obtained by training an original face image transformation model together with a discrimination network model; and
an output module 1106 for obtaining the target style face image output by the target face image transformation model.
In one embodiment, the original face image transformation model includes a forward face image transformation model and a reverse face image transformation model, and the discrimination network model includes a first discrimination network model connected to the output of the forward face image transformation model and a second discrimination network model connected to the output of the reverse face image transformation model. The forward face image transformation model converts the face in the input image from the first style to the second style, and the reverse face image transformation model converts the face in the input image from the second style to the first style. The first discrimination network model calculates the probability that the input image belongs to the second style, and the second discrimination network model calculates the probability that the input image belongs to the first style.
As shown in Fig. 12, in one embodiment, the image conversion apparatus further includes a training module 1101, which includes:
a training data acquisition module 1101A for obtaining a training data set including a first style image set and a second style image set, where every image in both sets contains a face;
an image acquisition module 1101B for obtaining the current first style image from the first style image set and the current second style image from the second style image set;
a first probability output module 1101C for passing the current first style image through the forward face image transformation model and the first discrimination model in turn to obtain the corresponding output first probability;
a second probability output module 1101D for passing the current second style image through the reverse face image transformation model and the second discrimination model in turn to obtain the corresponding output second probability;
a discrimination output module 1101E for inputting the current first style image into the second discrimination model to obtain the corresponding output third probability, and inputting the current second style image into the first discrimination model to obtain the corresponding output fourth probability;
a loss calculation module 1101F for calculating the adversarial loss value from the first probability, the second probability, the third probability and the fourth probability;
a first adjustment module 1101G for adjusting the parameters in the forward face image transformation model, the reverse face image transformation model, the first discrimination model and the second discrimination model according to the adversarial loss value; and
an update module 1101H for updating the current first style image and the current second style image, returning to the step of passing the current first style image through the forward face image transformation model and the first discrimination model in turn to obtain the corresponding first probability, and repeating the cycle until the convergence condition is met, the forward face image transformation model obtained when training completes serving as the target face image transformation model.
As shown in Fig. 13, in one embodiment, the training module 1101 further includes:
a first image output module 1101I for using the first output image, produced from the current first style image by the forward face image transformation model, as the input of the adjusted reverse face image transformation model to obtain the output third output image;
a second image output module 1101J for using the second output image, produced from the current second style image by the reverse face image transformation model, as the input of the adjusted forward face image transformation model to obtain the output fourth output image;
a difference value calculation module 1101K for calculating a first difference value between the current first style image and the third output image, and a second difference value between the current second style image and the fourth output image;
a cycle loss calculation module 1101L for calculating a cycle loss value from the first difference value and the second difference value; and
a second adjustment module 1101M for adjusting the parameters in the forward face image transformation model and the reverse image transformation model again according to the cycle loss value.
In one embodiment, the image acquisition module 1101B is further configured to obtain an original first style image from the first style image set and an original second style image from the second style image set, apply random perturbation processing — including at least one of translation, scaling and brightness adjustment — to the original first style image and the original second style image respectively to obtain an adjusted first style image and an adjusted second style image, and use the adjusted first style image as the current first style image and the adjusted second style image as the current second style image.
As shown in Fig. 14, in one embodiment, the image conversion apparatus further includes:
a recognition module 1108 for recognizing the face in the currently pending image to obtain facial feature points; and
an extraction module 1110 for extracting a face image from the currently pending image according to the facial feature points, the extracted face image serving as the currently pending image.
In one embodiment, the extraction module 1110 is further configured to determine the positions of the two eyes according to the eye feature points among the facial feature points, rotate the currently pending image when the two eyes are not on one horizontal line until they are, to obtain an intermediate image, determine a cropping frame corresponding to the face image in the intermediate image according to the facial feature points, and extract the face image from the intermediate image according to the cropping frame.
In one embodiment, the first style refers to the realistic style of an image directly captured by a camera; the second style refers to a virtual animation style.
In one embodiment, the target face transformation model includes a down-sampling convolutional layer, a residual convolutional layer, and an up-sampling convolutional layer. The input module is further configured to use the image to be processed as the input of the down-sampling convolutional layer, the down-sampling convolutional layer being configured to perform a down-sampling convolution on the face image in the image to be processed to obtain a first face feature matrix; to use the first face feature matrix as the input of the residual convolutional layer, the residual convolutional layer being configured to perform a residual convolution on the first face feature matrix to convert it into a second face feature matrix carrying the target style features; and to use the second face feature matrix as the input of the up-sampling convolutional layer, the up-sampling convolutional layer being configured to perform an up-sampling convolution on the second face feature matrix to obtain the target style face image.
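The down-sample / residual / up-sample pipeline described here is the familiar encoder–transformer–decoder generator layout. The sketch below only traces how the side length of the feature matrix evolves through such a stack; the layer counts and stride-2 assumption are illustrative, not values fixed by this description:

```python
def trace_generator_shapes(size, down=2, residual=6, up=2):
    """Trace the feature-map side length through a down-sample /
    residual / up-sample generator (stride-2 layers assumed)."""
    shapes = [size]
    for _ in range(down):        # down-sampling convolutional layers halve resolution
        size //= 2
        shapes.append(size)
    for _ in range(residual):    # residual blocks keep the resolution unchanged
        shapes.append(size)
    for _ in range(up):          # up-sampling layers restore the resolution
        size *= 2
        shapes.append(size)
    return shapes

print(trace_generator_shapes(256))
# [256, 128, 64, 64, 64, 64, 64, 64, 64, 128, 256]
```

The residual stage is where the first face feature matrix is converted into the second face feature matrix with the target style features, which is why it leaves the spatial size untouched.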
In one embodiment, the up-sampling convolutional layer is a sub-pixel convolutional layer; the sub-pixel convolutional layer is configured to convert a low-resolution image into a high-resolution image.
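Sub-pixel convolution up-samples by rearranging channels into space (the "pixel shuffle" step): r² feature channels of size H×W are interleaved into one channel of size rH×rW. A pure-Python sketch of that rearrangement for a single output channel:

```python
def pixel_shuffle(channels, r):
    """Rearrange r*r channels of size H x W into one (H*r) x (W*r) channel.

    channels: list of r*r matrices (list of rows); channel index c maps to
    sub-pixel offset (c // r, c % r), the usual pixel-shuffle layout.
    """
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, mat in enumerate(channels):
        dy, dx = divmod(c, r)
        for y in range(h):
            for x in range(w):
                out[y * r + dy][x * r + dx] = mat[y][x]
    return out

# Four 1x1 channels -> one 2x2 high-resolution patch.
print(pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2))  # [[1, 2], [3, 4]]
```

Because the preceding convolution already produced the extra channels, this layer itself is just a reshuffle, which is what makes sub-pixel up-sampling cheap compared with transposed convolution.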
As shown in FIG. 15, in one embodiment, the image conversion apparatus above further includes:
A super-resolution processing module 1112, configured to use the target style face image as the input of a trained image super-resolution processing model and to obtain the output high-resolution target style face image after processing, the image super-resolution processing model being trained using a convolutional neural network algorithm.
In one embodiment, the acquisition module is further configured to obtain a target video, obtain from the target video a target video frame containing a face, and use the target video frame containing the face as the current image to be processed.
FIG. 16 shows an internal structure diagram of a computer device in one embodiment. The computer device may specifically be a terminal, or it may be a server. As shown in FIG. 16, the computer device includes a processor, a memory, and a network interface connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the image conversion method. A computer program may also be stored in the internal memory; when executed by the processor, it causes the processor to perform the image conversion method. Those skilled in the art will understand that the structure shown in FIG. 16 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, the image conversion method provided by the present application may be implemented in the form of a computer program that runs on a computer device as shown in FIG. 16. The program modules constituting the image conversion apparatus, for example the acquisition module 1102, the input module 1104, and the output module 1106 of FIG. 11, may be stored in the memory of the computer device. The computer program constituted by these program modules causes the processor to perform the steps of the image conversion apparatus of each embodiment of the present application described in this specification. For example, the computer device shown in FIG. 16 may obtain, through the acquisition module 1102 of the image conversion apparatus shown in FIG. 11, a current image to be processed that contains a face; input, through the input module 1104, the current image to be processed into a trained target face image transformation model, the trained target face image transformation model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model; and obtain, through the output module 1106, the target style face image output by the target face image transformation model.
In one embodiment, a computer device is proposed, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps: obtaining a current image to be processed, the current image to be processed containing a face; inputting the current image to be processed into a trained target face image transformation model, the trained target face image transformation model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model; and obtaining the target style face image output by the target face image transformation model.
In one embodiment, the original face image transformation model includes a forward face image transformation model and a reverse face image transformation model, and the discriminator network model includes a first discriminator network model connected to the output of the forward face image transformation model and a second discriminator network model connected to the output of the reverse face image transformation model. The forward face image transformation model is configured to convert the face in an input image from the first style to the second style, and the reverse face image transformation model is configured to convert the face in an input image from the second style to the first style. The first discriminator network model is configured to compute the probability that an input image belongs to the second style, and the second discriminator network model is configured to compute the probability that an input image belongs to the first style.
In one embodiment, the processor is further configured to perform the following steps: obtaining a training data set, the training data set including a first style image set and a second style image set, each image in the first style image set and the second style image set containing a face; obtaining a current first style image according to the first style image set and a current second style image according to the second style image set; passing the current first style image through the forward face image transformation model and then the first discriminator model to obtain a corresponding output first probability; passing the current second style image through the reverse face image transformation model and then the second discriminator model to obtain a corresponding output second probability; inputting the current first style image into the second discriminator model to obtain a corresponding output third probability, and inputting the current second style image into the first discriminator model to obtain a corresponding output fourth probability; calculating an adversarial loss value according to the first probability, the second probability, the third probability, and the fourth probability; adjusting the parameters in the forward face image transformation model, the reverse face image transformation model, the first discriminator model, and the second discriminator model according to the adversarial loss value; and updating the current first style image and the current second style image, returning to the step of passing the current first style image through the forward face image transformation model and the first discriminator model to obtain the corresponding first probability, and looping in this way until a convergence condition is met, the forward face image transformation model obtained when training is complete being used as the target face image transformation model.
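The four probabilities above feed a standard adversarial objective: each discriminator should score real images of its style high (the third and fourth probabilities) and translated images low (the first and second), while the transformation models push the first and second probabilities up. The description does not fix the exact loss form; a least-squares formulation, one common choice, can be sketched as:

```python
def adversarial_losses(p1, p2, p3, p4):
    """Least-squares adversarial losses from the four probabilities.

    p1: first discriminator's score for a translated (fake) second-style image
    p2: second discriminator's score for a translated (fake) first-style image
    p3: second discriminator's score for a real first-style image
    p4: first discriminator's score for a real second-style image
    """
    # Discriminators: real images toward 1, translated images toward 0.
    d_loss = ((p3 - 1) ** 2 + (p4 - 1) ** 2 + p1 ** 2 + p2 ** 2) / 4
    # Transformation models: translated images toward 1 (fool the discriminators).
    g_loss = ((p1 - 1) ** 2 + (p2 - 1) ** 2) / 2
    return d_loss, g_loss

# A perfect, unfooled pair of discriminators: zero D loss, maximal G loss.
print(adversarial_losses(0.0, 0.0, 1.0, 1.0))  # (0.0, 1.0)
```

In practice the two losses are minimized alternately, which is the parameter-adjustment step the embodiment describes.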
In one embodiment, after adjusting the parameters in the forward face image transformation model, the reverse face image transformation model, the first discriminator model, and the second discriminator model according to the adversarial loss value, the processor is further configured to perform the following steps: using the first output image, output by the forward face image transformation model for the current first style image, as the input of the adjusted reverse face image transformation model to obtain an output third output image; using the second output image, output by the reverse face image transformation model for the current second style image, as the input of the adjusted forward face image transformation model to obtain an output fourth output image; calculating a first difference value between the current first style image and the third output image, and a second difference value between the current second style image and the fourth output image; calculating a cycle loss value according to the first difference value and the second difference value; and adjusting again, according to the cycle loss value, the parameters in the forward face image transformation model and the reverse face image transformation model.
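The first and second difference values measure how well a round trip (first style → second style → first style, and the reverse) reconstructs the original image, and the cycle loss value combines them. A sketch using a mean absolute difference as the per-image difference value (an assumption; any pixel-wise distance fits the description, as does the weight):

```python
def mean_abs_diff(a, b):
    """Pixel-wise mean absolute difference between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(x1, x1_rec, x2, x2_rec, weight=10.0):
    """Cycle loss: weighted sum of both round-trip reconstruction errors."""
    return weight * (mean_abs_diff(x1, x1_rec) + mean_abs_diff(x2, x2_rec))

x1 = [0.0, 0.5, 1.0, 0.5]       # current first style image (flattened)
x1_rec = [0.0, 0.5, 1.0, 0.5]   # forward then reverse: perfect reconstruction
x2 = [1.0, 1.0, 1.0, 1.0]       # current second style image
x2_rec = [0.5, 1.0, 1.0, 1.0]   # reverse then forward: one pixel off by 0.5
print(cycle_loss(x1, x1_rec, x2, x2_rec))  # 1.25 (= 10 * (0 + 0.5/4))
```

This term is what ties the two transformation models together: without it, the adversarial loss alone would let the forward model map any input to any plausible second-style face.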
In one embodiment, obtaining the current first style image according to the first style image set and the current second style image according to the second style image set includes: obtaining an original first style image from the first style image set and an original second style image from the second style image set; applying random perturbation processing to the original first style image and the original second style image respectively to obtain an adjusted first style image and an adjusted second style image, the random perturbation processing including at least one of translation, scaling, and brightness adjustment; and using the adjusted first style image as the current first style image and the adjusted second style image as the current second style image.
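This random perturbation is ordinary data augmentation: each training image is slightly shifted, rescaled, or re-lit so the transformation models never see exactly the same input twice. A minimal sketch of the brightness-adjustment case on a flattened image (the perturbation range is an illustrative assumption):

```python
import random

def perturb_brightness(pixels, rng, max_delta=0.1):
    """Shift all pixel values by one random offset, clamped to [0, 1]."""
    delta = rng.uniform(-max_delta, max_delta)
    return [min(1.0, max(0.0, p + delta)) for p in pixels]

rng = random.Random(42)          # seeded only so the example is repeatable
image = [0.2, 0.5, 0.95]
out = perturb_brightness(image, rng)
assert len(out) == len(image)
assert all(0.0 <= p <= 1.0 for p in out)   # still a valid image
```

Translation and scaling would be handled the same way, drawing a small random offset or scale factor per image before it enters the training loop.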
In one embodiment, after obtaining the current image to be processed, the current image to be processed containing a face, the processor is further configured to perform the following steps: recognizing the face in the current image to be processed to obtain facial feature points; and extracting a face image from the current image to be processed according to the facial feature points, the extracted face image being used as the current image to be processed.
In one embodiment, extracting the face image from the current image to be processed according to the facial feature points includes: determining the positions of the two eyes according to the eye feature points among the facial feature points; when the positions of the two eyes are not on one horizontal line, rotating the current image to be processed so that the positions of the two eyes lie on one horizontal line, obtaining an intermediate image; determining, according to the facial feature points, a cropping box corresponding to the face image in the intermediate image; and extracting the face image from the intermediate image according to the cropping box.
In one embodiment, the first style refers to the realistic style of an image directly captured by a camera, and the second style refers to a virtual animation style.
In one embodiment, the target face transformation model includes a down-sampling convolutional layer, a residual convolutional layer, and an up-sampling convolutional layer. Inputting the image to be processed into the trained target face image transformation model includes: using the image to be processed as the input of the down-sampling convolutional layer, the down-sampling convolutional layer being configured to perform a down-sampling convolution on the face image in the image to be processed to obtain a first face feature matrix; using the first face feature matrix as the input of the residual convolutional layer, the residual convolutional layer being configured to perform a residual convolution on the first face feature matrix to convert it into a second face feature matrix carrying the target style features; and using the second face feature matrix as the input of the up-sampling convolutional layer, the up-sampling convolutional layer being configured to perform an up-sampling convolution on the second face feature matrix to obtain the target style face image.
In one embodiment, the up-sampling convolutional layer is a sub-pixel convolutional layer; the sub-pixel convolutional layer is configured to convert a low-resolution image into a high-resolution image.
In one embodiment, the processor is further configured to perform the following steps: using the target style face image as the input of a trained image super-resolution processing model and obtaining the output high-resolution target style face image after processing, the image super-resolution processing model being trained using a convolutional neural network algorithm.
In one embodiment, obtaining the current image to be processed, the current image to be processed containing a face, includes: obtaining a target video; and obtaining from the target video a target video frame containing a face, the target video frame containing the face being used as the current image to be processed.
In one embodiment, a computer-readable storage medium is proposed, storing a computer program which, when executed by a processor, causes the processor to perform the following steps: obtaining a current image to be processed, the current image to be processed containing a face; inputting the current image to be processed into a trained target face image transformation model, the trained target face image transformation model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model; and obtaining the target style face image output by the target face image transformation model.
In one embodiment, the original face image transformation model includes a forward face image transformation model and a reverse face image transformation model, and the discriminator network model includes a first discriminator network model connected to the output of the forward face image transformation model and a second discriminator network model connected to the output of the reverse face image transformation model. The forward face image transformation model is configured to convert the face in an input image from the first style to the second style, and the reverse face image transformation model is configured to convert the face in an input image from the second style to the first style. The first discriminator network model is configured to compute the probability that an input image belongs to the second style, and the second discriminator network model is configured to compute the probability that an input image belongs to the first style.
In one embodiment, the processor is further configured to perform the following steps: obtaining a training data set, the training data set including a first style image set and a second style image set, each image in the first style image set and the second style image set containing a face; obtaining a current first style image according to the first style image set and a current second style image according to the second style image set; passing the current first style image through the forward face image transformation model and then the first discriminator model to obtain a corresponding output first probability; passing the current second style image through the reverse face image transformation model and then the second discriminator model to obtain a corresponding output second probability; inputting the current first style image into the second discriminator model to obtain a corresponding output third probability, and inputting the current second style image into the first discriminator model to obtain a corresponding output fourth probability; calculating an adversarial loss value according to the first probability, the second probability, the third probability, and the fourth probability; adjusting the parameters in the forward face image transformation model, the reverse face image transformation model, the first discriminator model, and the second discriminator model according to the adversarial loss value; and updating the current first style image and the current second style image, returning to the step of passing the current first style image through the forward face image transformation model and the first discriminator model to obtain the corresponding first probability, and looping in this way until a convergence condition is met, the forward face image transformation model obtained when training is complete being used as the target face image transformation model.
In one embodiment, after adjusting the parameters in the forward face image transformation model, the reverse face image transformation model, the first discriminator model, and the second discriminator model according to the adversarial loss value, the processor is further configured to perform the following steps: using the first output image, output by the forward face image transformation model for the current first style image, as the input of the adjusted reverse face image transformation model to obtain an output third output image; using the second output image, output by the reverse face image transformation model for the current second style image, as the input of the adjusted forward face image transformation model to obtain an output fourth output image; calculating a first difference value between the current first style image and the third output image, and a second difference value between the current second style image and the fourth output image; calculating a cycle loss value according to the first difference value and the second difference value; and adjusting again, according to the cycle loss value, the parameters in the forward face image transformation model and the reverse face image transformation model.
In one embodiment, obtaining the current first style image according to the first style image set and the current second style image according to the second style image set includes: obtaining an original first style image from the first style image set and an original second style image from the second style image set; applying random perturbation processing to the original first style image and the original second style image respectively to obtain an adjusted first style image and an adjusted second style image, the random perturbation processing including at least one of translation, scaling, and brightness adjustment; and using the adjusted first style image as the current first style image and the adjusted second style image as the current second style image.
In one embodiment, after obtaining the current image to be processed, the current image to be processed containing a face, the processor is further configured to perform the following steps: recognizing the face in the current image to be processed to obtain facial feature points; and extracting a face image from the current image to be processed according to the facial feature points, the extracted face image being used as the current image to be processed.
In one embodiment, extracting the face image from the current image to be processed according to the facial feature points includes: determining the positions of the two eyes according to the eye feature points among the facial feature points; when the positions of the two eyes are not on one horizontal line, rotating the current image to be processed so that the positions of the two eyes lie on one horizontal line, obtaining an intermediate image; determining, according to the facial feature points, a cropping box corresponding to the face image in the intermediate image; and extracting the face image from the intermediate image according to the cropping box.
In one embodiment, the first style refers to the realistic style of an image directly captured by a camera, and the second style refers to a virtual animation style.
In one embodiment, the target face transformation model includes a down-sampling convolutional layer, a residual convolutional layer, and an up-sampling convolutional layer. Inputting the image to be processed into the trained target face image transformation model includes: using the image to be processed as the input of the down-sampling convolutional layer, the down-sampling convolutional layer being configured to perform a down-sampling convolution on the face image in the image to be processed to obtain a first face feature matrix; using the first face feature matrix as the input of the residual convolutional layer, the residual convolutional layer being configured to perform a residual convolution on the first face feature matrix to convert it into a second face feature matrix carrying the target style features; and using the second face feature matrix as the input of the up-sampling convolutional layer, the up-sampling convolutional layer being configured to perform an up-sampling convolution on the second face feature matrix to obtain the target style face image.
In one embodiment, the up-sampling convolutional layer is a sub-pixel convolutional layer; the sub-pixel convolutional layer is configured to convert a low-resolution image into a high-resolution image.
In one embodiment, the processor is further configured to perform the following steps: using the target style face image as the input of a trained image super-resolution processing model and obtaining the output high-resolution target style face image after processing, the image super-resolution processing model being trained using a convolutional neural network algorithm.
In one embodiment, obtaining the current image to be processed, the current image to be processed containing a face, includes: obtaining a target video; and obtaining from the target video a target video frame containing a face, the target video frame containing the face being used as the current image to be processed.
Those of ordinary skill in the art will understand that all or part of the flows in the methods of the embodiments above may be completed by a computer program instructing related hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods above. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments above may be combined arbitrarily. To keep the description concise, not all possible combinations of the technical features in the embodiments above are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope recorded in this specification.
The embodiments above express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims of the present application. It should be pointed out that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be determined by the appended claims.

Claims (15)

1. An image conversion method, the method comprising:
obtaining a current image to be processed, the current image to be processed containing a face;
inputting the current image to be processed into a trained target face image transformation model, the trained target face image transformation model being configured to convert the face in an input image from a first style to a second style, and being obtained by training an original face image transformation model together with a discriminator network model;
obtaining the target style face image output by the target face image transformation model.
2. The method according to claim 1, wherein the original face image transformation model comprises a forward face image transformation model and a reverse face image transformation model, and the discriminator network model comprises a first discriminator network model connected to the output of the forward face image transformation model and a second discriminator network model connected to the output of the reverse face image transformation model;
the forward face image transformation model is configured to convert the face in an input image from the first style to the second style, and the reverse face image transformation model is configured to convert the face in an input image from the second style to the first style;
the first discriminator network model is configured to compute the probability that an input image belongs to the second style, and the second discriminator network model is configured to compute the probability that an input image belongs to the first style.
3. The method according to claim 2, wherein the training steps of the target face image transformation model comprise:
obtaining a training data set, the training data set comprising a first style image set and a second style image set, each image in the first style image set and the second style image set containing a face;
obtaining a current first style image according to the first style image set, and a current second style image according to the second style image set;
passing the current first style image through the forward face image transformation model and then the first discriminator model to obtain a corresponding output first probability;
passing the current second style image through the reverse face image transformation model and then the second discriminator model to obtain a corresponding output second probability;
inputting the current first style image into the second discriminator model to obtain a corresponding output third probability, and inputting the current second style image into the first discriminator model to obtain a corresponding output fourth probability;
calculating an adversarial loss value according to the first probability, the second probability, the third probability, and the fourth probability;
adjusting the parameters in the forward face image transformation model, the reverse face image transformation model, the first discriminator model, and the second discriminator model according to the adversarial loss value;
updating the current first style image and the current second style image, returning to the step of passing the current first style image through the forward face image transformation model and the first discriminator model to obtain the corresponding first probability, and looping in this way until a convergence condition is met, the forward face image transformation model obtained when training is complete being used as the target face image transformation model.
4. The method according to claim 3, wherein after adjusting the parameters in the forward face image transformation model, the reverse face image transformation model, the first discriminator model, and the second discriminator model according to the adversarial loss value, the method further comprises:
using the first output image, output by the forward face image transformation model for the current first style image, as the input of the adjusted reverse face image transformation model to obtain an output third output image;
using the second output image, output by the reverse face image transformation model for the current second style image, as the input of the adjusted forward face image transformation model to obtain an output fourth output image;
calculating a first difference value between the current first style image and the third output image, and a second difference value between the current second style image and the fourth output image;
calculating a cycle loss value according to the first difference value and the second difference value;
adjusting again, according to the cycle loss value, the parameters in the forward face image transformation model and the reverse face image transformation model.
5. The method according to claim 3, wherein obtaining the current first style image according to the first style image set and the current second style image according to the second style image set comprises:
obtaining an original first style image from the first style image set, and an original second style image from the second style image set;
applying random perturbation processing to the original first style image and the original second style image respectively to obtain an adjusted first style image and an adjusted second style image, the random perturbation processing comprising at least one of translation, scaling, and brightness adjustment;
using the adjusted first style image as the current first style image, and the adjusted second style image as the current second style image.
6. The method according to claim 1, wherein after obtaining the current image to be processed, the current image to be processed containing a face, the method further comprises:
recognizing the face in the current image to be processed to obtain facial feature points;
extracting a face image from the current image to be processed according to the facial feature points, the extracted face image being used as the current image to be processed.
7. The method according to claim 6, wherein the extracting a face image from the currently pending image according to the facial feature points comprises:
determining positions of two eyes according to eye feature points among the facial feature points;
when the positions of the two eyes are not on one horizontal line, rotating the currently pending image so that the positions of the two eyes lie on one horizontal line, to obtain an intermediate image;
determining, according to the facial feature points, a cut-out frame corresponding to the face image in the intermediate image; and
extracting the face image from the intermediate image according to the cut-out frame.
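The alignment and cut-out steps of claim 7 reduce to simple coordinate geometry. The sketch below is the editor's illustration only, using image coordinates with y increasing downward; the 20% padding margin is a hypothetical value the patent does not specify.

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle in degrees by which the image must be rotated so that the
    two eye landmarks lie on one horizontal line (claim 7). Points are
    (x, y) pairs in image coordinates, y increasing downward."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def crop_box(points, margin=0.2):
    """Cut-out frame: the axis-aligned box around all facial feature
    points, padded by a relative margin, as (left, top, right, bottom)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)
```

A real pipeline would rotate the image by `eye_alignment_angle(...)` (the intermediate image), re-detect or re-project the feature points, then crop with `crop_box`.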
8. The method according to claim 1, wherein the first style is a real style of an image directly captured by a photographing device, and the second style is a virtual animation style.
9. The method according to claim 1, wherein the target face image transformation model comprises a down-sampling convolutional layer, a residual convolutional layer, and an up-sampling convolutional layer; and
the inputting the pending image into the trained target face image transformation model comprises:
taking the pending image as an input of the down-sampling convolutional layer, the down-sampling convolutional layer being configured to perform a down-sampling convolution calculation on the face image in the pending image to obtain a first face feature matrix;
taking the first face feature matrix as an input of the residual convolutional layer, the residual convolutional layer being configured to perform a residual convolution operation on the first face feature matrix to convert it into a second face feature matrix having target style features; and
taking the second face feature matrix as an input of the up-sampling convolutional layer, the up-sampling convolutional layer being configured to perform an up-sampling convolution calculation on the second face feature matrix to obtain a target style face image.
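The three-stage structure of claim 9 can be illustrated in one dimension with plain Python. This is a structural sketch only: the kernels, strides, and the nearest-neighbour stand-in for the up-sampling convolution are the editor's simplifications, not the patent's layers.

```python
def conv1d(x, k, stride=1):
    """Valid 1-D convolution; a stride greater than 1 plays the role of
    the down-sampling convolutional layer."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(0, len(x) - n + 1, stride)]

def residual_block(x, k):
    """Residual convolution: same-padded conv plus an identity shortcut,
    so the layer learns a correction on top of its input."""
    pad = [0] * (len(k) // 2)
    y = conv1d(pad + x + pad, k)
    return [a + b for a, b in zip(x, y)]

def upsample(x, r=2):
    """Nearest-neighbour up-sampling, standing in for the up-sampling
    (e.g. sub-pixel) convolutional layer."""
    return [v for v in x for _ in range(r)]

def stylize(x):
    """Compose the three stages of claim 9 on a 1-D 'image'."""
    f = conv1d(x, [0.5, 0.5], stride=2)       # first face feature matrix
    f = residual_block(f, [0.25, 0.5, 0.25])  # second face feature matrix
    return upsample(f)                        # target style output
```

The down-sample/transform/up-sample shape lets the residual stage do the style conversion at reduced resolution, which is the usual reason image-to-image networks adopt this encoder-transformer-decoder layout.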
10. The method according to claim 9, wherein the up-sampling convolutional layer is a sub-pixel convolutional layer; and
the sub-pixel convolutional layer is configured to convert a low-resolution image into a high-resolution image.
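A sub-pixel convolutional layer gains resolution through a depth-to-space rearrangement (pixel shuffle): a convolution first produces r*r feature channels, which are then interleaved into an image r times larger on each side. Below is a minimal sketch of the rearrangement step only; the channel ordering shown is one common convention, chosen by the editor, not stated in the patent.

```python
def pixel_shuffle(feats, r):
    """Rearrange r*r feature channels, each h x w, into one (h*r) x (w*r)
    image: the depth-to-space step of a sub-pixel convolutional layer.
    `feats` is a list of channels, each a list of rows."""
    h, w = len(feats[0]), len(feats[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, fm in enumerate(feats):   # channel index encodes the
        oy, ox = divmod(c, r)        # sub-pixel offset (oy, ox)
        for y in range(h):
            for x in range(w):
                out[y * r + oy][x * r + ox] = fm[y][x]
    return out
```

For example, four one-pixel channels with r=2 shuffle into a 2x2 image: the pixel count quadruples while each output pixel comes from a learned feature channel, avoiding the checkerboard artifacts of transposed convolutions.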
11. The method according to claim 1, wherein the method further comprises:
taking the target style face image as an input of a trained image super-resolution processing model to obtain an output processed high-resolution target style face image, the image super-resolution processing model being obtained by training with a convolutional neural network algorithm.
12. The method according to claim 1, wherein the obtaining a currently pending image, the currently pending image comprising a face, comprises:
obtaining a target video; and
obtaining, from the target video, a target video frame that comprises a face, and taking the target video frame that comprises a face as the currently pending image.
13. An image conversion apparatus, the apparatus comprising:
an acquisition module, configured to obtain a currently pending image, the currently pending image comprising a face;
an input module, configured to input the currently pending image into a trained target face image transformation model, the trained target face image transformation model being used to convert a face in an input image from a first style to a second style, and the trained target face image transformation model being obtained by training an original face image transformation model together with a discrimination network model; and
an output module, configured to obtain a target style face image output by the target face image transformation model.
14. A computer-readable storage medium storing a computer program, the computer program, when executed by a processor, causing the processor to perform the steps of the method according to any one of claims 1 to 12.
15. A computer device, comprising a memory and a processor, the memory storing a computer program, the computer program, when executed by the processor, causing the processor to perform the steps of the method according to any one of claims 1 to 12.
CN201810354082.8A 2018-04-19 2018-04-19 Image conversion method, image conversion device, computer equipment and storage medium Active CN108564127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810354082.8A CN108564127B (en) 2018-04-19 2018-04-19 Image conversion method, image conversion device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN108564127A true CN108564127A (en) 2018-09-21
CN108564127B CN108564127B (en) 2022-02-18

Family

ID=63535876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810354082.8A Active CN108564127B (en) 2018-04-19 2018-04-19 Image conversion method, image conversion device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108564127B (en)

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146825A (en) * 2018-10-12 2019-01-04 深圳美图创新科技有限公司 Photography style conversion method, device and readable storage medium
CN109474851A (en) * 2018-10-30 2019-03-15 百度在线网络技术(北京)有限公司 Video conversion method, device and equipment
CN109472270A (en) * 2018-10-31 2019-03-15 京东方科技集团股份有限公司 Image style conversion method, device and equipment
CN109741244A (en) * 2018-12-27 2019-05-10 广州小狗机器人技术有限公司 Picture Generation Method and device, storage medium and electronic equipment
CN109833025A (en) * 2019-03-29 2019-06-04 广州视源电子科技股份有限公司 A kind of method for detecting abnormality of retina, device, equipment and storage medium
CN110097086A (en) * 2019-04-03 2019-08-06 平安科技(深圳)有限公司 Image generates model training method, image generating method, device, equipment and storage medium
CN110148081A (en) * 2019-03-25 2019-08-20 腾讯科技(深圳)有限公司 Training method, image processing method, device and the storage medium of image processing model
CN110223230A (en) * 2019-05-30 2019-09-10 华南理工大学 A kind of more front end depth image super-resolution systems and its data processing method
CN110232722A (en) * 2019-06-13 2019-09-13 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN110232401A (en) * 2019-05-05 2019-09-13 平安科技(深圳)有限公司 Lesion judgment method, device, computer equipment based on picture conversion
CN110276399A (en) * 2019-06-24 2019-09-24 厦门美图之家科技有限公司 Image switching network training method, device, computer equipment and storage medium
CN110298326A (en) * 2019-07-03 2019-10-01 北京字节跳动网络技术有限公司 A kind of image processing method and device, storage medium and terminal
CN110399924A (en) * 2019-07-26 2019-11-01 北京小米移动软件有限公司 A kind of image processing method, device and medium
CN110516707A (en) * 2019-07-19 2019-11-29 深圳力维智联技术有限公司 A kind of image labeling method and its device, storage medium
CN110619315A (en) * 2019-09-24 2019-12-27 重庆紫光华山智安科技有限公司 Training method and device of face recognition model and electronic equipment
CN110705625A (en) * 2019-09-26 2020-01-17 北京奇艺世纪科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110910332A (en) * 2019-12-03 2020-03-24 苏州科技大学 Dynamic fuzzy processing algorithm of visual SLAM system
CN111064905A (en) * 2018-10-17 2020-04-24 上海交通大学 Video scene conversion method for automatic driving
CN111161132A (en) * 2019-11-15 2020-05-15 上海联影智能医疗科技有限公司 System and method for image style conversion
CN111179172A (en) * 2019-12-24 2020-05-19 浙江大学 Remote sensing satellite super-resolution implementation method and device based on unmanned aerial vehicle aerial data, electronic equipment and storage medium
CN111210382A (en) * 2020-01-03 2020-05-29 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
WO2020108336A1 (en) * 2018-11-30 2020-06-04 腾讯科技(深圳)有限公司 Image processing method and apparatus, device, and storage medium
CN111260545A (en) * 2020-01-20 2020-06-09 北京百度网讯科技有限公司 Method and device for generating image
CN111275784A (en) * 2020-01-20 2020-06-12 北京百度网讯科技有限公司 Method and device for generating image
CN111402399A (en) * 2020-03-10 2020-07-10 广州虎牙科技有限公司 Face driving and live broadcasting method and device, electronic equipment and storage medium
CN111489284A (en) * 2019-01-29 2020-08-04 北京搜狗科技发展有限公司 Image processing method and device for image processing
CN111489287A (en) * 2020-04-10 2020-08-04 腾讯科技(深圳)有限公司 Image conversion method, image conversion device, computer equipment and storage medium
CN111669647A (en) * 2020-06-12 2020-09-15 北京百度网讯科技有限公司 Real-time video processing method, device, equipment and storage medium
CN111696181A (en) * 2020-05-06 2020-09-22 广东康云科技有限公司 Method, device and storage medium for generating super meta model and virtual dummy
WO2020187153A1 (en) * 2019-03-21 2020-09-24 腾讯科技(深圳)有限公司 Target detection method, model training method, device, apparatus and storage medium
CN111985281A (en) * 2019-05-24 2020-11-24 内蒙古工业大学 Image generation model generation method and device and image generation method and device
CN112396693A (en) * 2020-11-25 2021-02-23 上海商汤智能科技有限公司 Face information processing method and device, electronic equipment and storage medium
CN112465936A (en) * 2020-12-04 2021-03-09 深圳市优必选科技股份有限公司 Portrait cartoon method, device, robot and storage medium
CN112465007A (en) * 2020-11-24 2021-03-09 深圳市优必选科技股份有限公司 Training method of target recognition model, target recognition method and terminal equipment
CN113034449A (en) * 2021-03-11 2021-06-25 深圳市优必选科技股份有限公司 Target detection model training method and device and communication equipment
CN113111791A (en) * 2021-04-16 2021-07-13 深圳市格灵人工智能与机器人研究院有限公司 Image filter conversion network training method and computer readable storage medium
CN113259583A (en) * 2020-02-13 2021-08-13 北京小米移动软件有限公司 Image processing method, device, terminal and storage medium
CN113436062A (en) * 2021-07-28 2021-09-24 北京达佳互联信息技术有限公司 Image style migration method and device, electronic equipment and storage medium
CN113486688A (en) * 2020-05-27 2021-10-08 海信集团有限公司 Face recognition method and intelligent device
CN114025198A (en) * 2021-11-08 2022-02-08 深圳万兴软件有限公司 Video cartoon method, device, equipment and medium based on attention mechanism
CN116912345A (en) * 2023-07-12 2023-10-20 天翼爱音乐文化科技有限公司 Portrait cartoon processing method, device, equipment and storage medium
CN116912345B (en) * 2023-07-12 2024-04-26 天翼爱音乐文化科技有限公司 Portrait cartoon processing method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1971615A (en) * 2006-11-10 2007-05-30 中国科学院计算技术研究所 Method for generating cartoon portrait based on photo of human face
CN101272457A (en) * 2007-03-19 2008-09-24 索尼株式会社 Image processing apparatus and method
WO2014097892A1 (en) * 2012-12-19 2014-06-26 ソニー株式会社 Image processing device, image processing method, and program
EP2993619A1 (en) * 2014-08-28 2016-03-09 Kevin Alan Tussy Facial recognition authentication system including path parameters
CN107577985A (en) * 2017-07-18 2018-01-12 南京邮电大学 The implementation method of the face head portrait cartooning of confrontation network is generated based on circulation
CN107730474A (en) * 2017-11-09 2018-02-23 京东方科技集团股份有限公司 Image processing method, processing unit and processing equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Kunfeng et al.: "Parallel Images: A New Theoretical Framework for Image Generation", Pattern Recognition and Artificial Intelligence *

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146825A (en) * 2018-10-12 2019-01-04 深圳美图创新科技有限公司 Photography style conversion method, device and readable storage medium
CN111064905A (en) * 2018-10-17 2020-04-24 上海交通大学 Video scene conversion method for automatic driving
CN111064905B (en) * 2018-10-17 2021-05-11 上海交通大学 Video scene conversion method for automatic driving
CN109474851A (en) * 2018-10-30 2019-03-15 百度在线网络技术(北京)有限公司 Video conversion method, device and equipment
CN109472270A (en) * 2018-10-31 2019-03-15 京东方科技集团股份有限公司 Image style conversion method, device and equipment
US10970830B2 (en) 2018-10-31 2021-04-06 Boe Technology Group Co., Ltd. Image style conversion method, apparatus and device
CN109472270B (en) * 2018-10-31 2021-09-24 京东方科技集团股份有限公司 Image style conversion method, device and equipment
US11798145B2 (en) 2018-11-30 2023-10-24 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, device, and storage medium
WO2020108336A1 (en) * 2018-11-30 2020-06-04 腾讯科技(深圳)有限公司 Image processing method and apparatus, device, and storage medium
CN109741244A (en) * 2018-12-27 2019-05-10 广州小狗机器人技术有限公司 Picture Generation Method and device, storage medium and electronic equipment
CN111489284B (en) * 2019-01-29 2024-02-06 北京搜狗科技发展有限公司 Image processing method and device for image processing
CN111489284A (en) * 2019-01-29 2020-08-04 北京搜狗科技发展有限公司 Image processing method and device for image processing
US11763541B2 (en) 2019-03-21 2023-09-19 Tencent Technology (Shenzhen) Company Limited Target detection method and apparatus, model training method and apparatus, device, and storage medium
WO2020187153A1 (en) * 2019-03-21 2020-09-24 腾讯科技(深圳)有限公司 Target detection method, model training method, device, apparatus and storage medium
CN110148081A (en) * 2019-03-25 2019-08-20 腾讯科技(深圳)有限公司 Training method, image processing method, device and the storage medium of image processing model
US20210264655A1 (en) * 2019-03-25 2021-08-26 Tencent Technology (Shenzhen) Company Limited Training method and apparatus for image processing model, image processing method and apparatus for image processing model, and storage medium
US11935166B2 (en) * 2019-03-25 2024-03-19 Tencent Technology (Shenzhen) Company Limited Training method and apparatus for image processing model, image processing method and apparatus for image processing model, and storage medium
CN110148081B (en) * 2019-03-25 2024-02-23 腾讯科技(深圳)有限公司 Training method of image processing model, image processing method, device and storage medium
CN109833025A (en) * 2019-03-29 2019-06-04 广州视源电子科技股份有限公司 A kind of method for detecting abnormality of retina, device, equipment and storage medium
WO2020199478A1 (en) * 2019-04-03 2020-10-08 平安科技(深圳)有限公司 Method for training image generation model, image generation method, device and apparatus, and storage medium
CN110097086B (en) * 2019-04-03 2023-07-18 平安科技(深圳)有限公司 Image generation model training method, image generation method, device, equipment and storage medium
CN110097086A (en) * 2019-04-03 2019-08-06 平安科技(深圳)有限公司 Image generates model training method, image generating method, device, equipment and storage medium
CN110232401B (en) * 2019-05-05 2023-08-04 平安科技(深圳)有限公司 Focus judging method, device and computer equipment based on picture conversion
CN110232401A (en) * 2019-05-05 2019-09-13 平安科技(深圳)有限公司 Lesion judgment method, device, computer equipment based on picture conversion
CN111985281B (en) * 2019-05-24 2022-12-09 内蒙古工业大学 Image generation model generation method and device and image generation method and device
CN111985281A (en) * 2019-05-24 2020-11-24 内蒙古工业大学 Image generation model generation method and device and image generation method and device
CN110223230A (en) * 2019-05-30 2019-09-10 华南理工大学 A kind of more front end depth image super-resolution systems and its data processing method
CN110232722B (en) * 2019-06-13 2023-08-04 腾讯科技(深圳)有限公司 Image processing method and device
CN110232722A (en) * 2019-06-13 2019-09-13 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN110276399B (en) * 2019-06-24 2021-06-04 厦门美图之家科技有限公司 Image conversion network training method and device, computer equipment and storage medium
CN110276399A (en) * 2019-06-24 2019-09-24 厦门美图之家科技有限公司 Image switching network training method, device, computer equipment and storage medium
CN110298326A (en) * 2019-07-03 2019-10-01 北京字节跳动网络技术有限公司 A kind of image processing method and device, storage medium and terminal
CN110516707A (en) * 2019-07-19 2019-11-29 深圳力维智联技术有限公司 A kind of image labeling method and its device, storage medium
CN110399924B (en) * 2019-07-26 2021-09-07 北京小米移动软件有限公司 Image processing method, device and medium
US11120604B2 (en) 2019-07-26 2021-09-14 Beijing Xiaomi Mobile Software Co., Ltd. Image processing method, apparatus, and storage medium
CN110399924A (en) * 2019-07-26 2019-11-01 北京小米移动软件有限公司 A kind of image processing method, device and medium
CN110619315B (en) * 2019-09-24 2020-10-30 重庆紫光华山智安科技有限公司 Training method and device of face recognition model and electronic equipment
CN110619315A (en) * 2019-09-24 2019-12-27 重庆紫光华山智安科技有限公司 Training method and device of face recognition model and electronic equipment
CN110705625A (en) * 2019-09-26 2020-01-17 北京奇艺世纪科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111161132B (en) * 2019-11-15 2024-03-05 上海联影智能医疗科技有限公司 System and method for image style conversion
CN111161132A (en) * 2019-11-15 2020-05-15 上海联影智能医疗科技有限公司 System and method for image style conversion
CN110910332B (en) * 2019-12-03 2023-09-26 苏州科技大学 Visual SLAM system dynamic fuzzy processing method
CN110910332A (en) * 2019-12-03 2020-03-24 苏州科技大学 Dynamic fuzzy processing algorithm of visual SLAM system
CN111179172A (en) * 2019-12-24 2020-05-19 浙江大学 Remote sensing satellite super-resolution implementation method and device based on unmanned aerial vehicle aerial data, electronic equipment and storage medium
CN111179172B (en) * 2019-12-24 2021-11-02 浙江大学 Remote sensing satellite super-resolution implementation method and device based on unmanned aerial vehicle aerial data, electronic equipment and storage medium
CN111210382B (en) * 2020-01-03 2022-09-30 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111210382A (en) * 2020-01-03 2020-05-29 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111275784A (en) * 2020-01-20 2020-06-12 北京百度网讯科技有限公司 Method and device for generating image
CN111260545A (en) * 2020-01-20 2020-06-09 北京百度网讯科技有限公司 Method and device for generating image
CN113259583B (en) * 2020-02-13 2023-05-12 北京小米移动软件有限公司 Image processing method, device, terminal and storage medium
CN113259583A (en) * 2020-02-13 2021-08-13 北京小米移动软件有限公司 Image processing method, device, terminal and storage medium
CN111402399A (en) * 2020-03-10 2020-07-10 广州虎牙科技有限公司 Face driving and live broadcasting method and device, electronic equipment and storage medium
CN111402399B (en) * 2020-03-10 2024-03-05 广州虎牙科技有限公司 Face driving and live broadcasting method and device, electronic equipment and storage medium
CN111489287A (en) * 2020-04-10 2020-08-04 腾讯科技(深圳)有限公司 Image conversion method, image conversion device, computer equipment and storage medium
CN111489287B (en) * 2020-04-10 2024-02-09 腾讯科技(深圳)有限公司 Image conversion method, device, computer equipment and storage medium
CN111696181A (en) * 2020-05-06 2020-09-22 广东康云科技有限公司 Method, device and storage medium for generating super meta model and virtual dummy
CN113486688A (en) * 2020-05-27 2021-10-08 海信集团有限公司 Face recognition method and intelligent device
CN111669647A (en) * 2020-06-12 2020-09-15 北京百度网讯科技有限公司 Real-time video processing method, device, equipment and storage medium
CN112465007A (en) * 2020-11-24 2021-03-09 深圳市优必选科技股份有限公司 Training method of target recognition model, target recognition method and terminal equipment
CN112465007B (en) * 2020-11-24 2023-10-13 深圳市优必选科技股份有限公司 Training method of target recognition model, target recognition method and terminal equipment
CN112396693A (en) * 2020-11-25 2021-02-23 上海商汤智能科技有限公司 Face information processing method and device, electronic equipment and storage medium
CN112465936A (en) * 2020-12-04 2021-03-09 深圳市优必选科技股份有限公司 Portrait cartoon method, device, robot and storage medium
CN113034449B (en) * 2021-03-11 2023-12-15 深圳市优必选科技股份有限公司 Target detection model training method and device and communication equipment
CN113034449A (en) * 2021-03-11 2021-06-25 深圳市优必选科技股份有限公司 Target detection model training method and device and communication equipment
CN113111791A (en) * 2021-04-16 2021-07-13 深圳市格灵人工智能与机器人研究院有限公司 Image filter conversion network training method and computer readable storage medium
CN113111791B (en) * 2021-04-16 2024-04-09 深圳市格灵人工智能与机器人研究院有限公司 Image filter conversion network training method and computer readable storage medium
CN113436062A (en) * 2021-07-28 2021-09-24 北京达佳互联信息技术有限公司 Image style migration method and device, electronic equipment and storage medium
CN114025198A (en) * 2021-11-08 2022-02-08 深圳万兴软件有限公司 Video cartoon method, device, equipment and medium based on attention mechanism
CN116912345A (en) * 2023-07-12 2023-10-20 天翼爱音乐文化科技有限公司 Portrait cartoon processing method, device, equipment and storage medium
CN116912345B (en) * 2023-07-12 2024-04-26 天翼爱音乐文化科技有限公司 Portrait cartoon processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN108564127B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN108564127A (en) Image conversion method, device, computer equipment and storage medium
CN111767979B (en) Training method, image processing method and image processing device for neural network
CN109978756B (en) Target detection method, system, device, storage medium and computer equipment
Kong et al. Ifrnet: Intermediate feature refine network for efficient frame interpolation
CN111950453B (en) Random shape text recognition method based on selective attention mechanism
CN109583340B (en) Video target detection method based on deep learning
CN110276745B (en) Pathological image detection algorithm based on generation countermeasure network
CN110838108A (en) Medical image-based prediction model construction method, prediction method and device
CN111696110B (en) Scene segmentation method and system
CN112446270A (en) Training method of pedestrian re-identification network, and pedestrian re-identification method and device
CN112446380A (en) Image processing method and device
CN111914997B (en) Method for training neural network, image processing method and device
CN112686898B (en) Automatic radiotherapy target area segmentation method based on self-supervision learning
WO2023030182A1 (en) Image generation method and apparatus
CN113205449A (en) Expression migration model training method and device and expression migration method and device
CN109961397B (en) Image reconstruction method and device
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
CN114463605B (en) Continuous learning image classification method and device based on deep learning
CN115731597A (en) Automatic segmentation and restoration management platform and method for mask image of face mask
CN114998601A (en) Online update target tracking method and system based on Transformer
CN114821736A (en) Multi-modal face recognition method, device, equipment and medium based on contrast learning
CN111681168B (en) Low-resolution cell super-resolution reconstruction method based on parallel residual error network
CN116342624A (en) Brain tumor image segmentation method combining feature fusion and attention mechanism
CN113627245B (en) CRTS target detection method
CN113674383A (en) Method and device for generating text image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant