CN110619602A - Image generation method and device, electronic equipment and storage medium


Info

Publication number
CN110619602A
Authority
CN
China
Prior art keywords
person
image
feature vector
target
portrait
Prior art date
Legal status
Granted
Application number
CN201910913767.6A
Other languages
Chinese (zh)
Other versions
CN110619602B (en)
Inventor
李华夏 (Li Huaxia)
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910913767.6A priority Critical patent/CN110619602B/en
Publication of CN110619602A publication Critical patent/CN110619602A/en
Application granted granted Critical
Publication of CN110619602B publication Critical patent/CN110619602B/en
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The present disclosure discloses an image generation method and apparatus, an electronic device, and a storage medium. The method includes: obtaining attribute feature vectors and portrait feature vectors of persons in an original image through an adversarial network model, where the persons in the original image include a background person and a target person; and obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, where the person in the target image is different from the persons in the original image. With the scheme of the embodiments of the present disclosure, when a new image is generated based on feature fusion of an original image, the transition of the fusion region in the new image is more natural and the effect is more vivid.

Description

Image generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to image processing technologies, and in particular, to an image generation method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of terminal devices, the shooting functions of terminal devices have become more and more abundant. When a user uses a terminal device to take photos or make a video call, an application program can perform personalized processing on the captured content to generate a new image.
At present, when captured content is personalized, image segmentation is usually performed on the image to be processed: the region to be processed is identified and the personalization operation is applied to it directly to generate a new image. For example, to generate a new image by face-swapping two captured person images, the conventional technique recognizes the face regions of the two persons by image segmentation and then swaps the two face regions to generate the new image. However, when the sizes and/or angles of the two persons' face regions are not consistent, generating a new image by simple region replacement seriously affects the aesthetic quality of the new image, and improvement is needed.
Disclosure of Invention
The present disclosure provides an image generation method, an image generation apparatus, an electronic device, and a storage medium, which can make the transition of the fusion region in a new image more natural and its effect more vivid when the new image is generated based on feature fusion of an original image.
In a first aspect, an embodiment of the present disclosure provides an image generation method, including:
obtaining attribute feature vectors and portrait feature vectors of persons in an original image through an adversarial network model, wherein the persons in the original image comprise a background person and a target person;
and obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, wherein the person in the target image is different from the person in the original image.
In a second aspect, an embodiment of the present disclosure further provides an image generating apparatus, including:
the vector acquisition module is used for obtaining attribute feature vectors and portrait feature vectors of persons in an original image through an adversarial network model, wherein the persons in the original image comprise a background person and a target person;
and the image generation module is used for obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, wherein the person in the target image is different from the person in the original image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image generation method according to any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure provide a readable medium, on which a computer program is stored, which when executed by a processor, implements an image generation method according to any of the embodiments of the present disclosure.
The embodiments of the present disclosure provide an image generation method and apparatus, an electronic device, and a storage medium. Attribute feature vectors and portrait feature vectors of each person in an original image are obtained through an adversarial network model, and, since the original image includes a background person and a target person, the attribute feature vector of the background person and the portrait feature vector of the target person are selected to generate a target image. The person in the target image generated by the scheme of the embodiments of the present disclosure is a new person image that is obtained by fusing the persons in the original image and that is different from the persons in the original image. Because the attribute feature vector is taken into account, the problem of poor aesthetic quality of the newly generated image caused by differing person attribute features (such as the size and angle of the regions to be fused) during fusion is avoided, so the transition of the fusion region in the new image is more natural and the effect is more vivid.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1A illustrates a flowchart of an image generation method provided by an embodiment of the present disclosure;
FIG. 1B is a schematic diagram illustrating the structure of an adversarial network model provided by an embodiment of the present disclosure;
FIG. 1C is a schematic diagram illustrating an image generation process and effect provided by an embodiment of the disclosure;
FIG. 2A shows a flow chart of another image generation method provided by embodiments of the present disclosure;
FIG. 2B is a schematic flow chart illustrating verification of an initial network model provided by an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an image generating apparatus provided in an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will recognize that they should be understood as "one or more" unless the context clearly dictates otherwise. The names of messages or information exchanged between multiple parties in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1A shows a flowchart of an image generation method provided by an embodiment of the present disclosure, fig. 1B shows a schematic diagram of the structure of an adversarial network model provided by an embodiment of the present disclosure, and fig. 1C shows a schematic diagram of an image generation process and its effect provided by an embodiment of the present disclosure. The present embodiment is applicable to the case where a new person image is generated from images containing different persons, for example, the case where two different persons are face-swapped to generate a new person image different from both of them. The method may be performed by an image generation apparatus or an electronic device; the apparatus may be implemented in software and/or hardware and may be configured in the electronic device, and the method may specifically be performed by an image processing process in the electronic device. Optionally, the electronic device may be a device corresponding to a back-end service platform of an application program, or a mobile terminal device on which an application client is installed.
Optionally, as shown in fig. 1A to 1C, the method in this embodiment may include the following steps:
And S101, obtaining attribute feature vectors and portrait feature vectors of persons in the original image through the adversarial network model.
The original image may be a captured image to be processed. The original image in this embodiment contains at least two persons, and the at least two persons may be located in one original image or in multiple original images. For example, the original image may be a group photo of user A and user B, or two images, one of user A and the other of user B. Since the new image finally generated by the embodiment of the present disclosure is obtained by fusing at least two persons in the original image, the at least two persons in the original image in this embodiment can be divided into background persons and target persons. A background person may be a person providing the background content during fusion, and a target person may be a person providing the key person features during fusion. For example, in this embodiment the facial features of user A in the image are replaced with the facial features of user B to generate a new image; in this case user A provides the background content of the newly generated image and is therefore a background person, while user B provides the key facial features of the newly generated image and is therefore a target person. Optionally, the number of background persons and target persons in the original image may each be one or more, and each person may be located in one original image or in different original images.
Optionally, the adversarial network model of this embodiment may be a neural network model that performs feature decoupling and classification on an input image to obtain the attribute feature vector and the portrait feature vector of a person. Optionally, the adversarial network model of this embodiment may include a portrait decoupling network and a classification network, and the process of obtaining the attribute feature vector and the portrait feature vector of a person in the original image through the adversarial network model in this step may be: obtaining the feature vectors of the person in the original image through the portrait decoupling network, and obtaining the attribute feature vector and the portrait feature vector of the person in the original image through the classification network. The attribute feature vector of a person may be a vector describing some attribute feature of the person in the image, and may include, but is not limited to, at least one of a person expression vector, an orientation angle vector, a person body type vector, and a background feature vector. The portrait feature vector of a person may be a vector describing detailed features of a certain region of the person in the image, and may include, but is not limited to, at least one of a facial feature vector, a hair feature vector, and a limb feature vector.
Specifically, as shown in fig. 1B, the adversarial network model 10 may include a portrait decoupling network 11 and a classification network 12, and the portrait decoupling network 11 of this embodiment further includes an attribute feature decoupling network 13 and a portrait feature decoupling network 14. The attribute feature decoupling network 13 is used to decouple the attribute feature vectors of persons from the original image, and the portrait feature decoupling network 14 is used to decouple the portrait feature vectors of persons from the original image. However, after the attribute feature decoupling network 13 and the portrait feature decoupling network 14 in the adversarial network model 10 extract the feature vectors of the corresponding persons, the adversarial network model 10 cannot tell which feature vector is the portrait feature vector and which is the attribute feature vector; in this case, classification and identification can be performed through the classification network 12. Optionally, the classification network 12 may compare the feature vectors of the persons in the original image with a standard vector in the classification network to obtain the attribute feature vector and the portrait feature vector of each person in the original image. Specifically, the classification network 12 may store a standard vector in advance; the standard vector may be a standard vector corresponding to attribute features or a standard vector corresponding to portrait features. The two feature vectors of a person obtained by the portrait decoupling network are compared with the standard vector for similarity, and of the two feature vectors, the one more similar to the standard vector belongs to the same feature category as the standard vector. For example, if the standard vector is the standard vector corresponding to attribute features, the feature vector that is more similar to the standard vector is determined to be the attribute feature vector of the person, and the other feature vector is determined to be the portrait feature vector of the person.
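A minimal sketch of how such a similarity-based classification step could look is given below; the use of cosine similarity, the tensor shapes, and the function name are illustrative assumptions rather than details specified by the patent.

```python
import torch
import torch.nn.functional as F

def classify_decoupled_vectors(vec_x: torch.Tensor,
                               vec_y: torch.Tensor,
                               standard_attribute_vec: torch.Tensor):
    """Decide which of two decoupled feature vectors is the attribute feature
    vector by comparing each against a pre-stored standard attribute vector.
    The vector more similar to the standard vector is assigned to the same
    category as the standard vector; the other is the portrait feature vector.
    """
    sim_x = F.cosine_similarity(vec_x, standard_attribute_vec, dim=-1).mean()
    sim_y = F.cosine_similarity(vec_y, standard_attribute_vec, dim=-1).mean()
    if sim_x >= sim_y:
        return vec_x, vec_y  # (attribute feature vector, portrait feature vector)
    return vec_y, vec_x
```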
Optionally, a Kullback-Leibler divergence loss function constraint is set in the portrait decoupling network. For example, the Kullback-Leibler divergence loss function constraint can be set for the attribute feature decoupling network and/or the portrait feature decoupling network in the portrait decoupling network, so as to constrain the decoupled feature vectors. Optionally, in the embodiment of the present disclosure, of the two decoupling networks in the portrait decoupling network (i.e., the attribute feature decoupling network and the portrait feature decoupling network), one may be given a Kullback-Leibler divergence loss function constraint to constrain its decoupled feature vector, and the other may use the above-described classification network to constrain its decoupled feature vector. For example, a Kullback-Leibler divergence loss function constraint may be set for the attribute feature decoupling network, so that the attribute feature vector decoupled by the attribute feature decoupling network satisfies a standard normal distribution as far as possible, thereby eliminating the influence of the portrait features as far as possible and making the attribute feature vector related only to the attribute features; and the classification network may be used to constrain the portrait feature vector decoupled by the portrait feature decoupling network, so that the portrait feature vector is strongly related to the portrait features.
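For reference, the Kullback-Leibler term that pulls the attribute branch toward a standard normal distribution can be written as in the following sketch; the Gaussian parameterisation (mean and log-variance outputs) is an assumption commonly used with this kind of constraint, not a detail stated in the patent.

```python
import torch

def kl_to_standard_normal(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over the batch.

    Minimising this term keeps the decoupled attribute feature vectors close
    to a standard normal distribution, as described above.
    """
    return 0.5 * torch.sum(mu.pow(2) + log_var.exp() - log_var - 1.0, dim=-1).mean()
```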
Optionally, in this embodiment, the image input into the adversarial network model may be an image captured in real time by a camera on the electronic device; it may be a stored image selected from a local gallery of the electronic device according to a click operation of the user; or it may be images corresponding to the video streams of both parties acquired during a video call. The image may also be acquired in other ways, which is not limited in this embodiment. After the original image is obtained, the original image may be input into the adversarial network model, the portrait decoupling network of the adversarial network model decouples a set of feature vectors for each person in the original image, and then, for each set of feature vectors, the classification network identifies, based on the locally stored standard vector, which vector in the set is the attribute feature vector and which is the portrait feature vector.
And S102, obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person.
Optionally, in this embodiment, the persons in the original image include a background person and a target person, and for both the background person and the target person, S101 obtains the attribute feature vector and the portrait feature vector of the person through the adversarial network model. When a new image is generated from the parsed attribute feature vectors and portrait feature vectors, the attribute feature vector of the background person and the portrait feature vector of the target person are fused; that is, on the basis of the background person's image, the portrait features of the background person are replaced with the portrait features of the target person, so that the target image is generated.
Illustratively, as shown in fig. 1C, the original image includes two images: one is an image of person A, who is the target person; the other is an image of person B, who is the background person. The image of the target person A and the image of the background person B are respectively input into the adversarial network model to obtain the attribute feature vector and the portrait feature vector of the target person A and the attribute feature vector and the portrait feature vector of the background person B. The portrait feature vectors may be the key facial-feature and hair feature vectors of person A and person B, respectively; the attribute feature vector of person A may describe a closed-mouth expression, a gaze toward the right, and the face angle, and the attribute feature vector of person B may describe a toothy smiling expression, a gaze toward the left, and a slightly right-turned face angle. Then, the portrait feature vector of the target person A is fused into the background person B according to the attribute feature vector of the background person B, so as to obtain the target image shown in fig. 1C. As can be seen from the target image, the facial features of the target person A are fused with the smiling expression of the background person B, the eye features of the target person A are fused with the leftward gaze of the background person B, and the overall facial features of the target person A are fused with the slightly right-turned face angle of the background person B, thereby generating the target image.
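The fusion step itself can be pictured as a decoder that consumes the background person's attribute feature vector together with the target person's portrait feature vector, in the spirit of the following sketch; the layer sizes, image resolution, and concatenation strategy are illustrative assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn

class FusionGenerator(nn.Module):
    """Minimal decoder sketch: concatenates an attribute feature vector and a
    portrait feature vector and decodes them into an image."""

    def __init__(self, attr_dim=128, portrait_dim=128):
        super().__init__()
        self.fc = nn.Linear(attr_dim + portrait_dim, 512 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.ReLU(inplace=True),  # 4x4 -> 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(inplace=True),  # 8x8 -> 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),   # 16x16 -> 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                 # 32x32 -> 64x64
        )

    def forward(self, attr_vec_background, portrait_vec_target):
        # Fuse the background person's attributes with the target person's portrait features.
        z = torch.cat([attr_vec_background, portrait_vec_target], dim=-1)
        x = self.fc(z).view(-1, 512, 4, 4)
        return self.decoder(x)

# Usage: target_image = FusionGenerator()(attr_vec_b, portrait_vec_a)
```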
Optionally, in this embodiment of the present disclosure, the attribute feature vector of the background person according to which the target image is generated may be an attribute feature vector of one background person, or may be a mixed attribute feature vector of multiple background persons, for example, the attribute feature vector of the background person may be formed by mixing an expression feature vector of the background person a, an orientation angle vector of the background person B, a body type vector of the background person C, and a feature vector of a background area of the background person D. Similarly, the portrait feature vector of the target person on which the target image is generated may also be a portrait feature vector of one target person, or may also be a mixed portrait feature vector of a plurality of target persons.
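Where mixed vectors are used, one simple way to assemble them is to take each attribute component from a different background person, as in the hypothetical helper below (the component-wise split into expression, orientation, body type, and background sub-vectors is an assumption for illustration).

```python
import torch

def mix_attribute_vectors(expression_vec, orientation_vec, body_type_vec, background_vec):
    """Build a mixed attribute feature vector whose components come from different
    background persons (e.g. expression from person A, orientation angle from person B,
    body type from person C, background region features from person D)."""
    return torch.cat([expression_vec, orientation_vec, body_type_vec, background_vec], dim=-1)
```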
It should be noted that, in the present embodiment, the operation of obtaining the target image according to the attribute feature vector of the background person and the portrait feature vector of the target person combines the relevant features of the target person and the background person, and the fusion region is natural in transition and vivid in effect, and the target image effect is not affected by the difference in size and/or angle of the fusion region. Further, the present embodiment generates a target image in which a person is different from a person in an original image. For example, the target image in fig. 1C is neither an image of the target person a nor an image of the background person B.
Optionally, in the embodiment of the present disclosure, the target image generated from the original image may include one new person; for example, the target image generated in fig. 1C includes one new person. The target image may also include a plurality of new persons. For example, if the original image is a group photo of person C and person D, person C and person D may each serve as both background person and target person: the portrait feature vector of person C is fused with the attribute feature vector of person D, the portrait feature vector of person D is fused with the attribute feature vector of person C, and the synthesized target image is a group photo of person C and person D after face swapping, which contains two new persons. Therefore, in the present embodiment, the person in the target image being different from the person in the original image includes: the person in the target image is a person obtained by face-swapping the background person and the target person; or the person in the target image is a person obtained by fusing the background person and the target person. Specifically, when the target image includes one new person, the person in the target image is a person obtained by fusing the background person and the target person in the original image; when the target image includes a plurality of new persons, the persons in the target image are the plurality of new persons obtained after face-swapping the persons in the original image, who serve as background persons and target persons for each other.
It should be noted that the above solution of the embodiments of the present disclosure is described by taking the processing of picture data to generate a new image as an example, but it is not limited to scenes that only process pictures. For example, during a video call the video streams of both parties may be input into the adversarial network model, so as to generate a new video stream according to the feature vectors of both parties. For example, the method of this embodiment may be used to perform a face-swapping operation on both parties of a video call, so as to make the video call more entertaining.
The embodiment of the present disclosure provides an image generation method, which obtains the attribute feature vector and the portrait feature vector of each person in an original image through an adversarial network model and, since the original image includes a background person and a target person, selects the attribute feature vector of the background person and the portrait feature vector of the target person to generate a target image. The person in the target image generated by the scheme of the embodiment of the present disclosure is a new person image that is obtained by fusing the persons in the original image and that is different from the persons in the original image. Because the attribute feature vector is taken into account, the problem of poor aesthetic quality of the newly generated image caused by differing person attribute features (such as the size and angle of the regions to be fused) during fusion is avoided, so the transition of the fusion region in the new image is more natural and the effect is more vivid.
Fig. 2A shows a flowchart of another image generation method provided by an embodiment of the present disclosure, and fig. 2B shows a flowchart of verifying the initial network model provided by an embodiment of the present disclosure. This embodiment is optimized on the basis of the alternatives provided by the above embodiment, and specifically gives a detailed description of how the adversarial network model is trained before it is used to obtain the attribute feature vectors and portrait feature vectors of persons in the original image.
Optionally, as shown in fig. 2A-2B, the method in this embodiment may include the following steps:
s201, inputting the sample image into the initial network model, and training the initial network model.
The sample images may be the training data required for training the initial network model, and may consist of image data of different persons together with the attribute feature vectors and portrait feature vectors of the persons corresponding to the image data; the sample images required in this embodiment should cover as many attribute features as possible, for example various expressions, orientation angles, body types, and so on. Optionally, in this embodiment, the plurality of persons contained in the sample images may be divided into sample background persons and sample target persons. The initial network model may be a pre-constructed network model that includes a portrait decoupling network and a classification network. The portrait decoupling network is used to extract the feature vectors of persons in the original image, and the classification network is used to determine, by classification, the attribute feature vectors and portrait feature vectors of persons in the original image; specifically, the feature vectors of persons in the original image are compared with the standard vector in the classification network to obtain the attribute feature vectors and portrait feature vectors of the persons in the original image. The portrait decoupling network further includes an attribute feature decoupling network and a portrait feature decoupling network.
Optionally, in the process of training the initial network model, it is mainly the portrait decoupling network that is trained; specifically, the attribute feature decoupling network and the portrait feature decoupling network in the portrait decoupling network may be trained separately and independently with the sample image data. For example, the attribute feature decoupling network can be trained with each person image in the sample images and its attribute feature vector, and the portrait feature decoupling network can be trained with each person image in the sample images and its portrait feature vector. In addition, the portrait feature decoupling network may be trained with the sample images after the attribute feature decoupling network has been trained with them, or the two training processes may be started simultaneously, training the attribute feature decoupling network and the portrait feature decoupling network with the sample image data at the same time.
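A hedged sketch of one training step over the two decoupling branches is shown below; the reconstruction objective, the identity-classification loss on the portrait branch, and the loss weights are assumptions chosen only to make the example runnable, not the patent's prescribed training procedure.

```python
import torch
import torch.nn.functional as F

def train_step(attr_net, portrait_net, id_classifier, generator, optimizer, batch):
    """One illustrative training step of the initial network model.

    attr_net      : attribute feature decoupling network -> (mu, log_var)
    portrait_net  : portrait feature decoupling network  -> portrait vector
    id_classifier : constrains the portrait vector to be identity-related
    generator     : reconstructs an image from the two decoupled vectors
    """
    image, person_id = batch

    mu, log_var = attr_net(image)
    attr_vec = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterised sample
    portrait_vec = portrait_net(image)

    recon = generator(attr_vec, portrait_vec)
    loss_recon = F.l1_loss(recon, image)
    # Kullback-Leibler constraint: keep the attribute vector near N(0, I)
    loss_kl = 0.5 * torch.sum(mu.pow(2) + log_var.exp() - log_var - 1.0, dim=-1).mean()
    # Classification constraint: make the portrait vector strongly identity-related
    loss_cls = F.cross_entropy(id_classifier(portrait_vec), person_id)

    loss = loss_recon + 0.01 * loss_kl + loss_cls
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```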
It should be noted that, in this step, the sample image is input into the initial network model, and various parameters in the initial network model are trained, and the network structure of the trained initial network model does not change, but only the values of the parameters of the initial network model are adjusted.
S202, inputting the verification image into the trained initial network model to obtain the attribute feature vector and the portrait feature vector of the person in the verification image.
The verification image may be verification image data used for verifying whether the trained initial network model meets the requirements, and may be selected in the process of acquiring the sample image, for example, in the process of acquiring the sample image, a certain proportion (e.g., 80%) of image data in the acquired image is taken as the sample image, and the remaining proportion (e.g., 20%) of image data is taken as the verification image. It is also possible to specifically select a person image different from the sample image as the verification image. Optionally, in order to ensure the accuracy of determining whether the trained initial network model meets the requirement, at least two sets of verification image data may be selected to verify the trained initial network model, where each set of verification image data includes a verification background person and a verification target person.
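As a simple illustration of that split, the collected person images could be partitioned as in the sketch below (the 80/20 ratio follows the example above; the helper name is hypothetical).

```python
import random

def split_sample_and_verification(image_paths, sample_ratio=0.8, seed=0):
    """Split the collected person images into sample (training) images and
    verification images, e.g. 80% / 20%."""
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    cut = int(len(paths) * sample_ratio)
    return paths[:cut], paths[cut:]
```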
Optionally, in this step, a pre-obtained verification image may be input into the trained initial network model, and the initial network model processes the input verification image according to the trained parameters to obtain the attribute feature vectors and portrait feature vectors of the persons in the verification image. The specific processing is similar to the process of obtaining the attribute feature vector and the portrait feature vector of a person in the original image through the adversarial network model in the above embodiment, and is not repeated here. For example, as shown in fig. 2B, the target person image and the background person image in the verification image may be respectively input into the portrait decoupling network of the trained initial network model; a set of feature vectors is obtained for the target person and a set of feature vectors is obtained for the background person. The two sets of feature vectors are then classified by the classification network, and the attribute feature vector and the portrait feature vector in each set are determined, so that the attribute feature vector and portrait feature vector of the verification target person and the attribute feature vector and portrait feature vector of the verification background person are obtained.
And S203, obtaining a verification target image according to the attribute feature vector of the verification background person and the portrait feature vector of the verification target person.
Optionally, this step is similar to the process of obtaining the target image according to the attribute feature vector of the background person and the portrait feature vector of the target person in the foregoing embodiment, and is not repeated here. Illustratively, as shown in fig. 2B, the attribute feature vector of the verification background person and the portrait feature vector of the verification target person obtained in S202 are subjected to image fusion to generate a verification target image including a new person.
And S204, judging whether the similarity between the person in the verification target image and the verification target person is smaller than a similarity threshold, if so, executing S205, otherwise, acquiring a new sample image, and returning to execute S201.
Optionally, in this step, the similarity between the newly generated person in the verification target image generated in S203 and the verification target person may be compared, so as to determine whether the newly generated person is still the verification target person. Specifically, it may be determined whether the similarity between the newly generated person and the target person in the verification image is smaller than a preset similarity threshold (e.g., 90%); if so, the person in the newly generated verification target image is different from the target person in the verification image, S205 is executed, and the initial network model is used as the final adversarial network model; otherwise, the parameters of the trained initial network model are not yet adequately adjusted, and a next group of sample images needs to be obtained to continue training the initial network model. Exemplarily, as shown in fig. 2B, the similarity between the person in the verification target image and the verification target person is calculated by a similarity determination module, and whether the similarity is smaller than the similarity threshold is determined; if so, training of the initial network model is complete, otherwise sample data needs to be obtained again to continue training the initial network model.
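A minimal sketch of this acceptance check is shown below; `embed_fn` stands in for whatever identity-embedding network computes the person similarity (the patent does not name one), and the cosine-similarity measure and 0.9 threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def passes_verification(embed_fn, verification_target_image, generated_target_image,
                        similarity_threshold=0.9):
    """Accept the trained initial network model only if the person in the generated
    verification target image differs enough from the verification target person,
    i.e. their identity similarity is below the threshold."""
    emb_target = F.normalize(embed_fn(verification_target_image), dim=-1)
    emb_generated = F.normalize(embed_fn(generated_target_image), dim=-1)
    similarity = (emb_target * emb_generated).sum(dim=-1).mean().item()
    return similarity < similarity_threshold
```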
And S205, if the similarity between the person in the verification target image and the verification target person is less than the similarity threshold, the initial network model is used as the adversarial network model.
And S206, obtaining the attribute feature vector and the portrait feature vector of the person in the original image through the adversarial network model. Wherein the persons in the original image include a background person and a target person.
And S207, obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person. Wherein the person in the target image is different from the person in the original image.
When training an initial network model for person feature recognition, a conventional scheme trains the network model with sample images and can then only recognize the features of the persons that appear in those sample images. For example, a network model trained with sample images of person A and person B can only recognize the person features of person A and person B during use, and can therefore only generate target images from the features of person A and person B; it cannot be applied to feature recognition of other, untrained persons. In the method of this embodiment, the sample images used to train the adversarial network model do not need to cover the person images of all persons to be recognized, and the trained adversarial network model can perform person feature recognition on any person. For example, an adversarial network model trained with sample images of person A and person B can perform person feature recognition not only on person A and person B but also on other persons.
The embodiment of the present disclosure provides an image generation method: a sample image is used to train an initial network model, a verification image is used to verify the trained initial network model by checking whether the new person in the verification target image generated by the trained initial network model is the target person in the verification image, and if not, training of the initial network model is complete and the adversarial network model is obtained. The attribute feature vector and portrait feature vector of each person in the original image are then obtained with the adversarial network model, and the attribute feature vector of the background person and the portrait feature vector of the target person in the original image are selected to generate the target image. With the scheme of the embodiment of the present disclosure, an adversarial network model capable of recognizing the portrait features and attribute features of persons in any image can be trained with a small number of sample images, which reduces the training cost and broadens the range of application of the adversarial network model; moreover, the transition of the fusion region of the target image generated with the adversarial network model is more natural and the effect is more vivid.
Fig. 3 is a schematic structural diagram of an image generation apparatus provided by an embodiment of the present disclosure. The apparatus is applicable to the case where a new person image is generated from images containing different persons, for example, the case where two different persons are face-swapped to generate a new person image different from both of them. The apparatus may be implemented in software and/or hardware and integrated into an electronic device that executes the method. As shown in fig. 3, the apparatus may include:
the vector acquisition module 301 is configured to obtain attribute feature vectors and portrait feature vectors of people in an original image through a confrontation network model, where the people in the original image include a background person and a target person;
an image generating module 302, configured to obtain a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, where the person in the target image is different from the person in the original image.
The embodiment of the present disclosure provides an image generation apparatus, which obtains the attribute feature vector and the portrait feature vector of each person in an original image through an adversarial network model and, since the original image includes a background person and a target person, selects the attribute feature vector of the background person and the portrait feature vector of the target person to generate a target image. The person in the target image generated by the scheme of the embodiment of the present disclosure is a new person image that is obtained by fusing the persons in the original image and that is different from the persons in the original image. Because the attribute feature vector is taken into account, the problem of poor aesthetic quality of the newly generated image caused by differing person attribute features (such as the size and angle of the regions to be fused) during fusion is avoided, so the transition of the fusion region in the new image is more natural and the effect is more vivid.
Further, the adversarial network model includes a portrait decoupling network and a classification network, and the vector acquisition module 301 is specifically configured to:
obtain a feature vector of a person in the original image through the portrait decoupling network;
and obtain the attribute feature vector and the portrait feature vector of the person in the original image through the classification network.
Further, when the vector acquisition module 301 obtains the attribute feature vector and the portrait feature vector of the person in the original image through the classification network, it is specifically configured to:
compare the feature vector of the person in the original image with the standard vector in the classification network to obtain the attribute feature vector and the portrait feature vector of the person in the original image.
Further, a Kullback-Leibler divergence loss function constraint is set in the portrait decoupling network.
Further, the apparatus further comprises: a model training module to:
inputting a sample image into an initial network model, and training the initial network model;
inputting a verification image into the trained initial network model to obtain attribute feature vectors and portrait feature vectors of persons in the verification image, wherein the verification image includes a verification background person and a verification target person;
obtaining a verification target image according to the attribute feature vector of the verification background person and the portrait feature vector of the verification target person;
and if the similarity between the person in the verification target image and the verification target person is less than the similarity threshold, using the initial network model as the adversarial network model.
Further, the person in the target image being different from the person in the original image includes:
the person in the target image is a person obtained by face-swapping the background person and the target person; or the person in the target image is a person obtained by fusing the background person and the target person.
Further, the attribute feature vector includes: at least one of a character expression vector, an orientation angle vector, a character body type vector and a background feature vector; the portrait feature vector includes at least one of a facial feature vector, a hair feature vector, and a limb feature vector.
The image generating apparatus provided by the embodiment of the present disclosure belongs to the same inventive concept as the image generating method provided by the above embodiments, and the technical details that are not described in detail in the embodiment of the present disclosure can be referred to the above embodiments, and the embodiment of the present disclosure has the same beneficial effects as the above embodiments.
Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device in the embodiment of the present disclosure may be a device corresponding to a backend service platform of an application program, and may also be a mobile terminal device installed with an application program client. In particular, the electronic device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a stationary terminal such as a digital TV, a desktop computer, etc. The electronic device 400 shown in fig. 4 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some implementations, electronic devices may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain attribute feature vectors and portrait feature vectors of persons in an original image through an adversarial network model, where the persons in the original image include a background person and a target person; and obtain a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, where the person in the target image is different from the persons in the original image.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image generating method including:
obtaining attribute feature vectors and portrait feature vectors of persons in an original image through an adversarial network model, wherein the persons in the original image comprise a background person and a target person;
and obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, wherein the person in the target image is different from the person in the original image.
According to one or more embodiments of the present disclosure, in the above method, the adversarial network model includes a portrait decoupling network and a classification network, and obtaining the attribute feature vector and the portrait feature vector of the person in the original image through the adversarial network model includes:
obtaining a feature vector of a person in the original image through the portrait decoupling network;
and obtaining attribute feature vectors and portrait feature vectors of people in the original image through the classification network.
According to one or more embodiments of the present disclosure, in the above method, obtaining the attribute feature vector and the portrait feature vector of the person in the original image through the classification network includes:
comparing the feature vector of the person in the original image with the standard vector in the classification network to obtain the attribute feature vector and the portrait feature vector of the person in the original image.
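The comparison with the standard vector is not detailed further in the text. One plausible reading, sketched below, is that each learned standard (reference) vector tags part of the person's feature vector as attribute-like or portrait-like via a similarity measure. The function name, the cosine-similarity rule, and the slice-based grouping are assumptions introduced here for illustration only.

```python
# One possible interpretation of the standard-vector comparison; all names
# and the cosine-similarity rule are assumptions, not the disclosed method.
import torch
import torch.nn.functional as F

def split_by_standard_vectors(person_feat, standard_vecs, labels):
    """person_feat: (num_slices, dim); standard_vecs: (num_classes, dim);
    labels: per-class tag, 0 = attribute-like, 1 = portrait-like."""
    sims = F.cosine_similarity(
        person_feat.unsqueeze(1), standard_vecs.unsqueeze(0), dim=-1
    )                                   # (num_slices, num_classes)
    best = sims.argmax(dim=1)           # closest standard vector per slice
    attr_mask = labels[best] == 0
    attribute_vec = person_feat[attr_mask].flatten()
    portrait_vec = person_feat[~attr_mask].flatten()
    return attribute_vec, portrait_vec

person_feat = torch.randn(8, 32)        # 8 feature slices of dimension 32
standard_vecs = torch.randn(4, 32)      # 4 learned reference vectors
labels = torch.tensor([0, 0, 1, 1])     # attribute vs portrait tags
attr_vec, portrait_vec = split_by_standard_vectors(person_feat, standard_vecs, labels)
```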
According to one or more embodiments of the present disclosure, in the above method, the portrait decoupling network is constrained by a Kullback-Leibler divergence loss function.
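A Kullback-Leibler divergence constraint is commonly imposed in the variational-autoencoder style, penalizing the distance between the latent distribution predicted by the encoder and a standard normal distribution. The closed form below is that standard formulation; whether the portrait decoupling network uses exactly this form is not stated in the disclosure.

```python
# Standard KL-divergence term KL( N(mu, sigma^2) || N(0, 1) ); a common way
# to constrain an encoder's latent space, shown here for illustration only.
import torch

def kl_divergence_loss(mu, log_var):
    # Sum over latent dimensions, average over the batch.
    return (-0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=1)).mean()

mu = torch.randn(4, 256)       # predicted means for a batch of 4 persons
log_var = torch.randn(4, 256)  # predicted log-variances
loss = kl_divergence_loss(mu, log_var)
```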
According to one or more embodiments of the present disclosure, the method further includes:
inputting a sample image into an initial network model, and training the initial network model;
inputting a verification image into the trained initial network model to obtain an attribute feature vector and a portrait feature vector of a person in the verification image, wherein the verification image comprises a verification background person and a verification target person;
obtaining a verification target image according to the attribute feature vector of the verification background person and the portrait feature vector of the verification target person;
and if the similarity between the person in the verification target image and the verification target person is less than a similarity threshold, taking the initial network model as the adversarial network model.
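The validation step described in the preceding paragraphs can be sketched as follows, reusing the hypothetical encoder and generator from the earlier sketch. The similarity measure (cosine similarity between portrait feature vectors) and the threshold value are assumptions introduced here; the acceptance condition follows the text as written.

```python
# Hedged sketch of the validation check; model.encoder / model.generator refer
# to the hypothetical modules from the earlier illustration, not the disclosure.
import torch
import torch.nn.functional as F

def accept_as_adversarial_model(model, verify_background_img, verify_target_img,
                                threshold=0.9):
    bg_attr, _ = model.encoder(verify_background_img)
    _, tgt_portrait = model.encoder(verify_target_img)
    verify_target_image = model.generator(bg_attr, tgt_portrait)

    # Re-encode the generated image and compare its portrait features with
    # those of the verification target person.
    _, generated_portrait = model.encoder(verify_target_image)
    similarity = F.cosine_similarity(generated_portrait, tgt_portrait, dim=1).mean()

    # Per the text: if the similarity is less than the threshold, the trained
    # initial network model is taken as the adversarial network model.
    return similarity.item() < threshold
```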
According to one or more embodiments of the present disclosure, in the above method, the person in the target image being different from the person in the original image comprises:
the person in the target image is a person obtained by face swapping between the background person and the target person; or the person in the target image is a person obtained by fusing the background person and the target person.
According to one or more embodiments of the present disclosure, in the above method, the attribute feature vector includes at least one of a person expression vector, an orientation angle vector, a person body shape vector, and a background feature vector; the portrait feature vector includes at least one of a facial feature vector, a hair feature vector, and a limb feature vector.
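For illustration only, the two feature groups listed above could be carried as the following containers; the grouping follows the text, while the class names, field names, and tensor types are assumptions.

```python
# Illustrative containers for the attribute and portrait feature groups.
from dataclasses import dataclass
import torch

@dataclass
class AttributeFeatures:
    expression: torch.Tensor         # person expression vector
    orientation_angle: torch.Tensor  # orientation angle vector
    body_shape: torch.Tensor         # person body shape vector
    background: torch.Tensor         # background feature vector

@dataclass
class PortraitFeatures:
    face: torch.Tensor               # facial feature vector
    hair: torch.Tensor               # hair feature vector
    limbs: torch.Tensor              # limb feature vector
```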
According to one or more embodiments of the present disclosure, there is provided an image generating apparatus including:
the vector acquisition module is used for acquiring attribute feature vectors and portrait feature vectors of people in the original image through the adversarial network model, wherein the people in the original image comprise a background person and a target person;
and the image generation module is used for obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, wherein the person in the target image is different from the person in the original image.
According to one or more embodiments of the present disclosure, the adversarial network model in the above apparatus includes a portrait decoupling network and a classification network, and the vector acquisition module is specifically configured to:
obtain a feature vector of the person in the original image through the portrait decoupling network;
and obtain the attribute feature vector and the portrait feature vector of the person in the original image through the classification network.
According to one or more embodiments of the present disclosure, when obtaining the attribute feature vector and the portrait feature vector of the person in the original image through the classification network, the vector acquisition module in the above apparatus is specifically configured to:
compare the feature vector of the person in the original image with the standard vector in the classification network to obtain the attribute feature vector and the portrait feature vector of the person in the original image.
According to one or more embodiments of the present disclosure, the portrait decoupling network in the above apparatus is constrained by a Kullback-Leibler divergence loss function.
According to one or more embodiments of the present disclosure, the above apparatus further includes a model training module configured to:
input a sample image into an initial network model, and train the initial network model;
input a verification image into the trained initial network model to obtain an attribute feature vector and a portrait feature vector of a person in the verification image, wherein the verification image comprises a verification background person and a verification target person;
obtain a verification target image according to the attribute feature vector of the verification background person and the portrait feature vector of the verification target person;
and if the similarity between the person in the verification target image and the verification target person is less than a similarity threshold, take the initial network model as the adversarial network model.
According to one or more embodiments of the present disclosure, in the above apparatus, the person in the target image being different from the person in the original image comprises:
the person in the target image is a person obtained by face swapping between the background person and the target person; or the person in the target image is a person obtained by fusing the background person and the target person.
According to one or more embodiments of the present disclosure, the attribute feature vector in the above apparatus includes at least one of a person expression vector, an orientation angle vector, a person body shape vector, and a background feature vector; the portrait feature vector includes at least one of a facial feature vector, a hair feature vector, and a limb feature vector.
According to one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement an image generation method as in any embodiment of the present disclosure.
According to one or more embodiments of the present disclosure, a readable medium is provided, on which a computer program is stored which, when executed by a processor, implements an image generation method according to any embodiment of the present disclosure.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An image generation method, comprising:
obtaining attribute feature vectors and portrait feature vectors of people in an original image through an adversarial network model, wherein the people in the original image comprise a background person and a target person;
and obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, wherein the person in the target image is different from the person in the original image.
2. The method of claim 1, wherein the adversarial network model comprises a portrait decoupling network and a classification network, and the obtaining of the attribute feature vector and the portrait feature vector of the person in the original image through the adversarial network model comprises:
obtaining a feature vector of the person in the original image through the portrait decoupling network;
and obtaining the attribute feature vector and the portrait feature vector of the person in the original image through the classification network.
3. The method of claim 2, wherein obtaining the attribute feature vector and the portrait feature vector of the person in the original image through the classification network comprises:
comparing the feature vector of the person in the original image with the standard vector in the classification network to obtain the attribute feature vector and the portrait feature vector of the person in the original image.
4. The method of claim 2, wherein the portrait decoupling network is constrained by a Kullback-Leibler divergence loss function.
5. The method of claim 1, wherein before obtaining the attribute feature vectors and the portrait feature vectors of the people in the original image through the adversarial network model, the method further comprises:
inputting a sample image into an initial network model, and training the initial network model;
inputting a verification image into the trained initial network model to obtain an attribute feature vector and a portrait feature vector of a person in the verification image, wherein the verification image comprises a verification background person and a verification target person;
obtaining a verification target image according to the attribute feature vector of the verification background person and the portrait feature vector of the verification target person;
and if the similarity between the person in the verification target image and the verification target person is less than a similarity threshold, taking the initial network model as the adversarial network model.
6. The method of any of claims 1-5, wherein the person in the target image being different from the person in the original image comprises:
the person in the target image is a person obtained by face swapping between the background person and the target person; or the person in the target image is a person obtained by fusing the background person and the target person.
7. The method of any of claims 1-5, wherein the attribute feature vector comprises: at least one of a person expression vector, an orientation angle vector, a person body shape vector, and a background feature vector; the portrait feature vector comprises: at least one of a facial feature vector, a hair feature vector, and a limb feature vector.
8. An image generation apparatus, comprising:
the vector acquisition module is used for acquiring attribute feature vectors and portrait feature vectors of people in an original image through an adversarial network model, wherein the people in the original image comprise a background person and a target person;
and the image generation module is used for obtaining a target image according to the attribute feature vector of the background person and the portrait feature vector of the target person, wherein the person in the target image is different from the person in the original image.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement the image generation method of any one of claims 1-7.
10. A readable medium, on which a computer program is stored which, when executed by a processor, carries out the image generation method of any one of claims 1 to 7.
CN201910913767.6A 2019-09-25 2019-09-25 Image generation method and device, electronic equipment and storage medium Active CN110619602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910913767.6A CN110619602B (en) 2019-09-25 2019-09-25 Image generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910913767.6A CN110619602B (en) 2019-09-25 2019-09-25 Image generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110619602A true CN110619602A (en) 2019-12-27
CN110619602B CN110619602B (en) 2024-01-09

Family

ID=68924626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910913767.6A Active CN110619602B (en) 2019-09-25 2019-09-25 Image generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110619602B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303229A (en) * 2016-08-04 2017-01-04 努比亚技术有限公司 A kind of photographic method and device
CN106599817A (en) * 2016-12-07 2017-04-26 腾讯科技(深圳)有限公司 Face replacement method and device
CN106951867A (en) * 2017-03-22 2017-07-14 成都擎天树科技有限公司 Face identification method, device, system and equipment based on convolutional neural networks
CN107316020A (en) * 2017-06-26 2017-11-03 司马大大(北京)智能系统有限公司 Face replacement method, device and electronic equipment
US20190005657A1 (en) * 2017-06-30 2019-01-03 Baidu Online Network Technology (Beijing) Co., Ltd . Multiple targets-tracking method and apparatus, device and storage medium
CN108510435A (en) * 2018-03-28 2018-09-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109859295A (en) * 2019-02-01 2019-06-07 厦门大学 A kind of specific animation human face generating method, terminal device and storage medium
CN109919891A (en) * 2019-03-14 2019-06-21 Oppo广东移动通信有限公司 Imaging method, device, terminal and storage medium
CN110070049A (en) * 2019-04-23 2019-07-30 北京市商汤科技开发有限公司 Facial image recognition method and device, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369468A (en) * 2020-03-09 2020-07-03 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111369468B (en) * 2020-03-09 2022-02-01 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111640166A (en) * 2020-06-08 2020-09-08 上海商汤智能科技有限公司 AR group photo method, AR group photo device, computer equipment and storage medium
CN111640166B (en) * 2020-06-08 2024-03-26 上海商汤智能科技有限公司 AR group photo method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110619602B (en) 2024-01-09

Similar Documents

Publication Publication Date Title
CN112989904B (en) Method for generating style image, method, device, equipment and medium for training model
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
CN111784566B (en) Image processing method, migration model training method, device, medium and equipment
CN111476871B (en) Method and device for generating video
CN111369427B (en) Image processing method, image processing device, readable medium and electronic equipment
CN110570383B (en) Image processing method and device, electronic equipment and storage medium
CN110796721A (en) Color rendering method and device of virtual image, terminal and storage medium
CN109829432A (en) Method and apparatus for generating information
CN111696176A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN114092678A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111833242A (en) Face transformation method and device, electronic equipment and computer readable medium
CN111968029A (en) Expression transformation method and device, electronic equipment and computer readable medium
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
CN110689546A (en) Method, device and equipment for generating personalized head portrait and storage medium
CN115311178A (en) Image splicing method, device, equipment and medium
CN115965840A (en) Image style migration and model training method, device, equipment and medium
CN114913061A (en) Image processing method and device, storage medium and electronic equipment
CN110619602B (en) Image generation method and device, electronic equipment and storage medium
CN111967397A (en) Face image processing method and device, storage medium and electronic equipment
CN110689478A (en) Image stylization processing method and device, electronic equipment and readable medium
CN114049417A (en) Virtual character image generation method and device, readable medium and electronic equipment
CN114418835B (en) Image processing method, device, equipment and medium
CN113628097A (en) Image special effect configuration method, image recognition method, image special effect configuration device and electronic equipment
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111784726A (en) Image matting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant