CN111340865A - Method and apparatus for generating image - Google Patents
- Publication number
- CN111340865A (application number CN202010112247.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- line drawing
- dimensional
- dimensional line
- preset
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the present disclosure disclose a method and apparatus for generating an image. One embodiment of the method comprises: acquiring images of a target face captured from at least two angles; performing style transfer on the acquired images to obtain two-dimensional line drawings of the target face; fusing the two-dimensional line drawings and generating a line-drawing expansion map corresponding to the fusion result; and generating a three-dimensional line drawing of the target face based on the line-drawing expansion map and a preset three-dimensional face model. The scheme provided by the embodiments of the present disclosure can generate a three-dimensional line drawing, enriching the presentation modes of three-dimensional face images through the line-drawing style.
Description
Technical Field
The disclosed embodiments relate to the field of computer technologies, and in particular, to a method and an apparatus for generating an image.
Background
With the proliferation of terminals, more and more users perform operations beyond making calls on terminals such as mobile phones. For example, users can watch videos on these terminals. Portable terminals such as mobile phones allow users to watch videos anytime and anywhere, which has promoted the development of live-streaming platforms and short-video platforms. Users can record videos themselves and upload them to these platforms for viewers to watch.
In scenarios such as taking selfies and live streaming, a user can apply various operations to his or her own image in the frame to make the picture more engaging. For example, the user may add special effects, such as stickers, to the face.
Disclosure of Invention
Embodiments of the present disclosure provide a method and an apparatus for generating an image.
In a first aspect, an embodiment of the present disclosure provides a method for generating an image, the method comprising: acquiring images of a target face captured from at least two angles; performing style transfer on the acquired images to obtain two-dimensional line drawings of the target face; fusing the two-dimensional line drawings and generating a line-drawing expansion map corresponding to the fusion result; and generating a three-dimensional line drawing of the target face based on the line-drawing expansion map and a preset three-dimensional face model.
In some embodiments, performing style transfer on the acquired images to obtain the two-dimensional line drawings of the target face includes: inputting the acquired images into a pre-trained generative adversarial network (GAN) to obtain the two-dimensional line drawings of the target face output from the network.
In some embodiments, the generative adversarial network comprises a generator that includes a preset number of sub-generators; for each preset part in an image containing a face, there is a sub-generator for processing that part. Inputting the acquired images into the pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face then includes: segmenting each acquired image to obtain the preset number of preset parts; for each obtained preset part, inputting the part into the sub-generator responsible for it to obtain a local two-dimensional line drawing corresponding to the part, where during training of the network each sub-generator learns the features of its corresponding preset part; and fusing the obtained local two-dimensional line drawings to generate the two-dimensional line drawing of the target face output from the network.
In some embodiments, the generative adversarial network includes a generator and a discriminator, and is trained as follows: inputting a face image from a training sample into the generator to obtain a two-dimensional line drawing corresponding to the face image, where the training sample further includes a reference two-dimensional line drawing corresponding to the face image; inputting the output two-dimensional line drawing and the reference two-dimensional line drawing into the discriminator, which judges whether they are images of the same class; if they are judged not to be of the same class, training the generator based on the output and reference line drawings; and if they are judged to be of the same class, taking the current network as the trained generative adversarial network.
In some embodiments, generating the three-dimensional line drawing of the target face based on the line-drawing expansion map and the preset three-dimensional face model includes: determining, in the line-drawing expansion map, a local line-drawing expansion map containing preset facial features; and generating the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model.
In some embodiments, generating the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model includes: updating the background color of the local line-drawing expansion map to a preset color, where the original background color differs from the preset color; and combining the updated local line-drawing expansion map with the preset three-dimensional face model to obtain the three-dimensional line drawing of the target face.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating an image, the apparatus comprising: an acquiring unit configured to acquire images of a target face captured from at least two angles; a style transfer unit configured to perform style transfer on the acquired images to obtain two-dimensional line drawings of the target face; a first generating unit configured to fuse the two-dimensional line drawings and generate a line-drawing expansion map corresponding to the fusion result; and a second generating unit configured to generate a three-dimensional line drawing of the target face based on the line-drawing expansion map and a preset three-dimensional face model.
In some embodiments, the style transfer unit is further configured to perform the style transfer on the acquired images to obtain the two-dimensional line drawings of the target face by: inputting the acquired images into a pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face output from the network.
In some embodiments, the generative adversarial network comprises a generator that includes a preset number of sub-generators; for each preset part in an image containing a face, there is a sub-generator for processing that part. The style transfer unit is further configured to input the acquired images into the pre-trained generative adversarial network and obtain the two-dimensional line drawings of the target face output from the network by: segmenting each acquired image to obtain the preset number of preset parts; for each obtained preset part, inputting the part into the sub-generator responsible for it to obtain a local two-dimensional line drawing corresponding to the part, where during training of the network each sub-generator learns the features of its corresponding preset part; and fusing the obtained local two-dimensional line drawings to generate the two-dimensional line drawing of the target face output from the network.
In some embodiments, the generative adversarial network includes a generator and a discriminator, and is trained as follows: inputting a face image from a training sample into the generator to obtain a two-dimensional line drawing corresponding to the face image, where the training sample further includes a reference two-dimensional line drawing corresponding to the face image; inputting the output two-dimensional line drawing and the reference two-dimensional line drawing into the discriminator, which judges whether they are images of the same class; if they are judged not to be of the same class, training the generator based on the output and reference line drawings; and if they are judged to be of the same class, taking the current network as the trained generative adversarial network.
In some embodiments, the second generating unit is further configured to generate the three-dimensional line drawing of the target face based on the line-drawing expansion map and the preset three-dimensional face model by: determining, in the line-drawing expansion map, a local line-drawing expansion map containing preset facial features; and generating the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model.
In some embodiments, the second generating unit is further configured to generate the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model by: updating the background color of the local line-drawing expansion map to a preset color, where the original background color differs from the preset color; and combining the updated local line-drawing expansion map with the preset three-dimensional face model to obtain the three-dimensional line drawing of the target face.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and apparatus for generating an image, images of the target face captured from at least two angles are first acquired. Style transfer is then performed on the acquired images to obtain two-dimensional line drawings of the target face. The two-dimensional line drawings are fused, and a line-drawing expansion map corresponding to the fusion result is generated. Finally, a three-dimensional line drawing of the target face is generated based on the line-drawing expansion map and a preset three-dimensional face model. The scheme provided by the embodiments of the present disclosure can generate a three-dimensional line drawing, enriching the presentation modes of three-dimensional face images through the line-drawing style. In addition, by using images captured from multiple angles, more comprehensive facial texture information can be acquired for generating the three-dimensional image, making the generated image more accurate and closer to the real face that was photographed.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2a is a flow diagram of one embodiment of a method for generating an image according to the present disclosure;
FIG. 2b is a schematic diagram of camera positions for the method for generating an image according to the present disclosure;
FIG. 3 is a schematic illustration of an application scenario of a method for generating an image according to the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for generating an image according to the present disclosure;
FIG. 5 is a schematic block diagram of one embodiment of an apparatus for generating an image according to the present disclosure;
fig. 6 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which the method or apparatus for generating an image of an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a live application, a short video application, a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. When they are software, they can be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server that provides various services, such as a backend server that supports the three-dimensional animations displayed on the terminal devices 101, 102, 103. The backend server may analyze and otherwise process the received images of the target face captured from at least two angles, and feed the processing result (e.g., a three-dimensional line drawing of the target face) back to the terminal device.
It should be noted that the method for generating an image provided by the embodiments of the present disclosure may be executed by the server 105 or by the terminal devices 101, 102, and 103; accordingly, the apparatus for generating an image may be disposed in the server 105 or in the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2a, a flow 200 of one embodiment of a method for generating an image according to the present disclosure is shown. The method for generating an image comprises the following steps:
Step 201: acquire images of the target face captured from at least two angles.
In the present embodiment, the executing entity of the method for generating an image (e.g., the server shown in FIG. 1) may acquire images captured of the target face. Specifically, the acquired images are captured from at least two angles: for example, the target face may be captured by at least two cameras, or by the same camera from different angles. The images may be video frames or independent, non-consecutive images.
As shown in FIG. 2b, cameras at multiple positions can capture the target face from multiple angles. The difference between the shooting angle of the main shooting position (the solid arrow in FIG. 2b) and the frontal angle of the face does not exceed a preset angle, such as 30° or 45°. In addition, the auxiliary shooting positions (the dashed arrows in FIG. 2b) can shoot from the side; auxiliary positions in a spherical arrangement can capture clearer and more detailed texture information for the forehead and chin of the face.
Step 202: perform style transfer on the acquired images to obtain two-dimensional line drawings of the target face.
In this embodiment, the executing entity may perform style transfer on the acquired images, the result being two-dimensional line drawings of the target face. A two-dimensional line drawing is an image drawn with lines of a single color, for example black lines on a white background.
In practice, the executing entity may perform the style transfer in various ways to obtain the two-dimensional line drawings. For example, it may use an edge detection algorithm to determine the two-dimensional line drawing corresponding to each acquired image.
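As a rough illustration of the edge-detection route mentioned above (a simple alternative to the GAN-based style transfer described later, not the patent's own implementation), the following Python sketch turns a face photo into a black-on-white line drawing. The file names and Canny thresholds are assumptions for illustration.

```python
import cv2
import numpy as np

def to_line_drawing(image_path: str, low: int = 60, high: int = 150) -> np.ndarray:
    """Approximate a two-dimensional line drawing via Canny edge detection."""
    img = cv2.imread(image_path)                  # BGR face photo
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before edge detection
    edges = cv2.Canny(gray, low, high)            # white edges on a black background
    return 255 - edges                            # invert: black lines on white

# One line drawing per shooting angle (the file names are placeholders).
line_drawings = [to_line_drawing(p) for p in ("front.jpg", "left.jpg", "right.jpg")]
```

An edge map produced this way is a serviceable stand-in for a line drawing, though it generally contains more spurious strokes than a generator trained on reference line drawings would produce.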
Step 203: fuse the two-dimensional line drawings and generate a line-drawing expansion map corresponding to the fusion result.
In this embodiment, the executing entity may fuse the obtained two-dimensional line drawings and generate a line-drawing expansion map corresponding to the fusion result. Fusion here may refer to aligning key points and merging the aligned drawings into a single face image. In practice, the line-drawing expansion map can be attached to the preset three-dimensional face model: because a mapping exists between the preset three-dimensional face model and the two-dimensional plane, the executing entity can attach the two-dimensional expansion map to the model based on its key points once those key points are obtained.
The preset three-dimensional face model may include the key points of a three-dimensional face. The expansion map is a two-dimensional image, namely the texture unwrapping of the three-dimensional face; attaching it to the three-dimensional face model yields a three-dimensional face that matches the features of the face in the expansion map. In practice, the executing entity may input the fusion result (optionally together with other information) into a preset model to generate the line-drawing expansion map. The preset model can represent the correspondence between a captured two-dimensional face image, such as the fusion result, and the expansion map; alternatively, it can represent the correspondence between the captured two-dimensional face image, other information (such as the pose of the face), and the expansion map. The captured two-dimensional face image here may be an original photograph or an image derived from it by processing other than style transfer.
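As a minimal sketch of the key-point-based fusion, the code below aligns each two-dimensional line drawing to a reference landmark layout with a similarity transform and blends the results into a single drawing. It assumes 2D facial landmarks are already available from some landmark detector; the similarity-transform alignment is one plausible reading of the key-point alignment described here, not the patent's exact procedure.

```python
import cv2
import numpy as np

def fuse_line_drawings(drawings, landmarks, ref_landmarks, size=(512, 512)):
    """Align each line drawing to a reference landmark layout, then blend.

    drawings:      list of HxW uint8 line drawings (black lines on white)
    landmarks:     list of (K, 2) float arrays, one per drawing
    ref_landmarks: (K, 2) float array in the fused image's coordinates
    """
    fused = np.full(size[::-1], 255, dtype=np.uint8)   # white canvas
    for img, pts in zip(drawings, landmarks):
        # Similarity transform (rotation + scale + translation) from this
        # drawing's key points to the reference key points.
        M, _ = cv2.estimateAffinePartial2D(pts.astype(np.float32),
                                           ref_landmarks.astype(np.float32))
        warped = cv2.warpAffine(img, M, size, borderValue=255)
        fused = np.minimum(fused, warped)              # keep the darkest lines
    return fused
```

The fused face image would then be passed to the preset model mentioned above to produce the line-drawing expansion map.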
Step 204: generate a three-dimensional line drawing of the target face based on the line-drawing expansion map and the preset three-dimensional face model.
In this embodiment, the executing entity may generate the three-dimensional line drawing of the target face based on the line-drawing expansion map and the preset three-dimensional face model. Specifically, it may attach the line-drawing expansion map to the preset three-dimensional face model, obtaining a three-dimensional line drawing that matches the facial features of the target face. The three-dimensional line drawing may further be lit and shaded, so that highlights and shadows appear in it and a more realistic three-dimensional effect is presented.
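To make the attachment step concrete, here is a minimal sketch of sampling texture colors from the line-drawing expansion map through the model's mapping to the two-dimensional plane. The per-vertex lookup and the UV origin convention are simplifying assumptions; lighting and shading are left to whatever renderer displays the textured mesh.

```python
import numpy as np

def sample_texture_per_vertex(uv, texture):
    """Look up a texture color for each mesh vertex from its UV coordinate.

    uv:      (V, 2) array in [0, 1], the model's unwrapping to the 2D plane
    texture: (H, W) or (H, W, C) line-drawing expansion map
    """
    h, w = texture.shape[:2]
    # UV origin is conventionally the bottom-left corner; images use top-left.
    cols = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return texture[rows, cols]

# vertex_colors = sample_texture_per_vertex(model_uv, expansion_map)
# A renderer can then light the textured mesh so highlights and shadows appear.
```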
In some optional implementations of this embodiment, step 204 may include: determining, in the line-drawing expansion map, a local line-drawing expansion map containing preset facial features; and generating the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model.
In these optional implementations, the executing entity may determine, within the line-drawing expansion map, only the portion corresponding to the facial features, i.e., the local line-drawing expansion map. It may then generate the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model, for example by attaching the local map to the model to obtain the three-dimensional line drawing. The preset facial features here may include at least one of: eyebrows, eyes, ears, nose, mouth. Regions of the expansion map outside the local map, such as the chin, may contain lines formed by skin folds. Because such a line and its surroundings may contain few key points, attaching it to the three-dimensional model may be inaccurate: the three-dimensional position of the line in the resulting drawing may not fully correspond to its three-dimensional position on the actual face.
These implementations avoid inaccuracies in the three-dimensional line drawing caused by the sparsity of key points around lines outside the preset facial features.
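A minimal sketch of extracting such a local expansion map from facial landmarks follows. The landmark-index groups follow a common 68-point convention and are assumptions, as are the padding and the white background; eyebrows and ears could be added in the same way.

```python
import cv2
import numpy as np

# Hypothetical landmark-index groups for the preset facial features; the
# actual indices depend on the landmark detector used.
FEATURE_GROUPS = {"left_eye": range(36, 42), "right_eye": range(42, 48),
                  "nose": range(27, 36), "mouth": range(48, 68)}

def local_expansion_map(expansion_map, landmarks, pad=10):
    """Keep only the facial-feature regions of a line-drawing expansion map."""
    mask = np.zeros(expansion_map.shape[:2], dtype=np.uint8)
    for idx in FEATURE_GROUPS.values():
        pts = landmarks[list(idx)].astype(np.int32)
        cv2.fillConvexPoly(mask, cv2.convexHull(pts), 255)
    mask = cv2.dilate(mask, np.ones((pad, pad), np.uint8))  # small safety margin
    local = np.full_like(expansion_map, 255)                # white everywhere else
    local[mask > 0] = expansion_map[mask > 0]
    return local
```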
In some optional application scenarios of these implementations, generating the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model may include: updating the background color of the local line-drawing expansion map to a preset color, where the original background color differs from the preset color; and combining the updated local line-drawing expansion map with the preset three-dimensional face model to obtain the three-dimensional line drawing of the target face.
In these optional application scenarios, the executing entity may update the background color of the local line-drawing expansion map. The preset color used for the update may be white, gray, or a color determined at run time. In practice, the background color of the local line-drawing expansion map may differ greatly from the base color of the three-dimensional face model; attaching the map directly would make the base color of the resulting three-dimensional line drawing very uneven. The executing entity can therefore update the background color, for example to the base color of the three-dimensional face model. Alternatively, it may compute the average color of the line-drawing expansion map and use that average color as the updated background color of the local map.
These application scenarios avoid uneven color in the generated three-dimensional line drawing, making the result more vivid and natural.
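A minimal sketch of the background update for a grayscale local expansion map, assuming pixels darker than a threshold are drawn lines and everything else is background; the threshold and the preset color are illustrative.

```python
import numpy as np

def recolor_background(local_map, line_threshold=128, preset_color=230):
    """Replace the background of a local line-drawing expansion map.

    Pixels darker than `line_threshold` are treated as drawn lines and kept;
    all other pixels are considered background and set to `preset_color`,
    e.g. the base color of the three-dimensional face model or the average
    color of the expansion map.
    """
    out = local_map.copy()
    out[out >= line_threshold] = preset_color
    return out

# Using the expansion map's own average color as the preset color:
# recolored = recolor_background(local_map, preset_color=int(local_map.mean()))
```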
With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the method for generating an image according to this embodiment. In the scenario of FIG. 3, the executing entity 301 acquires images 302 of a target face captured from three angles. Style transfer is performed on the acquired images 302 to obtain two-dimensional line drawings 303 of the target face. The two-dimensional line drawings 303 are fused, and a line-drawing expansion map 304 corresponding to the fusion result is generated. Based on the line-drawing expansion map and a preset three-dimensional face model, a three-dimensional line drawing 305 of the target face is generated.
The method provided by this embodiment of the disclosure can generate a three-dimensional line drawing, enriching the presentation modes of three-dimensional face images through the line-drawing style. In addition, by using images captured from multiple angles, more comprehensive facial texture information can be acquired for generating the three-dimensional image, making the generated image more accurate and closer to the real face that was photographed.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for generating an image is shown. The flow 400 of the method for generating an image comprises the steps of:
Step 401: acquire images of the target face captured from at least two angles.
In the present embodiment, the executing entity of the method for generating an image (e.g., the server shown in FIG. 1) may acquire images captured of the target face. Specifically, the acquired images are captured from at least two angles: for example, the target face may be captured by at least two cameras, or by the same camera from different angles. The images may be video frames or independent, non-consecutive images.
Step 402: input the acquired images into a pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face output from the network.
In this embodiment, the executing entity may input each acquired image into a pre-trained generative adversarial network (GAN), using the network to perform style transfer on the image and obtain the two-dimensional line drawing it outputs. The pre-trained GAN can represent the correspondence between a captured image (or an image derived from it by processing other than style transfer) and a two-dimensional line drawing.
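For illustration, here is a minimal PyTorch sketch of this inference step, assuming a pix2pix-style image-to-image generator saved as a whole module; the checkpoint path, input resolution, and normalization are placeholders rather than the patent's actual network.

```python
import torch
import torchvision.transforms as T
from PIL import Image

# A hypothetical pre-trained image-to-image generator (placeholder checkpoint).
generator = torch.load("line_drawing_generator.pt", map_location="cpu")
generator.eval()

to_tensor = T.Compose([T.Resize((256, 256)), T.ToTensor(),
                       T.Normalize(mean=[0.5] * 3, std=[0.5] * 3)])

def photo_to_line_drawing(path: str) -> torch.Tensor:
    """Run one captured face photo through the generator."""
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)  # (1, 3, H, W)
    with torch.no_grad():
        y = generator(x)                 # generator output in [-1, 1]
    return (y.squeeze(0) + 1.0) / 2.0    # rescale to [0, 1] for saving/display
```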
In some optional implementations of this embodiment, the generative adversarial network comprises a generator that includes a preset number of sub-generators; for each preset part in an image containing a face, there is a sub-generator for processing that part. Inputting the acquired images into the pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face then includes: segmenting each acquired image to obtain the preset number of preset parts; for each obtained preset part, inputting the part into the sub-generator responsible for it to obtain a local two-dimensional line drawing corresponding to the part, where during training of the network each sub-generator learns the features of its corresponding preset part; and fusing the obtained local two-dimensional line drawings to generate the two-dimensional line drawing of the target face output from the network.
In these optional implementations, the generator in the generative adversarial network may include a preset number of sub-generators. Each sub-generator corresponds to one preset part of a face image (an image containing a face). The preset parts may be defined on the face in advance (for example, by key-point coordinates), so that together they make up the whole face. For example, the preset parts may be the left-eye region, right-eye region, nose region, mouth region, hair region, and background region, in which case the preset number is 6. Each sub-generator processes its preset part of the target face to produce a local two-dimensional line drawing. The executing entity may then fuse the local two-dimensional line drawings produced by all sub-generators into one two-dimensional line drawing, for example using a fusion network.
During training of the generative adversarial network, each sub-generator learns the texture features of its own preset part; different sub-generators learn the features of different parts, so their training processes are mutually independent.
These implementations segment the face image and let a dedicated sub-generator process each preset part, allowing a thorough texture analysis of every local region of the image. This effectively improves processing accuracy and makes the generated two-dimensional line drawing closer to the real face that was photographed.
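A minimal sketch of the sub-generator arrangement follows. It assumes the part crops have already been segmented and resized to a shared resolution, that each sub-generator outputs a one-channel line map, and that the fusion network is a 1x1 convolution; all of these are illustrative simplifications, and `backbone` stands in for whatever image-to-image architecture each sub-generator uses.

```python
import torch
import torch.nn as nn

# Preset parts, one sub-generator each (names and count are illustrative).
PARTS = ["left_eye", "right_eye", "nose", "mouth", "hair", "background"]

class PartwiseGenerator(nn.Module):
    """One sub-generator per preset part, plus a small fusion network."""

    def __init__(self, backbone):
        super().__init__()
        # Each sub-generator learns the features of its own preset part.
        self.subs = nn.ModuleDict({p: backbone() for p in PARTS})
        self.fuse = nn.Conv2d(len(PARTS), 1, kernel_size=1)

    def forward(self, crops):
        # crops: dict mapping part name -> (N, C, H, W) tensor, already
        # segmented from the input image and resized to a common size.
        local_maps = [self.subs[p](crops[p]) for p in PARTS]  # local line drawings
        return torch.sigmoid(self.fuse(torch.cat(local_maps, dim=1)))
```

Because each sub-generator sees only its own part, the sub-generators can be trained independently of one another, as the description above notes.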
In some optional implementations of this embodiment, the generative adversarial network includes a generator and a discriminator, and is trained as follows: inputting a face image from a training sample into the generator to obtain a two-dimensional line drawing corresponding to the face image, where the training sample further includes a reference two-dimensional line drawing corresponding to the face image; inputting the output two-dimensional line drawing and the reference two-dimensional line drawing into the discriminator, which judges whether they are images of the same class; if they are judged not to be of the same class, training the generator based on the output and reference line drawings; and if they are judged to be of the same class, taking the current network as the trained generative adversarial network.
In these optional implementations, the executing entity may have the generator learn the features of face images, thereby training the generator and hence the generative adversarial network, and may use the discriminator to decide whether training has succeeded, stopping if it has and continuing otherwise. Specifically, the executing entity inputs the face image from a training sample into the generator and obtains an output two-dimensional line drawing. The discriminator, used for classification, then judges whether the line drawing generated by the generator and the reference two-dimensional line drawing are images of the same class; the reference two-dimensional line drawing may be a manually annotated image serving as the training target. If they are judged to be of the same class, training is deemed successful, and the executing entity may take the current generative adversarial network as the trained (i.e., pre-trained) network. If not, the executing entity may continue training the generator based on the output two-dimensional line drawing and the reference two-dimensional line drawing.
These implementations use the discriminator to check whether the line drawings produced by the generator meet the standard, making the trained generative adversarial network more accurate, so that the two-dimensional line drawings it generates are accurate and closer to a standard line-drawing style.
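A minimal adversarial training loop in the spirit of this description: the discriminator classifies generated versus reference line drawings, and the generator trains to make the two indistinguishable. The binary cross-entropy losses, Adam optimizers, and fixed epoch count are simplifying assumptions; the patent's stopping rule is the discriminator's same-class judgment rather than an epoch budget.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def train_gan(generator, discriminator, loader, epochs=10, lr=2e-4):
    g_opt = torch.optim.Adam(generator.parameters(), lr=lr)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr)
    for _ in range(epochs):
        for face, reference in loader:     # photo + annotated reference drawing
            fake = generator(face)
            real_lbl = torch.ones(len(face), 1)
            fake_lbl = torch.zeros(len(face), 1)
            # Discriminator: reference drawings -> 1, generated drawings -> 0.
            d_loss = (bce(discriminator(reference), real_lbl)
                      + bce(discriminator(fake.detach()), fake_lbl))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
            # Generator: make its output pass as a reference-class drawing.
            g_loss = bce(discriminator(fake), real_lbl)
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return generator, discriminator
```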
Step 403: fuse the two-dimensional line drawings and generate a line-drawing expansion map corresponding to the fusion result.
In this embodiment, the executing entity may fuse the obtained two-dimensional line drawings and generate a line-drawing expansion map corresponding to the fusion result. Specifically, the line-drawing expansion map can be attached to the preset three-dimensional face model: because a mapping exists from the preset three-dimensional face model to the two-dimensional image, the executing entity can attach the two-dimensional expansion map to the model based on its key points once those key points are obtained.
Step 404: generate a three-dimensional line drawing of the target face based on the line-drawing expansion map and the preset three-dimensional face model.
In this embodiment, the executing entity may generate the three-dimensional line drawing of the target face based on the line-drawing expansion map and the preset three-dimensional face model. Specifically, it may attach the line-drawing expansion map to the model, obtaining a three-dimensional line drawing that matches the facial features of the target face. The three-dimensional line drawing may further be lit and shaded, so that highlights and shadows appear in it and a more realistic three-dimensional effect is presented.
By generating the two-dimensional line drawings with a generative adversarial network, this embodiment improves their accuracy, which in turn helps improve the accuracy of the generated three-dimensional line drawing.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating an image, which corresponds to the method embodiment shown in fig. 2a, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for generating an image of this embodiment includes: an acquiring unit 501, a style transfer unit 502, a first generating unit 503, and a second generating unit 504. The acquiring unit 501 is configured to acquire images of a target face captured from at least two angles; the style transfer unit 502 is configured to perform style transfer on the acquired images to obtain two-dimensional line drawings of the target face; the first generating unit 503 is configured to fuse the two-dimensional line drawings and generate a line-drawing expansion map corresponding to the fusion result; and the second generating unit 504 is configured to generate a three-dimensional line drawing of the target face based on the line-drawing expansion map and a preset three-dimensional face model.
In this embodiment, for the specific processing of the acquiring unit 501, the style transfer unit 502, the first generating unit 503, and the second generating unit 504 of the apparatus 500 for generating an image, and the technical effects thereof, reference may be made to the descriptions of steps 201 to 204 in the embodiment corresponding to FIG. 2a; details are not repeated here.
In some optional implementations of this embodiment, the style transfer unit is further configured to perform the style transfer on the acquired images to obtain the two-dimensional line drawings of the target face by: inputting the acquired images into a pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face output from the network.
In some optional implementations of this embodiment, the generative adversarial network comprises a generator that includes a preset number of sub-generators; for each preset part in an image containing a face, there is a sub-generator for processing that part. The style transfer unit is further configured to input the acquired images into the pre-trained generative adversarial network and obtain the two-dimensional line drawings of the target face output from the network by: segmenting each acquired image to obtain the preset number of preset parts; for each obtained preset part, inputting the part into the sub-generator responsible for it to obtain a local two-dimensional line drawing corresponding to the part, where during training of the network each sub-generator learns the features of its corresponding preset part; and fusing the obtained local two-dimensional line drawings to generate the two-dimensional line drawing of the target face output from the network.
In some optional implementations of this embodiment, the generative adversarial network includes a generator and a discriminator, and is trained as follows: inputting a face image from a training sample into the generator to obtain a two-dimensional line drawing corresponding to the face image, where the training sample further includes a reference two-dimensional line drawing corresponding to the face image; inputting the output two-dimensional line drawing and the reference two-dimensional line drawing into the discriminator, which judges whether they are images of the same class; if they are judged not to be of the same class, training the generator based on the output and reference line drawings; and if they are judged to be of the same class, taking the current network as the trained generative adversarial network.
In some optional implementations of this embodiment, the second generating unit is further configured to generate the three-dimensional line drawing of the target face based on the line-drawing expansion map and the preset three-dimensional face model by: determining, in the line-drawing expansion map, a local line-drawing expansion map containing preset facial features; and generating the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model.
In some optional implementations of this embodiment, the second generating unit is further configured to generate the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model by: updating the background color of the local line-drawing expansion map to a preset color, where the original background color differs from the preset color; and combining the updated local line-drawing expansion map with the preset three-dimensional face model to obtain the three-dimensional line drawing of the target face.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including an acquiring unit 501, a style transfer unit 502, a first generating unit 503, and a second generating unit 504. The names of these units do not in some cases limit the units themselves; for example, the acquiring unit may also be described as "a unit that acquires images of a target face captured from at least two angles".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire images of a target face captured from at least two angles; perform style transfer on the acquired images to obtain two-dimensional line drawings of the target face; fuse the two-dimensional line drawings and generate a line-drawing expansion map corresponding to the fusion result; and generate a three-dimensional line drawing of the target face based on the line-drawing expansion map and a preset three-dimensional face model.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (14)
1. A method for generating an image, comprising:
acquiring images of a target face captured from at least two angles;
performing style transfer on the acquired images to obtain two-dimensional line drawings of the target face;
fusing the two-dimensional line drawings and generating a line-drawing expansion map corresponding to a fusion result; and
generating a three-dimensional line drawing of the target face based on the line-drawing expansion map and a preset three-dimensional face model.
2. The method of claim 1, wherein the performing style transfer on the acquired images to obtain the two-dimensional line drawings of the target face comprises:
inputting the acquired images into a pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face output from the generative adversarial network.
3. The method according to claim 2, wherein the generative adversarial network comprises a generator comprising a preset number of sub-generators, and for each preset part in an image containing a face there is a said sub-generator for processing the preset part;
the inputting the acquired images into the pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face output from the generative adversarial network comprises:
segmenting each acquired image to obtain the preset number of preset parts in the acquired image;
for each obtained preset part, inputting the preset part into the said sub-generator for processing the preset part to obtain a local two-dimensional line drawing corresponding to the preset part, wherein in the training of the generative adversarial network each sub-generator learns features of the preset part corresponding to that sub-generator; and
fusing the obtained local two-dimensional line drawings to generate the two-dimensional line drawing of the target face output from the generative adversarial network.
4. The method of claim 2, wherein the generative adversarial network comprises a generator and a discriminator;
the generative adversarial network is trained by:
inputting a face image in a training sample into the generator to obtain a two-dimensional line drawing output from the generator and corresponding to the face image, wherein the training sample further comprises a reference two-dimensional line drawing corresponding to the face image;
inputting the output two-dimensional line drawing and the reference two-dimensional line drawing into the discriminator, and judging, with the discriminator, whether the output two-dimensional line drawing and the reference two-dimensional line drawing are images of a same class;
if the output two-dimensional line drawing and the reference two-dimensional line drawing are judged not to be images of the same class, training the generator based on the output two-dimensional line drawing and the reference two-dimensional line drawing; and
if the output two-dimensional line drawing and the reference two-dimensional line drawing are judged to be images of the same class, taking the current generative adversarial network as the trained generative adversarial network.
5. The method of claim 1, wherein the generating a three-dimensional line drawing of the target face based on the line-drawing expansion map and a preset three-dimensional face model comprises:
determining, in the line-drawing expansion map, a local line-drawing expansion map containing preset facial features; and
generating the three-dimensional line drawing of the target face based on the local line-drawing expansion map and the preset three-dimensional face model.
6. The method of claim 5, wherein the generating the three-dimensional line drawing of the target face based on the local line drawing expansion map and the preset three-dimensional face model comprises:
updating the background color of the local line drawing expansion map to a preset color, wherein the original background color of the local line drawing expansion map differs from the preset color;
and combining the updated local line drawing expansion map with the preset three-dimensional face model to obtain the three-dimensional line drawing of the target face.
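Claim 6 amounts to recoloring the background of the unwrapped drawing and binding it to the preset model as a texture. In the sketch below, treating near-white pixels as background and the specific preset color are assumptions; the claim only requires the new background color to differ from the original.

```python
# Background recoloring and combination with the preset 3-D face model.
import numpy as np

def recolor_background(expansion_map: np.ndarray,
                       preset_color=(255, 235, 205)) -> np.ndarray:
    """Replace the background of an RGB expansion map with a preset color."""
    out = expansion_map.copy()
    background = (expansion_map > 240).all(axis=2)   # assumed background test
    out[background] = preset_color
    return out

def combine_with_model(expansion_map: np.ndarray, model_path: str) -> dict:
    """Bind the recolored map to the preset 3-D face model as its texture."""
    return {"mesh": model_path, "texture": expansion_map}

local_map = np.full((512, 512, 3), 255, dtype=np.uint8)  # stand-in expansion map
face_3d = combine_with_model(recolor_background(local_map), "preset_face.obj")
```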
7. An apparatus for generating an image, comprising:
an acquisition unit configured to acquire images of a target face respectively captured from at least two angles;
a style transfer unit configured to perform style transfer on the acquired images to obtain two-dimensional line drawings of the target face;
a first generation unit configured to fuse the two-dimensional line drawings and generate a line drawing expansion map corresponding to the fusion result;
and a second generation unit configured to generate a three-dimensional line drawing of the target face based on the line drawing expansion map and a preset three-dimensional face model.
8. The apparatus of claim 7, wherein the style transfer unit is further configured to perform the style transfer on the acquired images to obtain the two-dimensional line drawings of the target face as follows:
inputting the acquired images into a pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face output by the generative adversarial network.
9. The apparatus of claim 8, wherein the generative adversarial network comprises a generator, the generator comprising a preset number of sub-generators, one sub-generator for processing each preset part of an image containing a face;
the style transfer unit is further configured to perform the inputting of the acquired images into a pre-trained generative adversarial network to obtain the two-dimensional line drawings of the target face output by the generative adversarial network as follows:
segmenting each acquired image to obtain the preset number of preset parts in that image;
for each obtained preset part, inputting the preset part into the sub-generator for processing that preset part to obtain a local two-dimensional line drawing corresponding to the preset part, wherein during training of the generative adversarial network each sub-generator learns the features of its corresponding preset part;
and fusing the obtained local two-dimensional line drawings to generate the two-dimensional line drawing of the target face output by the generative adversarial network.
10. The apparatus of claim 8, wherein the generative adversarial network comprises a generator and a discriminator;
the generative adversarial network is trained as follows:
inputting a face image in a training sample into the generator to obtain a two-dimensional line drawing output by the generator and corresponding to the face image, wherein the training sample further comprises a reference two-dimensional line drawing corresponding to the face image;
inputting the output two-dimensional line drawing and the reference two-dimensional line drawing into the discriminator, and judging with the discriminator whether the two are the same type of image;
if they are judged not to be the same type of image, training the generator based on the output two-dimensional line drawing and the reference two-dimensional line drawing;
and if they are judged to be the same type of image, taking the current generative adversarial network as the trained generative adversarial network.
11. The apparatus of claim 7, wherein the second generation unit is further configured to perform the generating of the three-dimensional line drawing of the target face based on the line drawing expansion map and a preset three-dimensional face model as follows:
determining, in the line drawing expansion map, a local line drawing expansion map containing preset facial features;
and generating the three-dimensional line drawing of the target face based on the local line drawing expansion map and the preset three-dimensional face model.
12. The apparatus of claim 11, wherein the second generation unit is further configured to perform the generating of the three-dimensional line drawing of the target face based on the local line drawing expansion map and the preset three-dimensional face model as follows:
updating the background color of the local line drawing expansion map to a preset color, wherein the original background color of the local line drawing expansion map differs from the preset color;
and combining the updated local line drawing expansion map with the preset three-dimensional face model to obtain the three-dimensional line drawing of the target face.
13. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010112247.8A CN111340865B (en) | 2020-02-24 | 2020-02-24 | Method and apparatus for generating image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111340865A true CN111340865A (en) | 2020-06-26 |
CN111340865B CN111340865B (en) | 2023-04-07 |
Family
ID=71185516
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010112247.8A Active CN111340865B (en) | 2020-02-24 | 2020-02-24 | Method and apparatus for generating image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111340865B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004362441A (en) * | 2003-06-06 | 2004-12-24 | National Printing Bureau | Method and apparatus for simulating printed matter using two-dimensional data |
US20080297503A1 (en) * | 2007-05-30 | 2008-12-04 | John Dickinson | System and method for reconstructing a 3D solid model from a 2D line drawing |
US20190228587A1 (en) * | 2018-01-24 | 2019-07-25 | Google Llc | Image Style Transfer for Three-Dimensional Models |
CN108510437A (en) * | 2018-04-04 | 2018-09-07 | 科大讯飞股份有限公司 | A kind of virtual image generation method, device, equipment and readable storage medium storing program for executing |
CN110580677A (en) * | 2018-06-08 | 2019-12-17 | 北京搜狗科技发展有限公司 | Data processing method and device and data processing device |
CN109308681A (en) * | 2018-09-29 | 2019-02-05 | 北京字节跳动网络技术有限公司 | Image processing method and device |
Non-Patent Citations (1)
Title |
---|
Ning Ning; Jin Xin; Zhang Xiaokun; Li Yannan: "Illumination transfer for face images based on GAN" *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113873177A (en) * | 2020-06-30 | 2021-12-31 | 北京小米移动软件有限公司 | Multi-view shooting method and device, electronic equipment and storage medium |
CN112330534A (en) * | 2020-11-13 | 2021-02-05 | 北京字跳网络技术有限公司 | Animal face style image generation method, model training method, device and equipment |
CN113538218A (en) * | 2021-07-14 | 2021-10-22 | 浙江大学 | Weak pairing image style migration method based on pose self-supervision countermeasure generation network |
Similar Documents
Publication | Title |
---|---|
CN110189340B (en) | Image segmentation method and device, electronic equipment and storage medium |
CN111340865B (en) | Method and apparatus for generating image |
CN111476871B (en) | Method and device for generating video |
US11455765B2 (en) | Method and apparatus for generating virtual avatar |
CN112954450B (en) | Video processing method and device, electronic equipment and storage medium |
CN110827379A (en) | Virtual image generation method, device, terminal and storage medium |
CN109754464B (en) | Method and apparatus for generating information |
CN108363995A (en) | Method and apparatus for generating data |
CN110059624B (en) | Method and apparatus for detecting living body |
CN110059623B (en) | Method and apparatus for generating information |
CN112330527A (en) | Image processing method, image processing apparatus, electronic device, and medium |
CN110062157B (en) | Method and device for rendering image, electronic equipment and computer readable storage medium |
CN112308977B (en) | Video processing method, video processing device, and storage medium |
CN111862349A (en) | Virtual brush implementation method and device and computer readable storage medium |
CN114842120B (en) | Image rendering processing method, device, equipment and medium |
CN115311178A (en) | Image splicing method, device, equipment and medium |
CN111967397A (en) | Face image processing method and device, storage medium and electronic equipment |
CN108597034A (en) | Method and apparatus for generating information |
CN111314620A (en) | Photographing method and apparatus |
CN112991208B (en) | Image processing method and device, computer readable medium and electronic equipment |
CN111767456A (en) | Method and device for pushing information |
CN110084306B (en) | Method and apparatus for generating dynamic image |
CN109816791B (en) | Method and apparatus for generating information |
CN111314627B (en) | Method and apparatus for processing video frames |
CN108256477B (en) | Method and device for detecting human face |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |