CN113223103A - Method, device, electronic device and medium for generating sketch - Google Patents

Method, device, electronic device and medium for generating sketch

Info

Publication number
CN113223103A
Authority
CN
China
Prior art keywords
image
sketch
portrait
face
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110142946.1A
Other languages
Chinese (zh)
Inventor
高飞 (Gao Fei)
尚梅梅 (Shang Meimei)
朱静洁 (Zhu Jingjie)
李鹏 (Li Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Miaoji Technology Co ltd
Original Assignee
Hangzhou Miaoji Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Miaoji Technology Co ltd filed Critical Hangzhou Miaoji Technology Co ltd
Priority to CN202110142946.1A priority Critical patent/CN113223103A/en
Publication of CN113223103A publication Critical patent/CN113223103A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, an apparatus, an electronic device, and a medium for generating a sketch. In the method, an initial sketch image and a portrait image are acquired, both of which contain a face image of a target user; a first simple-stroke (line-drawing) image of the target user is generated from the initial sketch image, and a second simple-stroke image of the target user is generated from the portrait image; and a target sketch portrait of the face image is generated based on the first and second simple-stroke images. By applying the technical solution of the application, a reasonable and vivid portrait can be generated using different facial stroke types, which alleviates the problem that generated sketches are not realistic enough when only unpaired data sets are available.

Description

Method, device, electronic device and medium for generating sketch
Technical Field
The present application relates to image processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for generating a sketch.
Background
Face-to-sketch synthesis generates a sketch portrait from a face photograph and is widely used in electronic entertainment and related applications.
Further, the advent of generative adversarial networks (GANs) has brought breakthroughs to this field. Existing work treats face sketch synthesis as a paired image-to-image translation task, learning a photo-to-sketch mapping from photo–sketch pairs in existing data sets. Since paired data sets are difficult to obtain, it is important to learn face sketch generation from unpaired data sets. Current unpaired face sketch conversion methods mainly rely on generative adversarial networks and neural style transfer, but neither approach produces realistic sketch images of a user's face.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, an electronic device, and a medium for generating a sketch. According to one aspect of the embodiments of the present application, the sketch generation method includes:
acquiring an initial sketch image and a portrait image, wherein the initial sketch image and the portrait image contain a face image of a target user, and the initial sketch image is an image with a preset style;
generating a first simple-stroke image of the target user using the initial sketch image, and generating a second simple-stroke image of the target user using the portrait image;
and generating a target sketch portrait of the face image based on the first simple-stroke image and the second simple-stroke image.
Optionally, in another embodiment based on the above method of the present application, after acquiring the initial sketch image and the portrait image, the method further includes:
recognizing the initial sketch image and the portrait image with a preset image prediction model to obtain facial organ coordinates of the face image, where the facial organ coordinates include the coordinates of the left and right eye centers, the nose tip, and the mouth corners;
and performing an affine transformation on the coordinates of the left and right eye centers among the facial organ coordinates to obtain a face-aligned initial sketch image and a face-aligned portrait image.
Optionally, in another embodiment based on the above method of the present application, after obtaining the face-aligned initial sketch image and portrait image, the method further includes:
inputting the face-aligned initial sketch image into a first simple-stroke generation network model to obtain the first simple-stroke image;
and inputting the face-aligned portrait image into a second simple-stroke generation network model to obtain the second simple-stroke image.
Optionally, in another embodiment based on the above method of the present application, inputting the face-aligned portrait image into the second simple-stroke generation network model to obtain the second simple-stroke image includes:
inputting the face-aligned portrait image into the second simple-stroke generation network model to obtain a portrait simple-stroke grayscale image;
performing binarization on the portrait simple-stroke grayscale image to obtain a binarized simple-stroke image;
and generating at least one style of sketch image using the binarized simple-stroke image and the portrait simple-stroke grayscale image.
Optionally, in another embodiment based on the above method of the present application, generating at least one style of sketch image using the binarized simple-stroke image and the portrait simple-stroke grayscale image includes:
inputting the binarized simple-stroke image into a simple-stroke-to-sketch model to generate a high-definition sketch image of a first style;
and inputting the portrait simple-stroke grayscale image into the simple-stroke-to-sketch model to generate a high-definition sketch image of a second style.
Optionally, in another embodiment based on the above method of the present application, generating the target sketch portrait of the face image based on the second simple-stroke image includes:
inputting the first simple-stroke image into a sketch generation network to generate a first sketch image;
inputting the first sketch image into a BiSeNet network to obtain a face parsing map, where the face parsing map corresponds to the background, face, eyebrow, eye, mouth, hair, and boundary regions of the face;
and sequentially inputting each region of the face parsing map into a stroke classification network, and constraining the first sketch image with a preset loss function to generate the target sketch portrait of the face image.
According to another aspect of the embodiments of the present application, an apparatus for generating a sketch is provided, including:
an acquisition module configured to acquire an initial sketch image and a portrait image, where the initial sketch image and the portrait image contain a face image of a target user;
a first generation module configured to generate a first simple-stroke image of the target user using the initial sketch image and a second simple-stroke image of the target user using the portrait image;
and a second generation module configured to generate a target sketch portrait of the face image based on the first and second simple-stroke images.
According to another aspect of the embodiments of the present application, an electronic device is provided, including:
a memory for storing executable instructions; and
a processor configured to cooperate with the memory to execute the executable instructions so as to perform the operations of any of the sketch generation methods described above.
According to a further aspect of the embodiments of the present application, a computer-readable storage medium is provided for storing computer-readable instructions which, when executed, perform the operations of any of the sketch generation methods described above.
In the method, an initial sketch image and a portrait image are acquired, both of which contain a face image of the target user; a first simple-stroke image of the target user is generated from the initial sketch image, and a second simple-stroke image of the target user is generated from the portrait image; and a target sketch portrait of the face image is generated based on the first and second simple-stroke images. By applying the technical solution of the application, a reasonable and vivid portrait can be generated using different facial stroke types, which alleviates the problem that generated sketches are not realistic enough when only unpaired data sets are available.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flow diagram of the sketch generation method proposed in the present application;
FIG. 2 is a system architecture diagram of the sketch generation proposed in the present application;
FIG. 3 is a schematic diagram of a sketch generation apparatus according to the present application;
FIG. 4 is a schematic diagram of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, the technical solutions of the various embodiments of the present application may be combined with each other, provided that the combination can be implemented by a person skilled in the art; when technical solutions are contradictory or cannot be implemented, the combination should be considered non-existent and outside the protection scope of the present application.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear) in the embodiments of the present application are only used to explain the relative positional relationship, motion, and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
A method for sketch generation according to an exemplary embodiment of the present application is described below in connection with fig. 1-2. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
The application also provides a sketch generation method, a sketch generation device, a target terminal and a medium.
Fig. 1 schematically shows a flow diagram of a sketch generation method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101, acquiring an initial sketch image and a portrait image, wherein the initial sketch image and the portrait image comprise face images of target users, and the initial sketch image corresponds to an image with a preset style;
s102, generating a first simple pen image of a target user by using the initial sketch image, and generating a second simple pen image of the target user by using the portrait image;
and S103, generating target pixel portrayal of the face image based on the first simple pen image and the second simple pen image.
In the method, an initial sketch image and a portrait image can be obtained, wherein the initial sketch image and the portrait image comprise face images of target users; generating a first simple pen image of the target user by using the initial sketch image, and generating a second simple pen image of the target user by using the portrait image; and generating target pixel portrayal of the face image based on the first simple pen image and the second simple pen image. By applying the technical scheme of the application, reasonable and vivid portrait can be generated by using different facial stroke types, so that the problem that the generation of simple strokes is not practical enough under the condition of unpaired data sets is solved.
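For orientation, the three steps can be sketched in Python as follows; the functions below are toy stand-ins (Canny edges and simple blending), not the networks of the present application, and are only meant to show how S101 to S103 fit together.

```python
import cv2
import numpy as np

def toy_line_drawing(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for the simple-stroke (line-drawing) generation network of step S102."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)      # crude edge map instead of the AdaIN-based generator
    return 255 - edges                    # dark strokes on a white background

def toy_sketch_portrait(first_ld: np.ndarray, second_ld: np.ndarray) -> np.ndarray:
    """Toy stand-in for the sketch generation and stroke-refinement stage of step S103."""
    second_ld = cv2.resize(second_ld, (first_ld.shape[1], first_ld.shape[0]))
    return cv2.addWeighted(first_ld, 0.5, second_ld, 0.5, 0)

def sketch_pipeline(initial_sketch_path: str, portrait_path: str) -> np.ndarray:
    initial_sketch = cv2.imread(initial_sketch_path)   # S101: acquire both inputs
    portrait = cv2.imread(portrait_path)
    first_ld = toy_line_drawing(initial_sketch)        # S102: two simple-stroke images
    second_ld = toy_line_drawing(portrait)
    return toy_sketch_portrait(first_ld, second_ld)    # S103: target sketch portrait
```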
Optionally, in a possible implementation of the present application, after acquiring the initial sketch image and the portrait image, the method further includes:
recognizing the initial sketch image and the portrait image with a preset image prediction model to obtain facial organ coordinates of the face image, where the facial organ coordinates include the coordinates of the left and right eye centers, the nose tip, and the mouth corners;
and performing an affine transformation on the coordinates of the left and right eye centers among the facial organ coordinates to obtain a face-aligned initial sketch image and a face-aligned portrait image.
Further, FIG. 2 shows the architecture of the sketch generation method provided by the present application. After the initial sketch image and the portrait image are obtained, face bounding-box and keypoint detection can be performed on the given initial sketch image and portrait photo by a face keypoint prediction model, yielding the face bounding-box information of the initial sketch image and the portrait photo and the position coordinates of five keypoints (the left and right eye centers, the nose tip, and the two mouth corners).
Furthermore, the present application can align the faces by applying an affine transformation to the position coordinates of the left and right eye centers among the facial keypoints. Specifically, the horizontal deviation angle of the two eye centers can be computed from their vertical-axis coordinates, and the image is rotated so that the two eye centers are horizontal. In addition, the distance between the two eyes is kept fixed by scaling; in one implementation, the inter-eye distance is set to 150 pixels. This finally yields the aligned portrait photo $S \in \mathbb{R}^{H \times W \times C}$, where H, W, and C are the height, width, and number of channels of the photo, respectively.
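A minimal Python/OpenCV sketch of this alignment step, assuming the left and right eye centers have already been predicted by the keypoint model; the 150-pixel inter-eye distance follows the description above, while the choice of rotation center (the midpoint between the eyes) is an illustrative assumption.

```python
import cv2
import numpy as np

def align_face(image: np.ndarray, left_eye: tuple, right_eye: tuple,
               eye_dist: float = 150.0) -> np.ndarray:
    """Rotate so the eye centers are horizontal and scale so their distance equals eye_dist."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    # Horizontal deviation angle of the two eye centers, from the vertical-axis difference.
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))
    # Scale factor that fixes the inter-eye distance.
    scale = eye_dist / np.hypot(rx - lx, ry - ly)
    eyes_center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    # Affine matrix (rotation + scaling) about the midpoint between the eyes.
    M = cv2.getRotationMatrix2D(eyes_center, angle, scale)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
```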
Optionally, in a possible implementation of the present application, after obtaining the face-aligned initial sketch image and portrait image, the method further includes:
inputting the face-aligned initial sketch image into a first simple-stroke generation network model to obtain a first simple-stroke image;
and inputting the face-aligned portrait image into a second simple-stroke generation network model to obtain a second simple-stroke image.
Optionally, in a possible implementation of the present application, inputting the face-aligned portrait image into the second simple-stroke generation network model to obtain the second simple-stroke image includes:
inputting the face-aligned portrait image into the second simple-stroke generation network model to obtain a portrait simple-stroke grayscale image;
performing binarization on the portrait simple-stroke grayscale image to obtain a binarized simple-stroke image;
and generating at least one style of sketch image using the binarized simple-stroke image and the portrait simple-stroke grayscale image.
Optionally, in a possible implementation of the present application, generating at least one style of sketch image using the binarized simple-stroke image and the portrait simple-stroke grayscale image includes:
inputting the binarized simple-stroke image into a simple-stroke-to-sketch model to generate a high-definition sketch image of a first style;
and inputting the portrait simple-stroke grayscale image into the simple-stroke-to-sketch model to generate a high-definition sketch image of a second style.
Furthermore, AdaIN can be used as the generator; it consists of an encoder, an adaptive instance normalization module, and a decoder, where the encoder parameters come from a pre-trained VGG-Face model. The face-aligned initial sketch image is taken as the content image, and a simple-stroke-style sample from the database is taken as the style image; both are input into the pre-trained simple-stroke generation network to obtain the first simple-stroke image corresponding to the sketch image.
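The core AdaIN operation between the encoder and the decoder can be sketched in PyTorch as follows; the VGG-Face encoder and the decoder are omitted, and this is generic adaptive instance normalization rather than the exact generator of the application.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Align channel-wise mean/std of the content features to those of the style features."""
    # Features are (N, C, H, W); statistics are computed per sample and per channel.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content_feat - c_mean) / c_std
    return normalized * s_std + s_mean

# Usage: encode the aligned sketch (content) and a database line-drawing (style) with the
# VGG encoder, apply adain() to their features, then decode the result to an image.
```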
In addition, the face-aligned portrait photo is input into the simple-stroke generation network to generate simple-stroke images of different styles. The aligned portrait photo can be used as the content image and a simple-stroke picture as the style image, and both are input into the simple-stroke generation network to generate a portrait simple-stroke grayscale image. The simple-stroke image is then binarized so as to generate a high-definition sketch of another style. Concretely, the simple-stroke image is first blurred by a mean filter, i.e., the central pixel value of each square window is replaced by the average of the pixel values in that window; the present invention sets the window size to 3 × 3. The blurred image is then passed through a Sigmoid function, which maps the pixel range to [0, 1] and achieves a good binarization effect. The formula is:

$$f(x) = \frac{1}{1 + e^{-x}},$$

where e denotes the natural constant and x denotes the blurred image.
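A NumPy/OpenCV sketch of the blur-then-Sigmoid binarization described above; the 3 × 3 averaging window and the Sigmoid form follow the text, while the scaling and centering of pixel values before the Sigmoid (parameters k and threshold) are illustrative assumptions needed to obtain a near-binary output.

```python
import cv2
import numpy as np

def binarize_line_drawing(gray: np.ndarray, k: float = 20.0, threshold: float = 0.5) -> np.ndarray:
    """Blur with a 3x3 mean filter, then squash pixel values into [0, 1] with a sigmoid."""
    # 3x3 mean (average) filter, then scale pixel values to [0, 1].
    blurred = cv2.blur(gray, (3, 3)).astype(np.float32) / 255.0
    # Plain sigmoid 1 / (1 + e^(-x)) applied to a centered, steepened input so that
    # values land close to either 0 or 1 (the binarization effect described above).
    return 1.0 / (1.0 + np.exp(-k * (blurred - threshold)))
```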
Optionally, in a possible implementation of the present application, generating the target sketch portrait of the face image based on the second simple-stroke image includes:
inputting the first simple-stroke image into a sketch generation network to generate a first sketch image;
inputting the first sketch image into a BiSeNet network to obtain a face parsing map, where the face parsing map corresponds to the background, face, eyebrow, eye, mouth, hair, and boundary regions of the face;
and sequentially inputting each region of the face parsing map into the stroke classification network, and constraining the first sketch image with a preset loss function to generate the target sketch portrait of the face image.
Furthermore, the sketch generation network in the present application can adopt pix2pixHD, which consists of a generator and a discriminator. The generator comprises downsampling layers, residual blocks (ResBlocks), and upsampling layers; the generator is responsible for producing as realistic an image as possible, while the discriminator is responsible for distinguishing generated images from real ones. The simple-stroke image is input into the sketch generation network to generate the corresponding high-definition sketch.
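A compact PyTorch sketch of a generator with the structure named above (downsampling layers, residual blocks, upsampling layers); the layer counts and channel widths are illustrative and not the exact pix2pixHD configuration.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection

class SketchGenerator(nn.Module):
    """Downsample -> ResBlocks -> upsample, in the spirit of pix2pixHD's global generator."""
    def __init__(self, in_ch=1, out_ch=1, base=64, n_down=3, n_blocks=6):
        super().__init__()
        layers = [nn.ReflectionPad2d(3), nn.Conv2d(in_ch, base, 7), nn.InstanceNorm2d(base), nn.ReLU(True)]
        ch = base
        for _ in range(n_down):  # downsampling layers
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.InstanceNorm2d(ch * 2), nn.ReLU(True)]
            ch *= 2
        layers += [ResBlock(ch) for _ in range(n_blocks)]  # residual blocks
        for _ in range(n_down):  # upsampling layers
            layers += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2, padding=1, output_padding=1),
                       nn.InstanceNorm2d(ch // 2), nn.ReLU(True)]
            ch //= 2
        layers += [nn.ReflectionPad2d(3), nn.Conv2d(ch, out_ch, 7), nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```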
Furthermore, the generated high-definition sketch can be constrained by a loss function so that the brush strokes and textures become more vivid. When drawing, a painter uses different strokes for different facial components: smooth, continuous long lines are generally used for hair and facial contours, fine short lines for eyebrows and lips, and flat and layered shading for the subtle gradations of light and shadow. To achieve this visual effect, the present application first divides the brush strokes of the sketch into 7 categories according to the face regions (i.e., the regions of the face parsing map): background, face, eyebrows, eyes, mouth, hair, and border. The image is then cropped into blocks according to these face regions and input into the stroke classification network. The stroke classification network adopts DenseNet, whose j-th layer can be written as

$$x_j = H_j\big([x_0, x_1, \ldots, x_{j-1}]\big),$$

where $H_j$ denotes the j-th layer of the stroke classification network and $[\cdot]$ denotes the concatenation of all preceding feature maps (the standard DenseNet connectivity).
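One possible way to crop the seven stroke regions from the parsing map and score them with a DenseNet-based classifier is sketched below; the torchvision densenet121 backbone, the 7-way head, and the masking-based cropping are illustrative assumptions, not details fixed by the text.

```python
import torch
import torch.nn as nn
from torchvision import models

STROKE_CLASSES = ["background", "face", "eyebrows", "eyes", "mouth", "hair", "border"]

class StrokeClassifier(nn.Module):
    """DenseNet backbone with a 7-way head, one class per facial stroke region."""
    def __init__(self, num_classes=len(STROKE_CLASSES)):
        super().__init__()
        self.backbone = models.densenet121()  # expects 3-channel input
        self.backbone.classifier = nn.Linear(self.backbone.classifier.in_features, num_classes)

    def forward(self, x):
        return self.backbone(x)

def crop_region(sketch: torch.Tensor, parsing: torch.Tensor, region_id: int) -> torch.Tensor:
    """Mask the sketch (N, C, H, W) with one region of the parsing map (N, H, W)."""
    mask = (parsing == region_id).unsqueeze(1).float()
    return sketch * mask  # repeat a grayscale sketch to 3 channels before the classifier if needed
```

A possible reading of the "stroke loss" is then a cross-entropy term between the classifier's prediction for each masked region and that region's label.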
Still further, in order to make the generated image and the real image more similar in detail, the respective pixel moments of the two images are computed and the loss between them is calculated, so that the generated image is pushed closer to the real image by this loss constraint. The pixel moment is a simple and effective pixel feature representation, comprising the first moment (mean), the second moment (variance), and the third moment (skewness).
The second moment reflects the non-uniformity (variance) of the pixels in the measured region and is calculated as

$$\sigma(y) = \left(\frac{1}{N}\sum_{i,j}\big(P_{ij} - \mu\big)^{2}\right)^{1/2},$$

where N is the total number of pixels in the image, $P_{ij}$ is the pixel value at each point, and $\mu$ is the first moment (mean). The second-moment loss is defined as the Euclidean distance between the second moment $\sigma(y_i)$ of the real sketch and the second moment $\sigma(G(y_i))$ of the generated sketch, i.e.:

$$\mathcal{L}_{\sigma} = \big\|\sigma(y_i) - \sigma(G(y_i))\big\|_2.$$
The third moment describes the skewness of the pixel distribution, i.e., the asymmetry of the pixels, and is calculated as

$$\xi(y) = \left(\frac{1}{N}\sum_{i,j}\big(P_{ij} - \mu\big)^{3}\right)^{1/3}.$$

The third-moment loss is defined as the Euclidean distance between the third moment $\xi(y_i)$ of the real sketch and the third moment $\xi(G(y_i))$ of the generated sketch, i.e.:

$$\mathcal{L}_{\xi} = \big\|\xi(y_i) - \xi(G(y_i))\big\|_2.$$
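The pixel moments and the two moment losses reconstructed above can be computed as follows; the root-based moment definitions follow the common colour-moment convention and are an assumption where the original formula images are not readable.

```python
import numpy as np

def pixel_moments(img: np.ndarray):
    """First (mean), second (spread) and third (skew) pixel moments of an image."""
    p = img.astype(np.float64).ravel()
    n = p.size
    mu = p.mean()                               # first moment
    sigma = np.sqrt(np.sum((p - mu) ** 2) / n)  # second moment
    xi = np.cbrt(np.sum((p - mu) ** 3) / n)     # third moment (signed cube root)
    return mu, sigma, xi

def moment_losses(real_sketch: np.ndarray, generated_sketch: np.ndarray):
    """Distances between the second and third moments of real vs. generated sketches."""
    _, sigma_r, xi_r = pixel_moments(real_sketch)
    _, sigma_g, xi_g = pixel_moments(generated_sketch)
    loss_sigma = abs(sigma_r - sigma_g)  # ||sigma(y) - sigma(G(y))||_2 for scalar moments
    loss_xi = abs(xi_r - xi_g)
    return loss_sigma, loss_xi
```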
the method can be used for solving the problems of stroke loss, second moment loss, third moment loss and loss functions (including countermeasure loss) of pix2pixHD
Figure RE-GDA0003087009820000095
And feature matching loss
Figure RE-GDA0003087009820000096
) The combination constitutes the overall loss function, expressed as follows:
Figure RE-GDA0003087009820000097
wherein λ isiI is 1, 2, 3; 4 are the weighting coefficients of the respective loss functions.
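A short sketch of the overall objective as reconstructed above; the individual loss terms are assumed to be precomputed scalars, and the weights λ1 to λ4 are unspecified hyperparameters.

```python
def total_loss(l_stroke, l_sigma, l_xi, l_gan, l_fm,
               lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Weighted combination of stroke, second-moment, third-moment and pix2pixHD losses."""
    l1, l2, l3, l4 = lambdas
    return l1 * l_stroke + l2 * l_sigma + l3 * l_xi + l4 * (l_gan + l_fm)
```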
In another embodiment of the present application, as shown in FIG. 3, the present application further provides a sketch generation apparatus, which comprises an acquisition module 201, a first generation module 202, and a second generation module 203, wherein:
the acquisition module 201 is configured to acquire an initial sketch image and a portrait image, where the initial sketch image and the portrait image contain a face image of a target user;
the first generation module 202 is configured to generate a first simple-stroke image of the target user using the initial sketch image and a second simple-stroke image of the target user using the portrait image;
and the second generation module 203 is configured to generate a target sketch portrait of the face image based on the first and second simple-stroke images.
In the method, an initial sketch image and a portrait image are acquired, both of which contain a face image of the target user; a first simple-stroke image of the target user is generated from the initial sketch image, and a second simple-stroke image of the target user is generated from the portrait image; and a target sketch portrait of the face image is generated based on the first and second simple-stroke images. By applying the technical solution of the application, a reasonable and vivid portrait can be generated using different facial stroke types, which alleviates the problem that generated sketches are not realistic enough when only unpaired data sets are available.
In another embodiment of the present application, the acquisition module 201 is further configured to:
recognize the initial sketch image and the portrait image with a preset image prediction model to obtain facial organ coordinates of the face image, where the facial organ coordinates include the coordinates of the left and right eye centers, the nose tip, and the mouth corners;
and perform an affine transformation on the coordinates of the left and right eye centers among the facial organ coordinates to obtain a face-aligned initial sketch image and a face-aligned portrait image.
In another embodiment of the present application, the acquisition module 201 is further configured to:
input the face-aligned initial sketch image into a first simple-stroke generation network model to obtain the first simple-stroke image;
and input the face-aligned portrait image into a second simple-stroke generation network model to obtain the second simple-stroke image.
In another embodiment of the present application, the acquisition module 201 is further configured to:
input the face-aligned portrait image into the second simple-stroke generation network model to obtain a portrait simple-stroke grayscale image;
perform binarization on the portrait simple-stroke grayscale image to obtain a binarized simple-stroke image;
and generate at least one style of sketch image using the binarized simple-stroke image and the portrait simple-stroke grayscale image.
In another embodiment of the present application, the acquisition module 201 is further configured to:
input the binarized simple-stroke image into a simple-stroke-to-sketch model to generate a high-definition sketch image of a first style;
and input the portrait simple-stroke grayscale image into the simple-stroke-to-sketch model to generate a high-definition sketch image of a second style.
In another embodiment of the present application, the acquisition module 201 is further configured to:
input the first simple-stroke image into a sketch generation network to generate a first sketch image;
input the first sketch image into a BiSeNet network to obtain a face parsing map, where the face parsing map corresponds to the background, face, eyebrow, eye, mouth, hair, and boundary regions of the face;
and sequentially input each region of the face parsing map into a stroke classification network, and constrain the first sketch image with a preset loss function to generate the target sketch portrait of the face image.
Fig. 4 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as a memory, is also provided, which includes instructions executable by a processor of an electronic device to perform the above sketch generation method, the method comprising: acquiring an initial sketch image and a portrait image, where the initial sketch image and the portrait image contain a face image of a target user, and the initial sketch image is an image with a preset style; generating a first simple-stroke image of the target user using the initial sketch image, and generating a second simple-stroke image of the target user using the portrait image; and generating a target sketch portrait of the face image based on the first simple-stroke image and the second simple-stroke image. Optionally, the instructions may also be executable by the processor of the electronic device to perform the other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, an application/computer program product is also provided, which includes one or more instructions executable by a processor of an electronic device to perform the sketch generation method described above, the method comprising: acquiring an initial sketch image and a portrait image, where the initial sketch image and the portrait image contain a face image of a target user, and the initial sketch image is an image with a preset style; generating a first simple-stroke image of the target user using the initial sketch image, and generating a second simple-stroke image of the target user using the portrait image; and generating a target sketch portrait of the face image based on the first simple-stroke image and the second simple-stroke image. Optionally, the instructions may also be executable by the processor of the electronic device to perform the other steps involved in the exemplary embodiments described above.
Fig. 4 is an exemplary diagram of the computer device 30. Those skilled in the art will appreciate that FIG. 4 is merely an example of the computer device 30 and does not constitute a limitation on the computer device 30, which may include more or fewer components than shown, combine certain components, or have different components; for example, the computer device 30 may also include input/output devices, network access devices, buses, and the like.
The processor 302 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 302 may be any conventional processor. The processor 302 is the control center of the computer device 30 and connects the various parts of the whole computer device 30 using various interfaces and lines.
The memory 301 may be used to store the computer-readable instructions 303, and the processor 302 implements the various functions of the computer device 30 by running or executing the computer-readable instructions or modules stored in the memory 301 and by invoking data stored in the memory 301. The memory 301 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the computer device 30. In addition, the memory 301 may include a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, a read-only memory (ROM), a random access memory (RAM), or another non-volatile/volatile storage device.
If the modules integrated by the computer device 30 are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may also be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a computer-readable storage medium, and when executed by a processor, may implement the steps of the method embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A method for generating a sketch, comprising:
acquiring an initial sketch image and a portrait image, wherein the initial sketch image and the portrait image contain a face image of a target user, and the initial sketch image is an image with a preset style;
generating a first simple-stroke image of the target user using the initial sketch image, and generating a second simple-stroke image of the target user using the portrait image;
and generating a target sketch portrait of the face image based on the first simple-stroke image and the second simple-stroke image.
2. The method of claim 1, further comprising, after acquiring the initial sketch image and the portrait image:
recognizing the initial sketch image and the portrait image with a preset image prediction model to obtain facial organ coordinates of the face image, wherein the facial organ coordinates comprise the coordinates of the left and right eye centers, the nose tip, and the mouth corners;
and performing an affine transformation on the coordinates of the left and right eye centers among the facial organ coordinates to obtain a face-aligned initial sketch image and a face-aligned portrait image.
3. The method of claim 2, further comprising, after obtaining the face-aligned initial sketch image and portrait image:
inputting the face-aligned initial sketch image into a first simple-stroke generation network model to obtain a first simple-stroke image;
and inputting the face-aligned portrait image into a second simple-stroke generation network model to obtain the second simple-stroke image.
4. The method of claim 3, wherein inputting the face-aligned portrait image into the second simple-stroke generation network model to obtain the second simple-stroke image comprises:
inputting the face-aligned portrait image into the second simple-stroke generation network model to obtain a portrait simple-stroke grayscale image;
performing binarization on the portrait simple-stroke grayscale image to obtain a binarized simple-stroke image;
and generating at least one style of sketch image using the binarized simple-stroke image and the portrait simple-stroke grayscale image.
5. The method of claim 4, wherein generating at least one style of sketch image using the binarized simple-stroke image and the portrait simple-stroke grayscale image comprises:
inputting the binarized simple-stroke image into a simple-stroke-to-sketch model to generate a high-definition sketch image of a first style;
and inputting the portrait simple-stroke grayscale image into the simple-stroke-to-sketch model to generate a high-definition sketch image of a second style.
6. The method of claim 4, wherein generating the target sketch portrait of the face image based on the second simple-stroke image comprises:
inputting the first simple-stroke image into a sketch generation network to generate a first sketch image;
inputting the first sketch image into a BiSeNet network to obtain a face parsing map, wherein the face parsing map corresponds to the background, face, eyebrow, eye, mouth, hair, and boundary regions of the face;
and sequentially inputting each region of the face parsing map into a stroke classification network, and constraining the first sketch image with a preset loss function to generate the target sketch portrait of the face image.
7. An apparatus for generating a sketch, comprising:
an acquisition module configured to acquire an initial sketch image and a portrait image, wherein the initial sketch image and the portrait image contain a face image of a target user;
a first generation module configured to generate a first simple-stroke image of the target user using the initial sketch image and a second simple-stroke image of the target user using the portrait image;
and a second generation module configured to generate a target sketch portrait of the face image based on the first and second simple-stroke images.
8. An electronic device, comprising:
a memory for storing executable instructions; and
a processor configured to cooperate with the memory to execute the executable instructions so as to perform the operations of the sketch generation method of any one of claims 1 to 6.
9. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the sketch generation method of any one of claims 1 to 6.
CN202110142946.1A 2021-02-02 2021-02-02 Method, device, electronic device and medium for generating sketch Pending CN113223103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110142946.1A CN113223103A (en) 2021-02-02 2021-02-02 Method, device, electronic device and medium for generating sketch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110142946.1A CN113223103A (en) 2021-02-02 2021-02-02 Method, device, electronic device and medium for generating sketch

Publications (1)

Publication Number Publication Date
CN113223103A true CN113223103A (en) 2021-08-06

Family

ID=77084564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110142946.1A Pending CN113223103A (en) 2021-02-02 2021-02-02 Method, device, electronic device and medium for generating sketch

Country Status (1)

Country Link
CN (1) CN113223103A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200371676A1 (en) * 2015-06-07 2020-11-26 Apple Inc. Device, Method, and Graphical User Interface for Providing and Interacting with a Virtual Drawing Aid
CN107945244A (en) * 2017-12-29 2018-04-20 哈尔滨拓思科技有限公司 A kind of simple picture generation method based on human face photo
CN110222588A (en) * 2019-05-15 2019-09-10 合肥进毅智能技术有限公司 A kind of human face sketch image aging synthetic method, device and storage medium
CN111243051A (en) * 2020-01-08 2020-06-05 浙江省北大信息技术高等研究院 Portrait photo-based stroke generating method, system and storage medium
CN111243050A (en) * 2020-01-08 2020-06-05 浙江省北大信息技术高等研究院 Portrait simple stroke generation method and system and drawing robot
CN111508048A (en) * 2020-05-22 2020-08-07 南京大学 Automatic generation method for human face cartoon with interactive arbitrary deformation style

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FEI GAO et al.: "Bridging Unpaired Facial Photos And Sketches By Line-drawings", arXiv:2102.00635v1 *
ZHU JINGJIE (朱静洁): "Image Style Transfer Method Based on Deep Learning", Wanfang Database *
WANG CHENG (王铖): "Research and Application of Image Transfer Algorithms Based on Improved PatchMatch", China Master's Theses Full-text Database (Master), Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363160A (en) * 2023-05-30 2023-06-30 杭州脉流科技有限公司 CT perfusion image brain tissue segmentation method and computer equipment based on level set
CN116363160B (en) * 2023-05-30 2023-08-29 杭州脉流科技有限公司 CT perfusion image brain tissue segmentation method and computer equipment based on level set

Similar Documents

Publication Publication Date Title
CN110136243B (en) Three-dimensional face reconstruction method, system, device and storage medium thereof
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN109952594B (en) Image processing method, device, terminal and storage medium
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN112819947A (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN111243050B (en) Portrait simple drawing figure generation method and system and painting robot
WO2021253788A1 (en) Three-dimensional human body model construction method and apparatus
US10891789B2 (en) Method to produce 3D model from one or several images
CN111767760A (en) Living body detection method and apparatus, electronic device, and storage medium
CN111008935B (en) Face image enhancement method, device, system and storage medium
JP2018055470A (en) Facial expression recognition method, facial expression recognition apparatus, computer program, and advertisement management system
CN113570684A (en) Image processing method, image processing device, computer equipment and storage medium
CN111460937B (en) Facial feature point positioning method and device, terminal equipment and storage medium
CN104537705A (en) Augmented reality based mobile platform three-dimensional biomolecule display system and method
CN111243051A (en) Portrait photo-based stroke generating method, system and storage medium
CN112699857A (en) Living body verification method and device based on human face posture and electronic equipment
CN108537162A (en) The determination method and apparatus of human body attitude
CN108549484B (en) Man-machine interaction method and device based on human body dynamic posture
CN113223103A (en) Method, device, electronic device and medium for generating sketch
CN111275610B (en) Face aging image processing method and system
CN112507766B (en) Face image extraction method, storage medium and terminal equipment
CN110059739B (en) Image synthesis method, image synthesis device, electronic equipment and computer-readable storage medium
CN112712460A (en) Portrait generation method and device, electronic equipment and medium
CN109460690A (en) A kind of method and apparatus for pattern-recognition

Legal Events

Code: Title / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210806)