WO2022257677A1 - Image processing method, apparatus, device, and storage medium

Image processing method, apparatus, device, and storage medium

Info

Publication number
WO2022257677A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
facial
target
organ
processed
Application number
PCT/CN2022/091681
Other languages
English (en)
French (fr)
Inventor
刘礼杰
华淼
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Priority to US 18/568,745 (published as US20240273688A1)
Publication of WO2022257677A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/802D [Two Dimensional] animation, e.g. using sprites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • Embodiments of the present disclosure relate to the technical field of image processing, and in particular, to an image processing method, apparatus, device, and storage medium.
  • In one aspect, an image processing method includes: acquiring a facial image to be processed; and smearing the target facial organs in the facial image to be processed based on a pre-trained smearing model to obtain a facial smear image corresponding to the facial image to be processed.
  • the smearing model is obtained based on the first facial image in which the target facial organ is not smeared and the second facial image obtained after the target facial organ in the first facial image is smeared, wherein,
  • the second facial image is generated based on a preset image generation model, and the image generation model is trained based on the target texture image and the target facial image.
  • the target texture image includes a skin image
  • the target texture image is obtained by expanding the skin image on a target area in the first facial image, and the target area includes a forehead area.
  • the forehead area is determined based on eyebrow key points and forehead contour key points on the first facial image.
  • the target facial image is a mask of the target facial organ, determined based on the target facial organ.
  • the mask of the target facial organ is determined based on key points of the target facial organ in the first facial image.
  • the expansion processing of the skin image on the target area includes:
  • performing mirror reflection processing on the skin image on the target area to obtain a reflection image, and splicing the reflection image with the skin image on the target area.
  • the expansion processing of the skin image on the target area includes:
  • performing copying processing on the skin image on the target area, and splicing the multiple copied images obtained by copying.
  • in some embodiments, after the target facial organs in the facial image to be processed are smeared based on the pre-trained smearing model to obtain the facial smear image corresponding to the facial image to be processed, the method further includes:
  • extracting a first organ image corresponding to the target facial organ in the facial image to be processed; adjusting the shape and/or size of the target facial organ in the first organ image to obtain a second organ image; and adding the second organ image to the facial smear image.
  • in some embodiments, after the target facial organs in the facial image to be processed are smeared based on the pre-trained smearing model to obtain the facial smear image corresponding to the facial image to be processed, the method further includes:
  • migrating a preset animation to the facial smear image to obtain a dynamic image.
  • an image processing device including:
  • An image acquisition unit configured to acquire a facial image to be processed
  • a smear processing unit configured to smear the target facial organs in the facial image to be processed based on a pre-trained smear model, to obtain a facial smear image corresponding to the facial image to be processed;
  • the smearing model is obtained based on the first facial image in which the target facial organ is not smeared and the second facial image obtained after the target facial organ in the first facial image is smeared, wherein,
  • the second facial image is generated based on a preset image generation model, and the image generation model is trained based on the target texture image and the target facial image.
  • the target texture image includes a skin image, and the skin image is obtained by expanding the skin image on the target area in the first facial image;
  • the target area includes the forehead area.
  • the forehead area is determined based on eyebrow key points and forehead contour key points on the first facial image.
  • the target facial image is a mask of the target facial organ, determined based on the target facial organ.
  • the mask of the target facial organ is determined based on key points of the target facial organ in the first facial image.
  • the expansion processing of the skin image on the target area includes:
  • performing mirror reflection processing on the skin image on the target area to obtain a reflection image, and splicing the reflection image with the skin image on the target area.
  • the expansion processing of the skin image on the target area includes:
  • performing copying processing on the skin image on the target area, and splicing the multiple copied images obtained by copying.
  • the device also includes:
  • An organ image extraction unit configured to extract a first organ image corresponding to the target facial organ in the facial image to be processed
  • An organ image adjustment unit configured to adjust the shape and/or size of the target facial organ in the first organ image to obtain a second organ image
  • a first image adding unit configured to add the second organ image to the facial smear image.
  • the device also includes:
  • a second image adding unit configured to migrate the preset animation to the facial smear image to obtain a dynamic image.
  • the present disclosure provides an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method described in any one of the preceding items is implemented.
  • the present disclosure provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method described in any one of the preceding items is implemented.
  • the image processing method, apparatus, device, and storage medium provided by the embodiments of the present disclosure acquire a facial image to be processed and smear the target facial organs in the facial image to be processed based on a pre-trained smear model to obtain a facial smear image.
  • the solutions provided by the embodiments of the present disclosure can make the image more interesting by smearing the target facial organs in the facial image to be processed, thereby improving user experience.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
  • FIG. 2 is a facial image to be processed provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of a facial smear image provided by an embodiment of the present disclosure;
  • FIG. 4 is a facial image used to train an image generation model provided by an embodiment of the present disclosure;
  • FIG. 5 is a flowchart of an image processing method provided by another embodiment of the present disclosure;
  • FIG. 6 is the facial smear image determined through steps S101-S105;
  • FIG. 7 is a flowchart of an image processing method provided by some further embodiments of the present disclosure;
  • FIG. 8 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure;
  • FIG. 9 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
  • FIG. 1 is a flow chart of an image processing method provided by an embodiment of the present disclosure.
  • the facial image processing method shown in FIG. 1 can be executed by an electronic device capable of image processing.
  • the electronic device can be, for example, a mobile phone, a tablet computer, a desktop computer, or an all-in-one machine.
  • the image processing method provided by the embodiment of the present disclosure includes steps S101-S102.
  • Step S101 Obtain a facial image to be processed.
  • the facial image to be processed may be a facial image of a person, or a facial image of an animal, which is not particularly limited in this embodiment of the present disclosure.
  • the electronic device can obtain the facial image to be processed in a preset manner, including but not limited to photographing, downloading, and loading from local storage.
  • the facial image to be processed includes pixels of the face of the object and other scene pixels; for example, in one example, the facial image to be processed may include an image of the upper body of a person and a background image.
  • Step S102 Perform smearing processing on the target facial organs in the facial image to be processed based on the smearing model obtained in advance to obtain a facial smearing image corresponding to the facial image to be processed.
  • the smear model is a model used to smear the target facial organ in the facial image to be processed so as to change the pixel features of the area where the target facial organ is located.
  • after the facial image to be processed is input to the smear model, the smear model identifies the pixel area of the target facial organs in the facial image to be processed and smears the area where the target facial organs are located, so that the area is covered with the target texture.
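  • The smear model in the disclosure is a trained network; purely as a hedged illustration of the effect it produces (pixels inside the organ region replaced by the target texture), the following NumPy sketch uses illustrative names and toy data, not the actual model:

```python
import numpy as np

def smear_with_texture(image, organ_mask, texture):
    """Replace the pixels inside the organ mask with the target texture.

    image:      H x W x 3 uint8 array, the facial image to be processed
    organ_mask: H x W bool array, True where the target facial organs lie
    texture:    H x W x 3 uint8 array, target texture aligned with the image
    """
    smeared = image.copy()
    smeared[organ_mask] = texture[organ_mask]  # smear only the organ area
    return smeared

# Toy 4x4 "face": the target facial organs occupy the centre 2x2 block.
image = np.zeros((4, 4, 3), dtype=np.uint8)
organ_mask = np.zeros((4, 4), dtype=bool)
organ_mask[1:3, 1:3] = True
texture = np.full((4, 4, 3), 200, dtype=np.uint8)  # uniform "skin" colour

smeared = smear_with_texture(image, organ_mask, texture)
```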
  • Fig. 2 is a facial image to be processed provided by an embodiment of the present disclosure.
  • the facial image to be processed is a facial image 201 of a person object 20 .
  • FIG. 3 is a schematic diagram of a facial smear image provided by an embodiment of the present disclosure; it shows the facial smear image obtained after the facial image of FIG. 2 is processed by the smear model.
  • the target facial organs used for smearing in some embodiments of the present disclosure may include the eyebrows, eyes, nose, and mouth; the combined masks of the eyebrows, eyes, nose, and mouth form the T-shaped region 301 in FIG. 3.
  • the T-shaped area 301 is smeared with the target texture.
  • the image processing method provided by the embodiment of the present disclosure uses a smear model to process the facial image to be processed to obtain a facial smear image. By integrating the method of steps S101 and S102 into an application program or software, an electronic device that installs the application program or software can smear the user's facial image to obtain the smeared facial image, changing the facial pixels of the image to be processed and improving the fun of the image and the user experience.
  • the smear model is trained based on the first facial image and the second facial image; the target facial organs in the first facial image are not smeared, and the second facial image is obtained by smearing the target facial organs in the first facial image.
  • the first facial image and the corresponding second facial image constitute a sample image pair, and multiple sample image pairs can be used to train the smear model described in the present disclosure.
  • the parameters of the smear model are first randomly initialized.
  • the sample image pairs are then input into the initialized smear model, and the parameters of the smear model are trained and adjusted.
  • the test image is used to test the parameter-adjusted smear model. If the test results show that the trained model meets the preset accuracy requirements, the training of the smear model is completed.
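  • The smear model itself is a neural network trained on (first image, second image) pairs; the initialise, adjust, and test procedure above can be illustrated on a deliberately tiny stand-in (a per-pixel linear map trained by gradient descent). All numbers and the transform below are illustrative, not from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sample image pairs": pixels x from first images, pixels y from
# second images, where the smearing transform to learn is y = 0.5*x + 0.1
# (intensities normalised to [0, 1]).
xs = rng.uniform(0.0, 1.0, size=64)
ys = 0.5 * xs + 0.1

# Randomly initialise the model parameters, as in the first training step.
w, b = rng.normal(), rng.normal()

lr = 0.5
for _ in range(2000):                      # train and adjust the parameters
    pred = w * xs + b
    w -= lr * 2.0 * np.mean((pred - ys) * xs)
    b -= lr * 2.0 * np.mean(pred - ys)

# Test the parameter-adjusted model against an accuracy requirement.
test_x = 0.8
test_error = abs((w * test_x + b) - (0.5 * test_x + 0.1))
```

If the test error meets the preset accuracy requirement, training stops; otherwise more parameter adjustment rounds follow.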
  • the first facial image and the mask of the target facial organ may be input into a preset image generation model, and the second facial image for training the smear model is generated by the image generation model.
  • the image generation model can be trained based on the target texture image and the target facial image.
  • the target facial image can be understood as a mask of the target facial organ, determined based on the target facial organ.
  • the mask of the target facial organ can be understood as a region whose shape matches the target facial organ.
  • FIG. 4 is a facial image used for training an image generation model provided by an embodiment of the present disclosure. As shown in FIG. 4, the dotted box 401 is the area corresponding to the mask of the target facial organs.
  • the mask of the target facial part may be determined based on key points of the target facial part in the first facial image.
  • the target facial organs are specifically exemplified as eyebrows, eyes, nose and mouth.
  • the key points determined based on the four target facial organs include at least the eyebrow key point 402 , the eye key point 403 , the mouth key point 404 and the nose key point 405 .
  • the area 401 indicated by the dotted frame can be determined based on the aforementioned key points, and this area is the area corresponding to the mask of the target facial organs.
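  • Since region 401 is a dotted frame determined from the organ key points, one hedged way to sketch this step is a key-point bounding-box mask (illustrative names and margins; a real implementation might use a tighter polygonal mask):

```python
import numpy as np

def mask_from_keypoints(shape, keypoints, margin=2):
    """Build a rectangular organ mask from facial key points.

    shape:     (H, W) of the facial image
    keypoints: list of (row, col) key points for the target organs
               (e.g. eyebrow, eye, nose, and mouth key points)
    margin:    extra pixels added around the key-point bounding box
    """
    rows = [r for r, _ in keypoints]
    cols = [c for _, c in keypoints]
    h, w = shape
    top = max(min(rows) - margin, 0)
    bottom = min(max(rows) + margin + 1, h)
    left = max(min(cols) - margin, 0)
    right = min(max(cols) + margin + 1, w)
    mask = np.zeros(shape, dtype=bool)
    mask[top:bottom, left:right] = True   # the dotted-frame region
    return mask

# Key points roughly where eyebrows/eyes/nose/mouth might sit in a 100x100 face.
kps = [(30, 25), (30, 75), (45, 50), (70, 40), (70, 60)]
mask = mask_from_keypoints((100, 100), kps)
```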
  • the target texture image refers to an image used to fill the mask of the target facial organ.
  • the target texture image may be a skin image.
  • the target texture image can be obtained by expanding the skin image on the target area in the first facial image.
  • the target texture image may also be a pre-selected image with other texture characteristics, for example, a flesh-colored matte image, which is of course only an example and not an exclusive limitation.
  • the target area of the first facial image may be a forehead area in the first facial image. Using the skin image of the forehead area to obtain the target texture image can make the target texture image more smoothly connected to the area outside the mask in the first facial image.
  • the target area of the first facial image may also be specified as other areas in the first facial image.
  • the target area may also be exemplarily specified as a cheek area or a chin area.
  • this is only an example rather than an exclusive limitation.
  • the forehead region can be determined based on the eyebrow key points and the forehead contour key points in the first facial image.
  • the eyebrow key point can be understood as the key point at the junction of the upper edge of the eyebrow and the forehead area.
  • the key points of the forehead contour can be understood as the key points at the junction of the forehead area and the hairline. Because the pixel contrast is obvious at the junction of the eyebrows and the forehead area and at the junction of the forehead contour and the hairline, and these junctions have specific curve features, in some embodiments the eyebrow key points and the forehead contour key points can be determined based on pixel contrast and these curve features.
  • the target texture image is obtained by performing expansion processing based on the skin image of the forehead area.
  • the method for expanding the skin image of the forehead region to obtain the target texture image includes at least the following methods.
  • The first method: perform mirror reflection processing on the skin image of the forehead area to obtain a reflection image, and then stitch the reflection image with the skin image of the forehead area to obtain the target texture image.
  • a straight line determined by the two uppermost eyebrow key points can be used as the mirror reflection surface to obtain the reflection image. The reflection image is then stitched with the original forehead image, and the void areas with non-skin features are excluded to obtain the target texture image.
  • The second method: copy the skin image of the forehead area to obtain multiple copied images. The copied images are then spliced, and the void areas with non-skin features are excluded to obtain the target texture image.
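  • Both expansion methods reduce to simple array operations; a hedged NumPy sketch (the reflection axis and repetition counts are illustrative, and void-area exclusion is omitted for brevity):

```python
import numpy as np

def expand_by_reflection(forehead):
    """Method one: mirror the forehead patch about its lower edge (the line
    through the uppermost eyebrow key points) and stitch the reflection
    onto the original patch."""
    reflection = forehead[::-1, :]                 # mirror reflection image
    return np.concatenate([forehead, reflection], axis=0)

def expand_by_copying(forehead, reps=(2, 2)):
    """Method two: copy the forehead patch and splice the copies together
    (reps = repetitions along rows and columns)."""
    return np.tile(forehead, reps)

patch = np.arange(6, dtype=np.uint8).reshape(3, 2)  # toy forehead skin patch
mirrored = expand_by_reflection(patch)              # 6 x 2 stitched texture
tiled = expand_by_copying(patch)                    # 6 x 4 spliced texture
```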
  • FIG. 5 is a flowchart of an image processing method provided by other embodiments of the present disclosure. As shown in FIG. 5, in some other embodiments of the present disclosure, steps S103-S105 may also be included after the above step S101. Here, only the added steps S103-S105 are described; for steps S101 and S102, refer to the description above.
  • Step S103 Extract the first organ image corresponding to the target facial organ in the facial image to be processed.
  • a feature recognition algorithm may be used to determine key points or edge areas of the target facial organ; a selection region containing the target facial organ is then determined based on the key points or edge areas, and the image area defined by the selection region is taken as the first organ image.
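  • A hedged sketch of this extraction step, cropping a selection region around detected key points (the key points, margin, and names are illustrative; a real system would obtain the key points from a landmark detector):

```python
import numpy as np

def extract_organ(image, keypoints, margin=3):
    """Crop the selection region around an organ's key points; returns the
    first organ image and its top-left position in the source image."""
    rows = [r for r, _ in keypoints]
    cols = [c for _, c in keypoints]
    top = max(min(rows) - margin, 0)
    left = max(min(cols) - margin, 0)
    bottom = min(max(rows) + margin + 1, image.shape[0])
    right = min(max(cols) + margin + 1, image.shape[1])
    return image[top:bottom, left:right], (top, left)

face = np.arange(100, dtype=np.uint8).reshape(10, 10)  # toy grayscale face
eye_kps = [(4, 4), (4, 6), (5, 5)]                     # illustrative key points
organ, pos = extract_organ(face, eye_kps)
```

The returned position lets a later step place the adjusted organ back where the original one was.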
  • Step S104 Adjust the shape and/or size of the target facial organ in the first organ image to obtain a second organ image.
  • Step S104 adjusts the shape and/or size of the target facial organ in the first organ image, which may include the following methods.
  • The first method: enlarge or reduce the target facial organ. For example, if the target facial organ is the eyes and the eyes in the first organ image are small, the eyes in the first organ image may be enlarged to obtain the second organ image.
  • The second method: adjust the shape of the target facial organ. For example, if the target facial organ is the mouth and the corners of the mouth turn downward, the shape of the mouth in the first organ image can be adjusted so that the corners of the mouth turn upward, obtaining the second organ image.
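  • As a hedged sketch of the first adjustment method (enlargement), a nearest-neighbour upscale in NumPy; a real implementation would use a proper image-resampling routine, and the names here are illustrative:

```python
import numpy as np

def enlarge_organ(organ, factor=2):
    """Nearest-neighbour enlargement of an organ image, e.g. small eyes.
    Each pixel is repeated `factor` times along both axes."""
    return np.repeat(np.repeat(organ, factor, axis=0), factor, axis=1)

eye = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)   # toy first organ image
second_organ = enlarge_organ(eye)            # 4 x 4 second organ image
```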
  • steps S103 and S104 are independent of step S102, and steps S103-S104 may be executed in parallel with step S102, or may be executed before or after step S102.
  • Step S105 Add the second organ image to the facial smear image.
  • Step S105 is executed after step S102 and step S104 are executed.
  • the placement position of the second organ image may be determined according to the position of the first organ image, and then the second organ image is added to the facial smear image at that position.
  • FIG. 6 is the facial smear image determined through steps S101-S105.
  • the first organ images adjusted by steps S103 and S104 are the eye image and the mouth image. Specifically, the eyes in the eye image are reduced and the corners of the mouth in the mouth image are raised, giving the adjusted eye image and mouth image. The adjusted eye image and mouth image are placed into the facial smear image at the original eye and mouth positions, yielding the facial image with a changed expression shown in FIG. 6.
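  • Placing the adjusted organ back at the original organ's position amounts to writing it into the smear image at that offset; a minimal grayscale sketch with illustrative names:

```python
import numpy as np

def add_organ(smear_image, organ, top_left):
    """Add the second organ image to the facial smear image at the
    placement position derived from the first organ image."""
    out = smear_image.copy()
    r, c = top_left
    h, w = organ.shape[:2]
    out[r:r + h, c:c + w] = organ   # paste the adjusted organ
    return out

smear = np.zeros((6, 6), dtype=np.uint8)     # toy facial smear image
organ = np.full((2, 2), 9, dtype=np.uint8)   # adjusted (second) organ image
composed = add_organ(smear, organ, (2, 2))
```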
  • FIG. 7 is a flowchart of an image processing method provided by some further embodiments of the present disclosure.
  • the image processing method may further include step S106 in addition to the aforementioned steps S101 and S102 .
  • Step S106 is executed after step S102.
  • steps S101 and S102 can be referred to above.
  • Step S106 Migrate the preset animation to the facial smear image to obtain a dynamic image.
  • Preset animations are pre-selected animations with facial expression movements.
  • the preset animation may be, for example, an animation of blinking eyes, an animation of a snorting nose, or an animation of a mouth opening and howling, but is not limited to the animations listed here.
  • to migrate the preset animation to the facial smear image, first determine the position of the corresponding facial organ in the image to be processed according to the type of the preset animation, and then place the preset animation at that position. For example, if the preset animation is an eye-blinking animation, it can be placed at the position of the eyes in the facial image to obtain a dynamic image.
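  • Migrating a preset animation then reduces to compositing each animation frame at the organ's position to form the frames of a dynamic image; a hedged sketch with a toy "blink" sequence (all names and values illustrative):

```python
import numpy as np

def migrate_animation(smear_image, frames, organ_top_left):
    """Overlay each preset-animation frame at the organ position; the
    result is the frame sequence of the dynamic image."""
    r, c = organ_top_left
    dynamic = []
    for frame in frames:
        h, w = frame.shape[:2]
        composed = smear_image.copy()
        composed[r:r + h, c:c + w] = frame   # place frame at the organ
        dynamic.append(composed)
    return dynamic

base = np.zeros((8, 8), dtype=np.uint8)                  # facial smear image
blink = [np.full((2, 2), v, dtype=np.uint8) for v in (50, 0, 50)]  # toy blink
dynamic = migrate_animation(base, blink, (2, 3))         # eyes at (2, 3)
```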
  • the preset animation is transferred to the face smear image to obtain a dynamic image, so that the face smear image has a dynamic effect, which can further improve the interest of the face smear image and enhance user experience.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure.
  • the processing apparatus may be understood as the above-mentioned electronic device or some functional modules in the above-mentioned electronic device.
  • the processing device 800 includes an image acquisition unit 801 and a smearing processing unit 802 .
  • the image acquisition unit 801 is configured to acquire the facial image to be processed; the smear processing unit 802 is configured to smear the target facial organs in the facial image to be processed based on the pre-trained smear model to obtain the facial smear image corresponding to the facial image to be processed.
  • the smear model is trained based on the first facial image in which the target facial organ is not smeared and the second facial image obtained after the target facial organ in the first facial image is smeared, wherein the second facial image is generated based on the preset image generation model, and the image generation model is trained based on the target texture image and the target facial image.
  • the target texture image includes a skin image
  • the skin image is obtained by expanding the skin image on the target area in the first facial image; the target area includes a forehead area.
  • the forehead area is determined based on eyebrow key points and forehead contour key points on the first facial image.
  • the target facial image may be a mask of the target facial organ determined based on the target facial organ; the mask of the target facial organ may be determined based on key points of the target facial organ in the first facial image.
  • the expansion processing of the skin image on the target area includes: performing mirror reflection processing on the skin image on the target area, and splicing the reflection image obtained by mirror reflection with the skin image on the target area.
  • the expansion processing of the skin image on the target area includes: copying the skin image on the target area, and splicing the copied images.
  • the image processing device may further include an organ image extraction unit, an organ image adjustment unit, and a first image addition unit.
  • the organ image extraction unit is configured to extract the first organ image corresponding to the target facial organ in the facial image to be processed; the organ image adjustment unit is configured to adjust the shape and/or size of the target facial organ in the first organ image to obtain the second organ image; and the first image adding unit is configured to add the second organ image to the facial smear image.
  • the image processing device may further include a second image adding unit; the second image adding unit is configured to transfer the preset animation to the facial smear image to obtain a dynamic image.
  • the device provided in this embodiment can execute the method of any one of the embodiments in FIGS. 1-7 above, and its execution mode and beneficial effects are similar, and details are not repeated here.
  • An embodiment of the present disclosure also provides an electronic device, which includes a processor and a memory, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method of any one of the above embodiments in FIGS. 1-7 can be implemented.
  • FIG. 9 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
  • the electronic device 900 may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 9 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • the electronic device 900 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 901, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903.
  • in the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored.
  • the processing device 901, ROM 902, and RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also connected to the bus 904 .
  • the following devices can be connected to the I/O interface 905: an input device 906 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; a storage device 908 including, for example, a magnetic tape, hard disk, etc.; and a communication device 909.
  • the communication means 909 may allow the electronic device 900 to perform wireless or wired communication with other devices to exchange data. While FIG. 9 shows electronic device 900 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 909, or from storage means 908, or from ROM 902.
  • when the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (Hyper Text Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: acquire a facial image to be processed; and smear, based on a pre-trained smearing model, a target facial organ in the facial image to be processed to obtain a facial smear image corresponding to the facial image to be processed.
  • computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, under certain circumstances, constitute a limitation of the unit itself.
  • the functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • more specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, where a computer program is stored in the storage medium; when the computer program is executed by a processor, the method of any one of the above embodiments in FIGS. 1-7 can be implemented. Its manner of execution and beneficial effects are similar and are not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to an image processing method, apparatus, device, and storage medium. The method smears a target facial organ in a facial image to be processed by means of a pre-trained smearing model, obtaining a facial smear image corresponding to the facial image to be processed. By smearing the target facial organ in the facial image to be processed and generating the corresponding facial smear image, the image can be made more entertaining, thereby improving the user experience.

Description

Image processing method, apparatus, device, and storage medium

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to Chinese Patent Application No. 202110646606.2, filed on June 10, 2021 and entitled "Image processing method, apparatus, device, and storage medium", the entire content of which is incorporated herein by reference.

TECHNICAL FIELD

Embodiments of the present disclosure relate to the technical field of image processing, and in particular to an image processing method, apparatus, device, and storage medium.

BACKGROUND

In the related art, users can record their lives through videos, photos, and the like, and upload them to video applications for other video consumers to watch. However, with the development of video applications, simply sharing videos or pictures can no longer satisfy the growing needs of users. Therefore, how to process videos and images to make them more entertaining is a technical problem that urgently needs to be solved.
SUMMARY

In order to solve the above technical problem, or at least partially solve it, one aspect of the present disclosure provides an image processing method, including:

acquiring a facial image to be processed;

smearing, based on a pre-trained smearing model, a target facial organ in the facial image to be processed to obtain a facial smear image corresponding to the facial image to be processed;

wherein the smearing model is trained on a first facial image in which the target facial organ is not smeared and a second facial image obtained by smearing the target facial organ in the first facial image, the second facial image being generated by a preset image generation model, and the image generation model being trained on a target texture image and a target facial image.

Optionally, the target texture image includes a skin image obtained by expanding a skin image of a target region in the first facial image, the target region including a forehead region.

Optionally, the forehead region is determined based on eyebrow keypoints and forehead contour keypoints in the first facial image.

Optionally, the target facial image is a mask of the target facial organ determined based on the target facial organ.

Optionally, the mask of the target facial organ is determined based on keypoints of the target facial organ in the first facial image.

Optionally, the expansion of the skin image of the target region includes:

mirror-reflecting the skin image of the target region;

stitching the reflection image obtained by the mirror reflection together with the skin image of the target region.

Optionally, the expansion of the skin image of the target region includes:

copying the skin image of the target region, and stitching the resulting multiple copies together.

Optionally, after acquiring the facial image to be processed, the method further includes:

extracting a first organ image corresponding to the target facial organ from the facial image to be processed;

adjusting the shape and/or size of the target facial organ in the first organ image to obtain a second organ image;

and after smearing, based on the pre-trained smearing model, the target facial organ in the facial image to be processed to obtain the facial smear image corresponding to the facial image to be processed, the method further includes:

adding the second organ image onto the facial smear image.

Optionally, after smearing, based on the pre-trained smearing model, the target facial organ in the facial image to be processed to obtain the facial smear image corresponding to the facial image to be processed, the method further includes:

transferring a preset animation onto the facial smear image to obtain a dynamic image.
In another aspect, the present disclosure provides an image processing apparatus, including:

an image acquisition unit configured to acquire a facial image to be processed;

a smearing unit configured to smear, based on a pre-trained smearing model, a target facial organ in the facial image to be processed to obtain a facial smear image corresponding to the facial image to be processed;

wherein the smearing model is trained on a first facial image in which the target facial organ is not smeared and a second facial image obtained by smearing the target facial organ in the first facial image, the second facial image being generated by a preset image generation model, and the image generation model being trained on a target texture image and a target facial image.

Optionally, the target texture image includes a skin image obtained by expanding a skin image of a target region in the first facial image;

the target region includes a forehead region.

Optionally, the forehead region is determined based on eyebrow keypoints and forehead contour keypoints in the first facial image.

Optionally, the target facial image is a mask of the target facial organ determined based on the target facial organ.

Optionally, the mask of the target facial organ is determined based on keypoints of the target facial organ in the first facial image.

Optionally, the expansion of the skin image of the target region includes:

mirror-reflecting the skin image of the target region;

stitching the reflection image obtained by the mirror reflection together with the skin image of the target region.

Optionally, the expansion of the skin image of the target region includes:

copying the skin image of the target region, and stitching the resulting multiple copies together.

Optionally, the apparatus further includes:

an organ image extraction unit configured to extract a first organ image corresponding to the target facial organ from the facial image to be processed;

an organ image adjustment unit configured to adjust the shape and/or size of the target facial organ in the first organ image to obtain a second organ image;

a first image addition unit configured to add the second organ image onto the facial smear image.

Optionally, the apparatus further includes:

a second image addition unit configured to transfer a preset animation onto the facial smear image to obtain a dynamic image.

In yet another aspect, the present disclosure provides an electronic device, including a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, implements the method of any one of the preceding items.

In yet another aspect, the present disclosure provides a computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the method of any one of the preceding items.
Compared with the prior art, the technical solutions provided by the present disclosure have the following advantages:

The image processing method, apparatus, device, and storage medium provided by the embodiments of the present disclosure acquire a facial image to be processed, and smear a target facial organ in the facial image to be processed based on a pre-trained smearing model to obtain a facial smear image. By smearing the target facial organ in the facial image to be processed, the solutions provided by the embodiments of the present disclosure can make the image more entertaining, thereby improving the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.

To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the following briefly introduces the drawings needed in the description of the embodiments or the prior art. Apparently, a person of ordinary skill in the art may still derive other drawings from these drawings without creative effort.

FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure;

FIG. 2 is a facial image to be processed, provided by an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of a facial smear image provided by an embodiment of the present disclosure;

FIG. 4 is a facial image used for training an image generation model, provided by an embodiment of the present disclosure;

FIG. 5 is a flowchart of an image processing method provided by other embodiments of the present disclosure;

FIG. 6 is a facial smear image determined by steps S101 to S105;

FIG. 7 is a flowchart of an image processing method provided by still other embodiments of the present disclosure;

FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;

FIG. 9 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
DETAILED DESCRIPTION

To make the above objects, features, and advantages of the present disclosure clearer, the solutions of the present disclosure are further described below. It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.

Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in ways different from those described herein; obviously, the embodiments in this specification are only some, rather than all, of the embodiments of the present disclosure.
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure. The facial image processing method shown in FIG. 1 may be executed by an electronic device with image processing capability, where the electronic device may be exemplarily understood as a device such as a mobile phone, a tablet computer, a desktop computer, or an all-in-one machine.

As shown in FIG. 1, the image processing method provided by the embodiment of the present disclosure includes steps S101 and S102.

Step S101: acquire a facial image to be processed.

The facial image to be processed may be the facial image of a person or of an animal; the embodiments of the present disclosure impose no particular limitation in this respect.

In the embodiments of the present disclosure, the electronic device may obtain the facial image to be processed in a preset manner. In some implementations, the preset manner may include shooting, downloading, or loading from a local memory; in other implementations of the embodiments of the present disclosure, however, it is not limited to shooting, downloading, and loading from a local memory.

In some embodiments of the present disclosure, the facial image to be processed includes pixels of the subject's face and pixels of other scenery; for example, in one example, the facial image to be processed may include an image of a person's upper body and a background image.
Step S102: smear, based on a pre-trained smearing model, the target facial organ in the facial image to be processed to obtain a facial smear image corresponding to the facial image to be processed.

The smearing model is a model used to smear the target facial organ in the facial image to be processed so as to change the pixel characteristics of the region where the target facial organ is located.

After the facial image to be processed is input to the smearing model, the smearing model identifies the pixel region of the target facial organ in the facial image to be processed and smears the region where the target facial organ is located, so that this region is painted over with the target texture.

FIG. 2 is a facial image to be processed, provided by an embodiment of the present disclosure. As shown in FIG. 2, the facial image to be processed is the facial image 201 of a person 20. FIG. 3 is a schematic diagram of a facial smear image provided by an embodiment of the present disclosure; FIG. 3 is the facial smear image obtained after FIG. 2 is processed by the smearing model. Comparing FIG. 2 and FIG. 3, in some embodiments of the present disclosure the target facial organs to be smeared may include the eyebrows, eyes, nose, and mouth. A mask containing the eyebrows, eyes, nose, and mouth, i.e. the T-shaped region 301 in FIG. 3, can be determined from the feature points of the eyebrows, eyes, nose, and mouth. After the smearing process, the T-shaped region 301 is painted over with the target texture.

According to the foregoing description of the steps of the image processing method and the comparison between FIG. 2 and FIG. 3, the image processing method provided by the embodiments of the present disclosure uses a smearing model to process the facial image to be processed and obtain a facial smear image. When the image processing method embodied in steps S101 and S102 is integrated into a specific application or software and that application or software is installed on an electronic device, the device can smear a user's facial image to obtain a smeared facial image, changing the facial pixels of the image to be processed and making the image more entertaining, thereby improving the user experience.
In the embodiments of the present disclosure, the smearing model is a model trained on a first facial image and a second facial image, where the target facial organ in the first facial image is not smeared, and the second facial image is the image obtained by smearing the target facial organ in the first facial image.

In the embodiments of the present disclosure, a first facial image and its corresponding second facial image form a sample image pair, and multiple sample image pairs can be used to train what the present disclosure calls the smearing model. To train the smearing model, the parameters of the smearing model are first initialized randomly. The sample image pairs are then fed to the initialized smearing model, which is trained and whose parameters are adjusted. Finally, test images are used to test the smearing model with the adjusted parameters. If the test results show that the trained model meets the preset accuracy requirements, training of the smearing model is complete.

In one embodiment of the present disclosure, the first facial image and the mask of the target facial organ may be input to a preset image generation model, which generates the second facial image used for training the smearing model. The image generation model may be trained on a target texture image and a target facial image.
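The pair construction described above can be illustrated with a minimal sketch. Here the image generation model is replaced by a toy stand-in that simply fills the masked organ region with skin texture; `make_training_pair`, the array shapes, and the toy data are all hypothetical illustrations, not part of the disclosed method.

```python
import numpy as np

def make_training_pair(first_image, organ_mask, texture):
    """Build a (first, second) sample pair: the second image is the first
    image with the organ-mask region replaced by skin texture.
    Toy stand-in for the pretrained image generation model."""
    second_image = first_image.copy()
    second_image[organ_mask] = texture[organ_mask]
    return first_image, second_image

# Tiny 4x4 grayscale example: mask the centre 2x2 block.
face = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
skin = np.full((4, 4), 200, dtype=np.uint8)
first, second = make_training_pair(face, mask, skin)
```

Collecting many such (first, second) pairs would yield the training set for the smearing model.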
In the embodiments of the present disclosure, the target facial image may be exemplarily understood as a mask of the target facial organ determined based on the target facial organ. The mask of the target facial organ may be understood as a region whose shape matches the target facial organ. FIG. 4 is a facial image used for training an image generation model, provided by an embodiment of the present disclosure. As shown in FIG. 4, the dashed box 401 is the region corresponding to the mask of the target facial organ.

In some embodiments of the present disclosure, the mask of the target facial organ may be determined based on keypoints of the target facial organ in the first facial image.

Still referring to FIG. 4, in the first facial image shown in FIG. 4 the target facial organs are exemplarily the eyebrows, eyes, nose, and mouth. The keypoints determined from these four target facial organs include at least the eyebrow keypoints 402, the eye keypoints 403, the mouth keypoints 404, and the nose keypoints 405. After the keypoints of the aforementioned target facial organs are determined, the region 401 shown by the dashed box, i.e. the region corresponding to the mask of the target facial organ, can be determined from these keypoints.
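As a simplified sketch of deriving a mask region from organ keypoints (assuming `(row, col)` keypoint coordinates; a production system would more likely fill the convex hull of the points rather than their bounding box):

```python
import numpy as np

def organ_mask_from_keypoints(shape, keypoints):
    """Build a binary mask covering the axis-aligned bounding region
    of the organ keypoints. Simplified: bounding box, not convex hull."""
    ys = [y for y, x in keypoints]
    xs = [x for y, x in keypoints]
    mask = np.zeros(shape, dtype=bool)
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = True
    return mask

# Hypothetical (row, col) keypoints of one eye on an 8x10 image.
eye_points = [(2, 3), (2, 6), (4, 3), (4, 6)]
mask = organ_mask_from_keypoints((8, 10), eye_points)
```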
In the embodiments of the present disclosure, the target texture image refers to the image used to fill the mask of the target facial organ.

In some embodiments of the present disclosure, the target texture image may be a skin image. Specifically, the target texture image may be obtained by expanding a skin image of a target region in the first facial image. In other embodiments of the present disclosure, the target texture image may also be a preselected image with other texture characteristics, for example a flesh-colored frosted image; of course, this is only an example and not the only possibility.

In some embodiments of the present disclosure, the target region of the first facial image may be the forehead region of the first facial image. Obtaining the target texture image from the skin image of the forehead region allows the target texture image to join relatively smoothly with the region outside the mask in the first facial image.

In other embodiments of the present disclosure, the target region of the first facial image may also be another region of the first facial image. For example, in other implementations, the target region may also be exemplarily the cheek region or the chin region; of course, this is only an example and not the only possibility.

In one embodiment of the present disclosure, the forehead region may be determined based on the eyebrow keypoints and the forehead contour keypoints in the first facial image.

The eyebrow keypoints may be understood as keypoints at the boundary between the upper edge of the eyebrows and the forehead region. The forehead contour keypoints may be understood as keypoints at the boundary between the forehead region and the hairline. Because the pixel contrast at the boundary between the eyebrows and the forehead region is pronounced, the contrast between the forehead contour and the hair region at the hairline is pronounced, and these boundaries have particular curve characteristics, in some embodiments the eyebrow keypoints and the forehead contour keypoints can be determined from the pixel contrast and these particular curve characteristics.
In the embodiments of the present disclosure, if the target region is the forehead region of the first facial image, the target texture image is obtained by expanding the skin image of the forehead region. In the embodiments of the present disclosure, the methods of expanding the skin image of the forehead region to obtain the target texture image include at least the following.

First method: mirror-reflect the skin image of the forehead region to obtain a reflection image, then stitch the reflection image together with the skin image of the forehead region to obtain the target texture image.

When mirror-reflecting the skin image of the forehead region, the straight line determined by the two uppermost eyebrow keypoints may be used as the mirror plane to obtain the reflection image. The reflection image and the original forehead image are then stitched together, excluding gap regions with non-skin characteristics, to obtain the target texture image.
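The mirror-and-stitch expansion can be sketched as follows, here reflecting a rectangular forehead patch about its bottom edge; locating the mirror line through the eyebrow keypoints and excluding non-skin gap regions are omitted from this sketch.

```python
import numpy as np

def expand_by_mirror(forehead):
    """Mirror the forehead patch vertically and stitch the reflection
    below it, doubling the available skin texture."""
    reflection = np.flipud(forehead)          # mirror reflection
    return np.vstack([forehead, reflection])  # stitch original + reflection

patch = np.array([[1, 2],
                  [3, 4]], dtype=np.uint8)    # toy 2x2 forehead patch
texture = expand_by_mirror(patch)
```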
Second method: copy the skin image of the forehead region to obtain multiple copies, then stitch the resulting copies together, excluding gap regions with non-skin characteristics, to obtain the target texture image.
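The copy-and-stitch variant amounts to tiling the forehead patch; again, the exclusion of non-skin gap regions is omitted from this sketch.

```python
import numpy as np

def expand_by_tiling(forehead, reps_y=2, reps_x=2):
    """Copy the forehead patch and stitch the copies into a larger texture."""
    return np.tile(forehead, (reps_y, reps_x))

patch = np.array([[5, 6]], dtype=np.uint8)  # toy 1x2 forehead patch
texture = expand_by_tiling(patch)
```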
FIG. 5 is a flowchart of an image processing method provided by other embodiments of the present disclosure. As shown in FIG. 5, in other embodiments of the present disclosure, steps S103 to S105 may be further included after step S101. Only the added steps S103 to S105 are described here; for steps S101 and S102, refer to the foregoing description.

Step S103: extract the first organ image corresponding to the target facial organ from the facial image to be processed.

When step S103 is executed, a feature recognition algorithm may be used to determine the keypoints or edge region of the target facial organ; a selection containing the target facial organ is then determined based on the keypoints or edge region, and the image region delimited by the selection is taken as the corresponding first organ image.
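A minimal sketch of step S103, cropping the keypoint-delimited selection; the `margin` parameter and the `(row, col)` keypoint format are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def extract_organ_image(face_image, keypoints, margin=1):
    """Crop the region selected by the organ keypoints (plus a small
    margin), returning the crop and its top-left origin for later paste."""
    ys = [y for y, x in keypoints]
    xs = [x for y, x in keypoints]
    top = max(min(ys) - margin, 0)
    left = max(min(xs) - margin, 0)
    bottom = min(max(ys) + margin + 1, face_image.shape[0])
    right = min(max(xs) + margin + 1, face_image.shape[1])
    return face_image[top:bottom, left:right], (top, left)

face = np.arange(100, dtype=np.uint8).reshape(10, 10)
crop, origin = extract_organ_image(face, [(4, 4), (4, 6), (6, 4), (6, 6)])
```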
Step S104: adjust the shape and/or size of the target facial organ in the first organ image to obtain a second organ image.

Step S104, adjusting the shape or size of the target facial organ in the first organ image, may include the following approaches.

First approach: enlarge or shrink the target facial organ. For example, if the target facial organ is an eye and the eye in the first organ image is small, the eye in the first organ image may be enlarged to obtain the second organ image.

Second approach: adjust the shape of the target facial organ. For example, if the target facial organ is a mouth and the corners of the mouth are turned down, the shape of the mouth in the first organ image may be adjusted so that the corners of the mouth change from turned down to turned up, obtaining the second organ image.

It should be noted that the aforementioned steps S103 and S104 are independent of step S102; steps S103 and S104 may be executed in parallel with step S102, or before or after step S102.
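The size adjustment in step S104 can be roughly sketched with nearest-neighbour sampling; a real implementation would more likely use an interpolating resize from an image library, and the scale factor here is hypothetical.

```python
import numpy as np

def scale_organ(organ, factor):
    """Enlarge or shrink an organ crop by nearest-neighbour sampling."""
    h, w = organ.shape[:2]
    new_h = max(int(round(h * factor)), 1)
    new_w = max(int(round(w * factor)), 1)
    rows = np.arange(new_h) * h // new_h  # nearest source row per output row
    cols = np.arange(new_w) * w // new_w  # nearest source col per output col
    return organ[rows][:, cols]

eye = np.array([[1, 2],
                [3, 4]], dtype=np.uint8)  # toy 2x2 "eye" crop
bigger = scale_organ(eye, 2.0)            # enlarge, as in the first approach
```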
Step S105: add the second organ image onto the facial smear image.

Step S105 is executed after steps S102 and S104 are completed. In step S105, the placement position of the second organ image may be determined from the position of the first organ image, and the second organ image is then added to the facial smear image according to that placement position.

FIG. 6 is a facial smear image determined by steps S101 to S105. Comparing FIG. 2 and FIG. 6, in some embodiments of the present disclosure the first organ images adjusted by steps S103 and S104 are an eye image and a mouth image. Specifically, the eyes in the eye image are shrunk and the corners of the mouth in the mouth image are turned up, obtaining the adjusted eye image and mouth image. The adjusted eye image and mouth image are placed in the facial smear image at the positions of the original eyes and mouth, obtaining the facial image with a changed expression shown in FIG. 6.
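Step S105 can be sketched as a slice assignment at the origin recorded when the first organ image was extracted; blending at the seam (e.g. with an alpha mask) is omitted from this sketch.

```python
import numpy as np

def paste_organ(smeared_image, organ, origin):
    """Place the adjusted organ crop back onto the facial smear image
    at the position of the original organ."""
    out = smeared_image.copy()
    top, left = origin
    h, w = organ.shape[:2]
    out[top:top + h, left:left + w] = organ
    return out

canvas = np.zeros((6, 6), dtype=np.uint8)        # toy smeared face
organ = np.full((2, 2), 9, dtype=np.uint8)       # toy adjusted organ
result = paste_organ(canvas, organ, (2, 2))
```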
FIG. 7 is a flowchart of an image processing method provided by still other embodiments of the present disclosure. As shown in FIG. 7, in still other embodiments of the present disclosure, the image processing method may include step S106 in addition to the aforementioned steps S101 and S102. Step S106 is executed after step S102. Only the added step S106 is described here; for steps S101 and S102, refer to the foregoing description.

Step S106: transfer a preset animation onto the facial smear image to obtain a dynamic image.

A preset animation is a preselected animation with a facial expression action. The preset animation may, for example, be a blinking animation, a nose-snorting animation, or an open-mouthed howling animation, but is not limited to the animations listed here.

To transfer the preset animation onto the facial smear image, the position of the corresponding facial organ in the image to be processed is first determined according to the type of the preset animation, and the preset animation is then placed at the position of that facial organ. For example, if the preset animation is a blinking animation, the animation can be placed at the position of the eyes in the facial image to obtain a dynamic image.

In the embodiments of the present disclosure, a dynamic image is obtained by transferring a preset animation onto the facial smear image, giving the facial smear image a dynamic effect; this can further increase the entertainment value of the facial smear image and enhance the user experience.
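A toy sketch of the animation transfer in step S106: each frame of the preset animation is composited at the organ position, yielding the frame sequence of the dynamic image. The frame data and positions here are hypothetical placeholders.

```python
import numpy as np

def transfer_animation(smeared_image, frames, origin):
    """Composite each animation frame at the organ position,
    producing the frame sequence of the resulting dynamic image."""
    top, left = origin
    out_frames = []
    for frame in frames:
        h, w = frame.shape[:2]
        composed = smeared_image.copy()
        composed[top:top + h, left:left + w] = frame
        out_frames.append(composed)
    return out_frames

base = np.zeros((4, 4), dtype=np.uint8)                      # toy smeared face
blink = [np.full((1, 2), v, dtype=np.uint8) for v in (10, 20)]  # two toy frames
dynamic = transfer_animation(base, blink, (1, 1))            # place at "eye" position
```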
FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure; the processing apparatus may be understood as the above electronic device or some of the functional modules of the above electronic device. As shown in FIG. 8, the processing apparatus 800 includes an image acquisition unit 801 and a smearing unit 802.

The image acquisition unit 801 is configured to acquire a facial image to be processed; the smearing unit 802 is configured to smear, based on a pre-trained smearing model, the target facial organ in the facial image to be processed to obtain the facial smear image corresponding to the facial image to be processed.

The smearing model is trained on a first facial image in which the target facial organ is not smeared and a second facial image obtained by smearing the target facial organ in the first facial image, wherein the second facial image is generated by a preset image generation model, and the image generation model is trained on a target texture image and a target facial image.

In some embodiments of the present disclosure, the target texture image includes a skin image obtained by expanding a skin image of a target region in the first facial image; the target region includes a forehead region.

In other embodiments of the present disclosure, the forehead region is determined based on eyebrow keypoints and forehead contour keypoints in the first facial image.

In some embodiments of the present disclosure, the target facial image may be a mask of the target facial organ determined based on the target facial organ; the mask of the target facial organ may be determined based on keypoints of the target facial organ in the first facial image.

In some embodiments of the present disclosure, the expansion of the skin image of the target region includes: mirror-reflecting the skin image of the target region; and stitching the reflection image obtained by the mirror reflection together with the skin image of the target region.

In some embodiments of the present disclosure, the expansion of the skin image of the target region includes: copying the skin image of the target region, and stitching the resulting multiple copies together.

In some embodiments of the present disclosure, the image processing apparatus may further include an organ image extraction unit, an organ image adjustment unit, and a first image addition unit.

The organ image extraction unit is configured to extract the first organ image corresponding to the target facial organ from the facial image to be processed; the organ image adjustment unit is configured to adjust the shape and/or size of the target facial organ in the first organ image to obtain a second organ image; the first image addition unit is configured to add the second organ image onto the facial smear image.

In some embodiments of the present disclosure, the image processing apparatus may further include a second image addition unit; the second image addition unit is configured to transfer a preset animation onto the facial smear image to obtain a dynamic image.

The apparatus provided by this embodiment can execute the method of any one of the embodiments in FIGS. 1-7; its manner of execution and beneficial effects are similar and are not repeated here.

An embodiment of the present disclosure also provides an electronic device including a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, can implement the method of any one of the embodiments in FIGS. 1-7.
By way of example, FIG. 9 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring specifically to FIG. 9 below, it shows a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure. In some embodiments of the present disclosure, the electronic device 900 may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g. vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

As shown in FIG. 9, the electronic device 900 may include a processing device (e.g. a central processing unit, a graphics processing unit, etc.) 901, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. Various programs and data required for the operation of the electronic device 900 are also stored in the RAM 903. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.

Generally, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 907 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 908 including, for example, a magnetic tape, hard disk, etc.; and a communication device 909. The communication device 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 9 shows an electronic device 900 with various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 909, or installed from the storage device 908, or installed from the ROM 902. When the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.

It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.

In some implementations, the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (Hyper Text Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.

The above computer-readable medium may be included in the above electronic device, or may exist independently without being assembled into the electronic device.

The above computer-readable medium carries one or more programs; when the above one or more programs are executed by the electronic device, the electronic device is caused to: acquire a facial image to be processed; and smear, based on a pre-trained smearing model, the target facial organ in the facial image to be processed to obtain the facial smear image corresponding to the facial image to be processed.

Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, under certain circumstances, constitute a limitation of the unit itself.

The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and so forth.

In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.

An embodiment of the present disclosure also provides a computer-readable storage medium, wherein a computer program is stored in the storage medium; when the computer program is executed by a processor, the method of any one of the embodiments in FIGS. 1-7 can be implemented. Its manner of execution and beneficial effects are similar and are not repeated here.
It should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes said element.

The above are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure will not be limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

  1. An image processing method, comprising:
    acquiring a facial image to be processed;
    smearing, based on a pre-trained smearing model, a target facial organ in the facial image to be processed to obtain a facial smear image corresponding to the facial image to be processed;
    wherein the smearing model is trained on a first facial image in which the target facial organ is not smeared and a second facial image obtained by smearing the target facial organ in the first facial image, the second facial image being generated by a preset image generation model, and the image generation model being trained on a target texture image and a target facial image.
  2. The method according to claim 1, wherein the target texture image comprises a skin image obtained by expanding a skin image of a target region in the first facial image, the target region comprising a forehead region.
  3. The method according to claim 2, wherein the forehead region is determined based on eyebrow keypoints and forehead contour keypoints in the first facial image.
  4. The method according to claim 1, wherein the target facial image is a mask of the target facial organ determined based on the target facial organ.
  5. The method according to claim 4, wherein the mask of the target facial organ is determined based on keypoints of the target facial organ in the first facial image.
  6. The method according to claim 2, wherein the expansion of the skin image of the target region comprises:
    mirror-reflecting the skin image of the target region;
    stitching the reflection image obtained by the mirror reflection together with the skin image of the target region.
  7. The method according to claim 2, wherein the expansion of the skin image of the target region comprises:
    copying the skin image of the target region, and stitching the resulting multiple copies together.
  8. The method according to any one of claims 1-7, wherein, after acquiring the facial image to be processed, the method further comprises:
    extracting a first organ image corresponding to the target facial organ from the facial image to be processed;
    adjusting the shape and/or size of the target facial organ in the first organ image to obtain a second organ image;
    and after smearing, based on the pre-trained smearing model, the target facial organ in the facial image to be processed to obtain the facial smear image corresponding to the facial image to be processed, the method further comprises:
    adding the second organ image onto the facial smear image.
  9. The method according to any one of claims 1-7, wherein, after smearing, based on the pre-trained smearing model, the target facial organ in the facial image to be processed to obtain the facial smear image corresponding to the facial image to be processed, the method further comprises:
    transferring a preset animation onto the facial smear image to obtain a dynamic image.
  10. An image processing apparatus, comprising:
    an image acquisition unit configured to acquire a facial image to be processed;
    a smearing unit configured to smear, based on a pre-trained smearing model, a target facial organ in the facial image to be processed to obtain a facial smear image corresponding to the facial image to be processed;
    wherein the smearing model is trained on a first facial image in which the target facial organ is not smeared and a second facial image obtained by smearing the target facial organ in the first facial image, the second facial image being generated by a preset image generation model, and the image generation model being trained on a target texture image and a target facial image.
  11. The apparatus according to claim 10, wherein:
    the target texture image comprises a skin image obtained by expanding a skin image of a target region in the first facial image;
    the target region comprises a forehead region.
  12. The apparatus according to claim 11, wherein:
    the forehead region is determined based on eyebrow keypoints and forehead contour keypoints in the first facial image.
  13. The apparatus according to claim 10, wherein the target facial image is a mask of the target facial organ determined based on the target facial organ.
  14. The apparatus according to claim 13, wherein:
    the mask of the target facial organ is determined based on keypoints of the target facial organ in the first facial image.
  15. The apparatus according to claim 11, wherein the expansion of the skin image of the target region comprises:
    mirror-reflecting the skin image of the target region;
    stitching the reflection image obtained by the mirror reflection together with the skin image of the target region.
  16. The apparatus according to claim 11, wherein the expansion of the skin image of the target region comprises:
    copying the skin image of the target region, and stitching the resulting multiple copies together.
  17. The apparatus according to any one of claims 10-16, further comprising:
    an organ image extraction unit configured to extract a first organ image corresponding to the target facial organ from the facial image to be processed;
    an organ image adjustment unit configured to adjust the shape and/or size of the target facial organ in the first organ image to obtain a second organ image;
    a first image addition unit configured to add the second organ image onto the facial smear image.
  18. The apparatus according to any one of claims 10-16, further comprising:
    a second image addition unit configured to transfer a preset animation onto the facial smear image to obtain a dynamic image.
  19. An electronic device, comprising:
    a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, implements the method according to any one of claims 1-9.
  20. A computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
PCT/CN2022/091681 2021-06-10 2022-05-09 Image processing method, apparatus, device, and storage medium WO2022257677A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/568,745 US20240273688A1 (en) 2021-06-10 2022-05-09 Method, apparatus, device and storage medium for image processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110646606.2A 2021-06-10 2021-06-10 Image processing method, apparatus, device, and storage medium
CN202110646606.2 2021-06-10

Publications (1)

Publication Number Publication Date
WO2022257677A1 true WO2022257677A1 (zh) 2022-12-15

Family

ID=77284117

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091681 WO2022257677A1 (zh) 2021-06-10 2022-05-09 图像处理方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US20240273688A1 (zh)
CN (1) CN113284219A (zh)
WO (1) WO2022257677A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284219A (zh) * 2021-06-10 2021-08-20 北京字跳网络技术有限公司 图像处理方法、装置、设备及存储介质

Citations (6)

Publication number Priority date Publication date Assignee Title
US20200294239A1 (en) * 2019-03-12 2020-09-17 General Electric Company Multi-stage segmentation using synthetic images
CN111833242A (zh) * 2020-07-17 2020-10-27 北京字节跳动网络技术有限公司 Face transformation method and apparatus, electronic device, and computer-readable medium
CN111968029A (zh) * 2020-08-19 2020-11-20 北京字节跳动网络技术有限公司 Expression transformation method and apparatus, electronic device, and computer-readable medium
CN112489169A (zh) * 2020-12-17 2021-03-12 脸萌有限公司 Portrait image processing method and apparatus
CN112633144A (zh) * 2020-12-21 2021-04-09 平安科技(深圳)有限公司 Face occlusion detection method, system, device, and storage medium
CN113284219A (zh) * 2021-06-10 2021-08-20 北京字跳网络技术有限公司 Image processing method, apparatus, device, and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN109345932B (zh) * 2018-08-29 2021-09-28 中国科学院自动化研究所 Medical model based on 3D printing and manufacturing method thereof
CN111507937B (zh) * 2020-03-03 2024-05-10 平安科技(深圳)有限公司 Image data generation method and apparatus
CN111369427B (zh) * 2020-03-06 2023-04-18 北京字节跳动网络技术有限公司 Image processing method and apparatus, readable medium, and electronic device

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20200294239A1 (en) * 2019-03-12 2020-09-17 General Electric Company Multi-stage segmentation using synthetic images
CN111833242A (zh) * 2020-07-17 2020-10-27 北京字节跳动网络技术有限公司 Face transformation method and apparatus, electronic device, and computer-readable medium
CN111968029A (zh) * 2020-08-19 2020-11-20 北京字节跳动网络技术有限公司 Expression transformation method and apparatus, electronic device, and computer-readable medium
CN112489169A (zh) * 2020-12-17 2021-03-12 脸萌有限公司 Portrait image processing method and apparatus
CN112633144A (zh) * 2020-12-21 2021-04-09 平安科技(深圳)有限公司 Face occlusion detection method, system, device, and storage medium
CN113284219A (zh) * 2021-06-10 2021-08-20 北京字跳网络技术有限公司 Image processing method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
US20240273688A1 (en) 2024-08-15
CN113284219A (zh) 2021-08-20

Similar Documents

Publication Publication Date Title
WO2022068451A1 (zh) Style image generation method, model training method, apparatus, device, and medium
WO2022171024A1 (zh) Image display method, apparatus, device, and medium
WO2020077914A1 (zh) Image processing method and apparatus, and hardware apparatus
WO2022042290A1 (zh) Virtual model processing method and apparatus, electronic device, and storage medium
WO2023125374A1 (zh) Image processing method and apparatus, electronic device, and storage medium
KR102248799B1 (ko) Target object display method and apparatus, and electronic device
WO2022088970A1 (zh) Image processing method, apparatus, device, and storage medium
WO2022100735A1 (zh) Video processing method and apparatus, electronic device, and storage medium
WO2023185671A1 (zh) Style image generation method, apparatus, device, and medium
WO2023273697A1 (zh) Image processing method, model training method, apparatus, electronic device, and medium
WO2022132032A1 (zh) Portrait image processing method and apparatus
WO2021088790A1 (zh) Display style adjustment method and apparatus for a target device
WO2023051244A1 (zh) Image generation method, apparatus, device, and storage medium
CN113706440A (zh) Image processing method and apparatus, computer device, and storage medium
WO2022257677A1 (zh) Image processing method, apparatus, device, and storage medium
WO2023009058A1 (zh) Image attribute classification method and apparatus, electronic device, medium, and program product
WO2024109668A1 (zh) Expression driving method, apparatus, device, and medium
WO2022262473A1 (zh) Image processing method, apparatus, device, and storage medium
WO2022252871A1 (zh) Video generation method, apparatus, device, and storage medium
WO2023103682A1 (zh) Image processing method, apparatus, device, and medium
WO2023207779A1 (zh) Image processing method, apparatus, device, and medium
WO2023098649A1 (zh) Video generation method, apparatus, device, and storage medium
WO2023040813A1 (zh) Face image processing method, apparatus, device, and medium
WO2023140787A2 (zh) Video processing method and apparatus, electronic device, storage medium, and program product
CN110619602A (zh) Image generation method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819277

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18568745

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22819277

Country of ref document: EP

Kind code of ref document: A1