WO2022262473A1 - Image processing method, apparatus, device and storage medium - Google Patents

Image processing method, apparatus, device and storage medium

Info

Publication number
WO2022262473A1
Authority
WO
WIPO (PCT)
Prior art keywords
expression
image
video
images
image processing
Prior art date
Application number
PCT/CN2022/091746
Other languages
English (en)
French (fr)
Inventor
徐盼盼
华淼
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2022262473A1 publication Critical patent/WO2022262473A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/77
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • Embodiments of the present disclosure relate to the technical field of image processing, and in particular to an image processing method, apparatus, device, and storage medium.
  • A first aspect of the present disclosure provides an image processing method, including: acquiring an expression image; adjusting the expression in the expression image based on a preset image processing model to generate a video with an expression change process; and displaying the video.
  • Optionally, adjusting the expression in the expression image based on the preset image processing model to generate a video with an expression change process includes: adjusting at least one of the degree of smiling or the degree of eye opening and closing in the expression image, to generate a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
  • Optionally, the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
  • Optionally, the expression change video of a sample object is obtained by using a preset migration model to migrate the expression change process in a preset video onto the expression image of the sample object.
  • Optionally, the migration model is trained based on images of multiple facial regions and the expression differences between those images, where the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
  • Optionally, the images of the facial regions are extracted based on facial key points in a preset facial image.
  • A second aspect of the present disclosure provides an image processing apparatus, including:
  • an expression image acquisition unit configured to acquire an expression image;
  • a video generation unit configured to adjust the expression in the expression image based on a preset image processing model and generate a video with an expression change process; and
  • a video display unit configured to display the video.
  • Optionally, the video generation unit adjusts at least one of the degree of smiling or the degree of eye opening and closing in the expression image based on a preset image processing model, and generates a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
  • Optionally, the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
  • Optionally, the expression change video of a sample object is obtained by using a preset migration model to migrate the expression change process in a preset video onto the expression image of the sample object.
  • Optionally, the migration model is trained based on images of multiple facial regions and the expression differences between those images, where the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
  • Optionally, the images of the facial regions are extracted based on facial key points in a preset facial image.
  • A third aspect of the present disclosure provides a terminal device, including: a memory; and a processor coupled to the memory, the processor being configured to execute the aforementioned image processing method based on instructions stored in the memory.
  • A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the aforementioned image processing method.
  • A fifth aspect of the present disclosure provides a computer program, including instructions that, when executed by a processor, cause the processor to perform the aforementioned image processing method.
  • A sixth aspect of the present disclosure provides a non-transitory computer program product, including instructions that, when executed by a processor, cause the processor to perform the aforementioned image processing method.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of an expression image provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of an adjusted expression image generated based on FIG. 2;
  • FIG. 4 is a schematic diagram of another adjusted expression image generated based on FIG. 2;
  • FIG. 5 is a schematic diagram of yet another adjusted expression image generated based on FIG. 2;
  • FIG. 6 is an image of a facial region provided by an embodiment of the present disclosure;
  • FIG. 7 is an image of a facial region provided by another embodiment of the present disclosure;
  • FIG. 8 is an image of a facial region provided by yet another embodiment of the present disclosure;
  • FIG. 9 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure; and
  • FIG. 10 is a schematic structural diagram of a terminal device provided by an embodiment of the present disclosure.
  • To solve the above technical problem, or at least partially solve it, embodiments of the present disclosure provide an image processing method, apparatus, device, and storage medium.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure; the method may be executed by a terminal device capable of image processing.
  • In practice, the terminal device may be, for example, a mobile phone, a tablet computer, a desktop computer, or an all-in-one machine, but is not limited to the devices listed here.
  • As shown in FIG. 1, the image processing method provided by the embodiment of the present disclosure includes steps S101 to S103.
  • Step S101: Acquire an expression image.
  • An expression image can be understood as an image of an object containing a certain expression.
  • The expression of the object may be, for example, smiling, serious, crying, or sad, but is not limited thereto.
  • The expression of the object can be presented through the shapes of the object's facial organs, and the facial organs may include the object's eyes, nose, mouth, eyebrows, and the like.
  • In the embodiments of the present disclosure, the expression image may be, for example, an expression image of a real person or a real animal, or of a cartoon character or a cartoon animal; in fact, the expression image referred to in this embodiment may be an expression image of any object that has an expression.
  • In the embodiments of the present disclosure, the expression image to be processed may be acquired in a preset manner; the preset manner may include shooting, downloading, drawing, or extracting, although it is not limited to these.
  • Shooting refers to photographing an object with a camera configured on the terminal device to obtain an expression image of the object.
  • Downloading refers to searching for and downloading an expression image from a remote database.
  • Drawing refers to using a drawing tool to draw a facial image containing a certain expression and using the drawn image as the expression image referred to in this embodiment; the facial image may be a realistic facial image or a cartoon facial image.
  • Extracting refers to extracting a frame image containing a certain expression from a video as the expression image referred to in this embodiment.
  • In some embodiments of the present disclosure, if a frame image is extracted from a video as the expression image and this is done automatically, the terminal device needs to determine whether the frame image contains a certain expression of the object.
  • In this case, the terminal device can deploy a facial recognition model, use it to identify whether a frame image in the video shows a certain expression, and then decide whether to extract that frame as the expression image.
  • The facial recognition model can be trained with facial images of a large number of objects; in practical applications, it can be a deep learning model of any of various known architectures.
  • In other embodiments, a certain frame in the video may instead be selected by the user as the expression image referred to in this embodiment.
  • In that case, the terminal extracts the specified image as the expression image according to the operation performed by the user.
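For the automatic variant described above, the following is a minimal sketch, assuming a hypothetical `expression_model` object whose `score` method returns an expression confidence; that interface stands in for whichever facial recognition model is deployed and is not part of the disclosure.

```python
# Hypothetical sketch, not part of the patent: pick an expression frame
# from a video with a facial recognition model.
import cv2  # OpenCV; pip install opencv-python


def extract_expression_frame(video_path, expression_model, threshold=0.8):
    """Return the first frame whose expression score exceeds `threshold`,
    or None if no frame qualifies."""
    capture = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # end of the video reached without a match
                return None
            # Assumed interface: score(frame) -> confidence in [0, 1] that
            # the frame contains the target expression (e.g. a smile).
            if expression_model.score(frame) >= threshold:
                return frame
    finally:
        capture.release()
```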
  • Step S102: Adjust the expression in the expression image based on the preset image processing model, and generate a video with an expression change process.
  • The preset image processing model is a model dedicated to adjusting the features of the facial organs in an expression image; by changing those features, it realizes facial expression changes and thereby produces a video of a specific expression change.
  • In the embodiments of the present disclosure, after the expression image is input into the preset image processing model, the model adjusts the pixels of the image region where at least one facial organ is located and obtains multiple adjusted expression images, in which at least one of the shape and the position of a given facial organ differs from image to image.
  • For example, in one implementation, the degree of the mouth's smile in the expression image can be adjusted based on the preset image processing model to generate expression images with different smile degrees; in another implementation, the degree of eye opening and closing can be adjusted to generate expression images with different degrees of eye opening and closing; in yet another implementation, the eyebrow features can be adjusted to generate expression images with different eyebrow features; and in still another implementation, the nose features can be adjusted to generate expression images with different nose features.
  • These are only examples, not the only possibilities: the object and manner of expression adjustment can be set as needed rather than being limited to a specific object or manner.
  • In some embodiments, at least some of the above manners can be combined to obtain expression images whose expression change process is represented by a combination of facial organs; for example, the degree of the mouth's smile and the degree of eye opening and closing can be adjusted at the same time to generate expression images in which both vary.
  • In the embodiments of the present disclosure, after the adjusted images are generated, the multiple expression images may be sorted by expression magnitude, from small to large or from large to small, to generate a video with a specific frame rate, as illustrated by the sketch below.
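A minimal sketch of this sorting-and-encoding step follows. It assumes each adjusted image arrives paired with an expression-magnitude score; that pairing is an assumption of the sketch, not something the disclosure specifies.

```python
# Minimal sketch (assumed interfaces): order adjusted expression images by
# expression magnitude and encode them at a fixed frame rate.
import cv2


def frames_to_video(scored_frames, out_path, fps=25, ascending=True):
    """scored_frames: list of (magnitude, BGR image) pairs; writes a video."""
    ordered = sorted(scored_frames, key=lambda pair: pair[0],
                     reverse=not ascending)
    height, width = ordered[0][1].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for _, frame in ordered:
        writer.write(frame)  # frames enter the video in magnitude order
    writer.release()
```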
  • FIG. 2 is a schematic diagram of an expression image provided by an embodiment of the present disclosure; FIG. 3 is a schematic diagram of an adjusted expression image generated based on FIG. 2; FIG. 4 is a schematic diagram of another adjusted expression image generated based on FIG. 2; and FIG. 5 is a schematic diagram of yet another adjusted expression image generated based on FIG. 2.
  • With reference to FIGS. 2-5, in one embodiment of the present disclosure, adjusting the expression in the expression image based on the preset image processing model in step S102 adjusts the shape of the mouth 02 of the object 01, specifically the upturned position of the mouth corner 021, to obtain multiple adjusted images with different mouth-corner change features; then, in step S102, combining the three adjusted images in the order of FIGS. 3 to 5 yields a video of the changing mouth corner 021, so that the video of the changing mouth corner represents the change in the object's degree of smiling.
  • This is only an example rather than a limitation: in other embodiments, the generated expression images may contain different expressions, in which case they can be sorted according to a specific order of expression change to generate a video with that expression change process. For example, if a serious expression image and a sad expression image are generated based on a smiling expression image, the images can be arranged in the order smiling, then serious, then sad, to generate a video showing the change from smiling to sad.
  • Step S103: Display the video.
  • After the video with the expression change process is generated, a display device can be used to present it.
  • For example, when the terminal device referred to in this embodiment is a smartphone, the smartphone executes the aforementioned steps S101 and S102 and displays the resulting video on its display screen.
  • With the image processing method provided by the embodiments of the present disclosure, once an expression image is obtained, a video with an expression change process can be generated from it.
  • The method of steps S101-S103 can be integrated into a specific application or software tool; installing that application or tool on a terminal device enables the device to generate, from a given expression image, a video with changing expression features, which makes the video more entertaining and thus improves the user experience. An end-to-end sketch follows.
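To make the S101-S103 flow concrete, here is a hedged end-to-end sketch. The `model.generate_frames` interface is hypothetical (the disclosure does not name it), and `frames_to_video` is the helper sketched after the sorting step above.

```python
# Hypothetical end-to-end sketch of steps S101-S103.
import cv2


def run_pipeline(image_path, model, out_path="expression_change.mp4"):
    face = cv2.imread(image_path)          # S101: acquire the expression image
    frames = model.generate_frames(face)   # S102: assumed model interface
    # Frames are already ordered, so the index serves as the magnitude score.
    frames_to_video(list(enumerate(frames)), out_path)
    for frame in frames:                   # S103: display the video
        cv2.imshow("expression change", frame)
        cv2.waitKey(40)                    # roughly 25 fps playback
    cv2.destroyAllWindows()
```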
  • The aforementioned step S102 uses a preset image processing model to adjust the expression in the expression image.
  • In some embodiments of the present disclosure, the preset image processing model can be obtained by training on expression images of sample objects and expression change videos of those sample objects, where a sample object is an object that can display a specific expression and the continuously changing expressions corresponding to it.
  • In some embodiments of the present disclosure, the expression image of a sample object and the corresponding expression change video can be obtained by sample collection.
  • For example, in some embodiments, an expression change video of the sample object may be obtained first, and a frame with a specific expression in that video may then be used as the expression image.
  • In still other embodiments of the present disclosure, the expression image of the sample object can be determined first, and a preset migration model can then be used to migrate the expression change process in a preset video onto that expression image, producing the expression change video of the sample object.
  • The preset video is a video with an expression change process; for example, it can be a video in which the degree of smiling changes, a video in which the degree of eye opening and closing changes, or a video in which both change.
  • By using the migration model to migrate the expression change process in the preset video onto multiple sample objects, expression change videos of multiple sample objects are obtained, so that every expression change video has the same expression change trend, such as a gradually increasing degree of smiling or of eye opening.
  • An image processing model trained on these expression change videos and expression images can then produce videos with that expression change trend.
  • Using the migration model with one preset video and the expression images of multiple sample objects yields, for all of those images, expression change videos with the same expression change characteristics; this simplifies the acquisition of expression change videos and improves the consistency of the expression change trends in the videos used to train the image processing model. A sketch of this data-generation step follows.
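A minimal sketch of this data-generation step, assuming a pretrained migration model whose `migrate(image, driving_frame)` call re-renders the sample image with the driving frame's expression; the interface is an assumption made for illustration, not the patent's API.

```python
# Assumed-interface sketch: build (expression image, expression change video)
# training pairs by migrating one driving video onto many sample images.
def build_training_pairs(sample_images, driving_frames, migration_model):
    pairs = []
    for image in sample_images:
        # Every sample inherits the same change trend as the driving video.
        video = [migration_model.migrate(image, frame)
                 for frame in driving_frames]
        pairs.append((image, video))
    return pairs
```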
  • In some embodiments of the present disclosure, the migration model used to obtain the expression change videos can be trained based on images of multiple facial regions and the expression differences between those images; the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
  • The images of the facial regions can be obtained based on facial key points in a preset facial image, where the facial key points may include one or more of eye key points, mouth key points, eyebrow key points, and forehead key points.
  • FIG. 6 is an image of a facial region provided by an embodiment of the present disclosure; FIG. 7 is an image of a facial region provided by another embodiment of the present disclosure; FIG. 8 is an image of a facial region provided by yet another embodiment of the present disclosure.
  • As shown in FIGS. 6-8, the images of the multiple facial regions are images in which the eye 030 of the object 03 has different degrees of opening, and the expression difference between the images is the difference in the opening degree of the eye 030.
  • If FIGS. 6-8 are used to train the migration model, one of the images is taken as input image A and another as output image B, the difference in eye openness between input image A and output image B is used as the expression difference a-b, and the parameters of the migration model F are optimized, yielding the migration model F used to obtain expression change videos.
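The following is a minimal PyTorch-style sketch of that optimization, assuming F is a network conditioned on the expression difference; the architecture, loss, and tensor shapes are all assumptions made for illustration, not the patent's method.

```python
# Illustrative training sketch (assumptions: model_f is an nn.Module taking
# an image tensor and a scalar expression difference; L1 reconstruction loss).
import itertools
import torch
import torch.nn.functional as nnf


def train_migration_model(model_f, images, openness_labels,
                          epochs=10, lr=1e-4):
    """images: list of CxHxW tensors of the same expression type;
    openness_labels: their labeled expression degrees (floats)."""
    optimiser = torch.optim.Adam(model_f.parameters(), lr=lr)
    for _ in range(epochs):
        # Every ordered pair (A, B) of same-type images is a training sample.
        for (img_a, a), (img_b, b) in itertools.permutations(
                zip(images, openness_labels), 2):
            diff = torch.tensor([a - b])        # expression difference a-b
            prediction = model_f(img_a, diff)   # F(A, a-b) should match B
            loss = nnf.l1_loss(prediction, img_b)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```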
  • In some embodiments of the present disclosure, the opening degree of the eye 030 in each of the facial region images in FIGS. 6-8 can be labeled manually, and the model computes the differences in eye openness between images from these labels. For example, if a value between 0.0 and 1.0 represents the degree of eye opening, the opening degree of the eye 030 in FIG. 6 can be labeled 0.5, that in FIG. 7 can be labeled 0.2, and that in FIG. 8 can be labeled 0.0; the differences in eye openness among the three images can then be computed by the migration model from the label information.
  • In other embodiments of the present disclosure, the expression differences used to train the migration model can also be obtained by processing the facial region images.
  • For example, for the facial region images shown in FIGS. 6-8, the key points of the eye 030 can be extracted, the expression feature parameter of each facial image can be determined from the key points, and the expression differences can be determined from the expression feature parameters of the facial images.
  • Specifically, an eye-region recognition model can be used to process FIGS. 6-8 to obtain eye region images, and an edge recognition algorithm can be applied to those images to identify the inner eye corner 031, the outer eye corner 032, the extreme point of the upper eyelid 033, and the extreme point of the lower eyelid 034 as the eye key points; the horizontal length of the eye is then determined from the inner eye corner 031 and the outer eye corner 032, and the vertical width of the eye 030 from the extreme points of the upper eyelid 033 and the lower eyelid 034; finally, the ratio of the eye's vertical width to its horizontal length is used as the expression feature parameter of the facial image, and the expression difference is determined from the expression feature parameters of the facial images.
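As a worked illustration of this feature parameter, here is a minimal sketch; the key points are assumed to be (x, y) coordinates from whichever detector is used.

```python
# Sketch of the expression feature parameter described above: the ratio of
# the eye's vertical width to its horizontal length.
import math


def eye_openness(inner_corner, outer_corner, upper_pole, lower_pole):
    """All arguments are (x, y) points; returns the width/length ratio."""
    horizontal_length = math.dist(inner_corner, outer_corner)
    vertical_width = math.dist(upper_pole, lower_pole)
    return vertical_width / horizontal_length  # larger value = more open eye


# The expression difference between two face images is then the difference
# of their feature parameters, e.g.:
# difference = eye_openness(*keypoints_a) - eye_openness(*keypoints_b)
```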
  • FIG. 9 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure; the apparatus may be understood as the above-mentioned terminal device or as a set of functional modules within it.
  • As shown in FIG. 9, the image processing apparatus 90 includes an expression image acquisition unit 901, a video generation unit 902, and a video display unit 903.
  • The expression image acquisition unit 901 is configured to acquire an expression image; the video generation unit 902 is configured to adjust the expression in the expression image based on a preset image processing model and generate a video with an expression change process; and the video display unit 903 is configured to display the video.
  • In some embodiments of the present disclosure, the video generation unit 902 adjusts at least one of the degree of smiling or the degree of eye opening and closing in the expression image based on a preset image processing model, and generates a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
  • In some embodiments of the present disclosure, the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
  • In still other embodiments, the expression change video of a sample object is obtained by using a preset migration model to migrate the expression change process in a preset video onto the expression image of the sample object.
  • In some embodiments, the migration model is trained based on images of multiple facial regions and the expression differences between those images, where the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
  • In some embodiments, the images of the facial regions are extracted based on facial key points in a preset facial image.
  • The apparatus provided in this embodiment can execute the method of any one of the above method embodiments; its manner of execution and beneficial effects are similar and are not repeated here.
  • An embodiment of the present disclosure also provides a terminal device that includes a processor and a memory, where the memory stores a computer program which, when executed by the processor, can implement the method of any one of the embodiments in FIGS. 1-8 above.
  • For example, FIG. 10 is a schematic structural diagram of a terminal device provided by an embodiment of the present disclosure.
  • FIG. 10 shows a schematic structural diagram of a terminal device 1000 suitable for implementing an embodiment of the present disclosure.
  • The terminal device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • The terminal device shown in FIG. 10 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • As shown in FIG. 10, the terminal device 1000 may include a processing device (such as a central processing unit or a graphics processing unit) 1001, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data required for the operation of the terminal device 1000.
  • The processing device 1001, the ROM 1002, and the RAM 1003 are connected to one another through a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004.
  • Generally, the following may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1007 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 1008 including, for example, a magnetic tape and a hard disk; and a communication device 1009.
  • The communication device 1009 may allow the terminal device 1000 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows a terminal device 1000 with various components, it should be understood that implementing or possessing all of the illustrated components is not required; more or fewer components may alternatively be implemented or provided.
  • In particular, according to the embodiments of the present disclosure, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart.
  • In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002.
  • When the computer program is executed by the processing device 1001, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • It should be noted that the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two.
  • A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device.
  • In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
  • In some implementations, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • The above-mentioned computer-readable medium may be included in the above-mentioned terminal device, or it may exist independently without being assembled into the terminal device.
  • The above-mentioned computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquire an expression image; adjust the expression in the expression image based on a preset image processing model to generate a video with an expression change process; and display the video.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using the services of an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a unit does not, in some cases, constitute a limitation of the unit itself.
  • The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
  • In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device.
  • A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • Embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program which, when executed by a processor, can implement the method of any one of the above method embodiments; its manner of execution and beneficial effects are similar and are not repeated here.

Abstract

Embodiments of the present disclosure relate to an image processing method, apparatus, device, and storage medium capable of, after an expression image is acquired, adjusting the expression in the expression image based on a preset image processing model, generating a video with an expression change process, and then presenting the video to the user.

Description

Image processing method, apparatus, device and storage medium
Cross-reference to related applications
This application is based on and claims priority to the CN application No. 202110668031.4 filed on June 16, 2021, the disclosure of which is incorporated herein by reference in its entirety.
Technical field
Embodiments of the present disclosure relate to the technical field of image processing, and in particular to an image processing method, apparatus, device, and storage medium.
Background
In the related art, users can record their lives in videos, photos, and the like and upload them to video applications for other video consumers to watch. With the development of video applications, however, simple video or picture sharing can no longer satisfy growing user demand.
Summary
A first aspect of the present disclosure provides an image processing method, including:
acquiring an expression image;
adjusting the expression in the expression image based on a preset image processing model to generate a video with an expression change process; and
displaying the video.
Optionally, adjusting the expression in the expression image based on the preset image processing model to generate a video with an expression change process includes:
adjusting at least one of the degree of smiling or the degree of eye opening and closing in the expression image based on the preset image processing model, to generate a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
Optionally, the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
Optionally, the expression change video of a sample object is obtained by using a preset migration model to migrate the expression change process in a preset video onto the expression image of the sample object.
Optionally, the migration model is trained based on images of multiple facial regions and the expression differences between those images, where the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
Optionally, the images of the facial regions are extracted based on facial key points in a preset facial image.
A second aspect of the present disclosure provides an image processing apparatus, including:
an expression image acquisition unit configured to acquire an expression image;
a video generation unit configured to adjust the expression in the expression image based on a preset image processing model and generate a video with an expression change process; and
a video display unit configured to display the video.
Optionally, the video generation unit adjusts at least one of the degree of smiling or the degree of eye opening and closing in the expression image based on a preset image processing model, and generates a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
Optionally, the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
Optionally, the expression change video of a sample object is obtained by using a preset migration model to migrate the expression change process in a preset video onto the expression image of the sample object.
Optionally, the migration model is trained based on images of multiple facial regions and the expression differences between those images, where the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
Optionally, the images of the facial regions are extracted based on facial key points in a preset facial image.
A third aspect of the present disclosure provides a terminal device, including:
a memory and a processor coupled to the memory, the processor being configured to execute the aforementioned image processing method based on instructions stored in the memory.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the aforementioned image processing method.
A fifth aspect of the present disclosure provides a computer program, including: instructions that, when executed by a processor, cause the processor to perform the aforementioned image processing method.
A sixth aspect of the present disclosure provides a non-transitory computer program product, including instructions that, when executed by a processor, cause the processor to perform the aforementioned image processing method.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
To explain the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below; obviously, those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an expression image provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an adjusted expression image generated based on FIG. 2;
FIG. 4 is a schematic diagram of another adjusted expression image generated based on FIG. 2;
FIG. 5 is a schematic diagram of yet another adjusted expression image generated based on FIG. 2;
FIG. 6 is an image of a facial region provided by an embodiment of the present disclosure;
FIG. 7 is an image of a facial region provided by another embodiment of the present disclosure;
FIG. 8 is an image of a facial region provided by yet another embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a terminal device provided by an embodiment of the present disclosure.
Detailed description
In order that the above objects, features, and advantages of the present disclosure may be more clearly understood, the solutions of the present disclosure are further described below. It should be noted that, where there is no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure can also be implemented in ways other than those described here; obviously, the embodiments in this specification are only a part of the embodiments of the present disclosure, not all of them.
How to process videos or images to make them more entertaining is a technical problem that urgently needs to be solved. To solve the above technical problem, or at least partially solve it, embodiments of the present disclosure provide an image processing method, apparatus, device, and storage medium.
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure; the method can be executed by a terminal device with image processing capability. In practice, the terminal device may be, for example, a mobile phone, a tablet computer, a desktop computer, or an all-in-one machine, but is not limited to the devices listed here.
As shown in FIG. 1, the image processing method provided by the embodiment of the present disclosure includes steps S101 to S103.
Step S101: Acquire an expression image.
An expression image can be understood as an image of an object containing a certain expression. The expression of the object may be, for example, smiling, serious, crying, or sad, but is not limited thereto. The expression of the object can be presented through the shapes of the object's facial organs, which may include the object's eyes, nose, mouth, eyebrows, and the like.
In the embodiments of the present disclosure, the expression image may be, for example, an expression image of a real person or animal, or of a cartoon character or cartoon animal, but it is not limited thereto; in fact, the expression image referred to in this embodiment may be an expression image of any object that has an expression.
In the embodiments of the present disclosure, the expression image to be processed may be acquired in a preset manner, which may include shooting, downloading, drawing, or extracting, although it is not limited to these.
Shooting refers to photographing an object with a camera configured on the terminal device to obtain an expression image of the object.
Downloading refers to searching for and downloading an expression image from a remote database.
Drawing refers to using a drawing tool to draw a facial image containing a certain expression and using the drawn facial image as the expression image referred to in this embodiment; the facial image may be a realistic facial image or a cartoon facial image.
Extracting refers to extracting a frame image containing a certain expression from a video as the expression image referred to in this embodiment.
In some embodiments of the present disclosure, if a frame image is extracted from a video as the expression image and the foregoing operation is performed automatically by the terminal device, the terminal device needs to determine whether the frame image contains a certain expression of the object. In this case, the terminal device can deploy a facial recognition model, use it to identify whether a frame image in the video shows a certain expression, and then decide whether to extract that frame as the expression image. The facial recognition model can be trained with facial images of a large number of objects; in practical applications, it can be a deep learning model of any of various known architectures.
In other embodiments, a certain frame in the video may instead be selected by the user as the expression image referred to in this embodiment. In that case, the terminal extracts the specified image as the expression image according to the operation performed by the user.
Step S102: Adjust the expression in the expression image based on the preset image processing model, and generate a video with an expression change process.
The preset image processing model is a model dedicated to adjusting the features of the facial organs in an expression image; by changing those features, it realizes facial expression changes and thereby produces a video of a specific expression change.
In the embodiments of the present disclosure, after the expression image is input into the preset image processing model, the model adjusts the pixels of the image region where at least one facial organ is located and obtains multiple adjusted expression images, in which at least one of the shape and the position of a given facial organ differs from image to image.
For example, in one implementation, the degree of the mouth's smile in the expression image can be adjusted based on the preset image processing model to generate expression images with different smile degrees; in another implementation, the degree of eye opening and closing can be adjusted to generate expression images with different degrees of eye opening and closing; in yet another implementation, the eyebrow features can be adjusted to generate expression images with different eyebrow features; and in still another implementation, the nose features can be adjusted to generate expression images with different nose features.
Of course, the above is only an example rather than the only limitation on the object and manner of expression adjustment; in fact, these can be set as needed rather than being limited to a specific object or manner. For example, in some embodiments, at least some of the above manners can be combined to obtain expression images whose expression change process is represented by a combination of facial organs, for instance by adjusting the degree of the mouth's smile and the degree of eye opening and closing at the same time to generate expression images in which both vary.
In the embodiments of the present disclosure, after the adjusted images are generated, the multiple generated expression images can be sorted by expression magnitude, from small to large or from large to small, to generate a video with a specific frame rate.
FIG. 2 is a schematic diagram of an expression image provided by an embodiment of the present disclosure; FIG. 3 is a schematic diagram of an adjusted expression image generated based on FIG. 2; FIG. 4 is a schematic diagram of another adjusted expression image generated based on FIG. 2; FIG. 5 is a schematic diagram of yet another adjusted expression image generated based on FIG. 2.
With reference to FIGS. 2-5, in one embodiment of the present disclosure, adjusting the expression in the expression image based on the preset image processing model in step S102 adjusts the shape of the mouth 02 of the object 01, specifically the upturned position of the mouth corner 021, to obtain multiple adjusted images with different mouth-corner change features; then, in step S102, combining the three adjusted images in the order of FIGS. 3 to 5 yields a video of the changing mouth corner 021, so that the video of the changing mouth corner represents the change in the object's degree of smiling.
Of course, this is only an example rather than the only limitation; in other embodiments, the generated expression images may also contain different expressions, in which case the expression images can be sorted according to a specific order of expression change to generate a video with that expression change process. For example, if a serious expression image and a sad expression image are generated based on a smiling expression image, the expression images can be arranged in the order smiling, then serious, then sad, to generate a video showing the change from smiling to sad.
Step S103: Display the video.
After the video with the expression change process is generated, a display device can be used to present it. For example, when the terminal device referred to in this embodiment is a smartphone, the smartphone executes the aforementioned steps S101 and S102 and displays the video obtained from them on its display screen.
With the image processing method provided by the embodiments of the present disclosure, once an expression image is obtained, a video with an expression change process can be generated from it. The method of steps S101-S103 can be integrated into a specific application or software tool; installing that application or tool on a terminal device enables the device to generate, from a given expression image, a video with changing expression features, which makes the video more entertaining and thus improves the user experience.
The aforementioned step S102 uses a preset image processing model to adjust the expression in the expression image. In some embodiments of the present disclosure, the preset image processing model can be obtained by training on expression images of sample objects and expression change videos of those sample objects, where a sample object is an object that can display a specific expression and the continuously changing expressions corresponding to it.
In some embodiments of the present disclosure, the expression image of a sample object and the corresponding expression change video can be obtained by sample collection.
For example, in some embodiments, an expression change video of the sample object can be obtained first, and a frame with a specific expression in that video can then be used as the expression image.
As another example, in other embodiments, an expression image of an object can be captured first; prompt information is then output to instruct the object how to change its expression, and a video is recorded while the user performs the expression change; finally, the recorded video is used as the expression change video corresponding to the aforementioned expression image.
In still other embodiments of the present disclosure, the expression image of the sample object can be determined first, and a preset migration model can then be used to migrate the expression change process in a preset video onto the expression image of the sample object, obtaining the expression change video of the sample object.
The preset video is a video with an expression change process; for example, it can be a video in which the degree of smiling changes, a video in which the degree of eye opening and closing changes, or a video in which both change.
By using the migration model to migrate the expression change process in the preset video onto multiple sample objects, expression change videos of multiple sample objects are obtained, so that every expression change video has the same expression change trend, such as a gradually increasing degree of smiling or of eye opening. An image processing model trained on these expression change videos and expression images can produce videos with that expression change trend.
Using the migration model with the preset video and the expression images of multiple sample objects yields, for all of those images, expression change videos with the same expression change characteristics; this simplifies the acquisition of expression change videos and improves the consistency of the expression change trends in the videos used to train the image processing model.
In some embodiments of the present disclosure, the migration model used to obtain the expression change videos can be trained based on images of multiple facial regions and the expression differences between those images; the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images. The images of the facial regions can be obtained based on facial key points in a preset facial image, where the facial key points may include one or more of eye key points, mouth key points, eyebrow key points, and forehead key points.
FIG. 6 is an image of a facial region provided by an embodiment of the present disclosure; FIG. 7 is an image of a facial region provided by another embodiment of the present disclosure; FIG. 8 is an image of a facial region provided by yet another embodiment of the present disclosure.
As shown in FIGS. 6-8, in one embodiment of the present disclosure, the images of the multiple facial regions are images in which the eye 030 of the object 03 has different degrees of opening, and the expression difference between the images is the difference in the opening degree of the eye 030.
If FIGS. 6-8 are used to train the migration model, one of the images is taken as input image A and another as output image B, the difference in eye openness between input image A and output image B is used as the expression difference a-b, and the parameters of the migration model F are optimized, yielding the migration model F used to obtain expression change videos.
In some embodiments of the present disclosure, the opening degree of the eye 030 in each of the facial region images in FIGS. 6-8 can be labeled manually, and the model computes the differences in eye openness between images from the labels; for example, if a value between 0.0 and 1.0 represents the degree of eye opening, the opening degree of the eye 030 in FIG. 6 can be labeled 0.5, that in FIG. 7 can be labeled 0.2, and that in FIG. 8 can be labeled 0.0, and the differences in eye openness among the three images can be computed by the migration model from the label information.
In other embodiments of the present disclosure, the expression differences used to train the migration model can also be obtained by processing the facial region images. For example, for the images of the facial regions shown in FIGS. 6-8, the key points of the eye 030 can be extracted, the expression feature parameter of each facial image can be determined from the key points, and the expression differences can be determined from the expression feature parameters of the facial images.
Specifically, an eye-region recognition model can be used to process FIGS. 6-8 to obtain eye region images, and an edge recognition algorithm can be applied to those images to identify the inner eye corner 031, the outer eye corner 032, the extreme point of the upper eyelid 033, and the extreme point of the lower eyelid 034 as the eye key points; the horizontal length of the eye is then determined from the inner eye corner 031 and the outer eye corner 032, and the vertical width of the eye 030 from the extreme points of the upper eyelid 033 and the lower eyelid 034; finally, the ratio of the eye's vertical width to its horizontal length is used as the expression feature parameter of the facial image, and the expression differences are determined from the expression feature parameters of the facial images.
The foregoing embodiments of the present disclosure are described by taking three input facial region images for training the migration model as an example; in other applications, the number of input facial region images used to train the migration model is not limited to three.
FIG. 9 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure; the apparatus may be understood as the above-mentioned terminal device or as a set of functional modules within it. As shown in FIG. 9, the image processing apparatus 90 includes an expression image acquisition unit 901, a video generation unit 902, and a video display unit 903.
The expression image acquisition unit 901 is configured to acquire an expression image; the video generation unit 902 is configured to adjust the expression in the expression image based on a preset image processing model and generate a video with an expression change process; the video display unit 903 is configured to display the video.
In some embodiments of the present disclosure, the video generation unit 902 adjusts at least one of the degree of smiling or the degree of eye opening and closing in the expression image based on a preset image processing model, and generates a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
In some embodiments of the present disclosure, the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
In still other embodiments of the present disclosure, the expression change video of a sample object is obtained by using a preset migration model to migrate the expression change process in a preset video onto the expression image of the sample object.
In some embodiments of the present disclosure, the migration model is trained based on images of multiple facial regions and the expression differences between those images, where the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
In some embodiments of the present disclosure, the images of the facial regions are extracted based on facial key points in a preset facial image.
The apparatus provided in this embodiment can execute the method of any one of the above method embodiments; its manner of execution and beneficial effects are similar and are not repeated here.
Embodiments of the present disclosure also provide a terminal device that includes a processor and a memory, where the memory stores a computer program which, when executed by the processor, can implement the method of any one of the embodiments in FIGS. 1-8 above.
For example, FIG. 10 is a schematic structural diagram of a terminal device provided by an embodiment of the present disclosure. Referring to FIG. 10, it shows a schematic structural diagram of a terminal device 1000 suitable for implementing an embodiment of the present disclosure. The terminal device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The terminal device shown in FIG. 10 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 10, the terminal device 1000 may include a processing device (such as a central processing unit or a graphics processing unit) 1001, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data required for the operation of the terminal device 1000. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to one another through a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1007 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 1008 including, for example, a magnetic tape and a hard disk; and a communication device 1009. The communication device 1009 may allow the terminal device 1000 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows a terminal device 1000 with various components, it should be understood that implementing or possessing all of the illustrated components is not required; more or fewer components may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code; such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some implementations, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The above-mentioned computer-readable medium may be included in the above-mentioned terminal device, or it may exist independently without being assembled into the terminal device.
The above-mentioned computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquire an expression image; adjust the expression in the expression image based on a preset image processing model to generate a video with an expression change process; and display the video.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using the services of an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a unit does not, in some cases, constitute a limitation of the unit itself.
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program which, when executed by a processor, can implement the method of any one of the above method embodiments; its manner of execution and beneficial effects are similar and are not repeated here.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure will not be limited to the embodiments described herein, but should conform to the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

  1. An image processing method, comprising:
    acquiring an expression image;
    adjusting the expression in the expression image based on a preset image processing model to generate a video with an expression change process; and
    displaying the video.
  2. The method according to claim 1, wherein adjusting the expression in the expression image based on the preset image processing model to generate a video with an expression change process comprises:
    adjusting at least one of a degree of smiling or a degree of eye opening and closing in the expression image based on the preset image processing model, to generate a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
  3. The method according to claim 1, wherein the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
  4. The method according to claim 3, wherein the expression change video of a sample object is obtained by using a preset migration model to migrate an expression change process in a preset video onto the expression image of the sample object.
  5. The method according to claim 4, wherein the migration model is trained based on images of multiple facial regions and expression differences between the images of the multiple facial regions, wherein the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
  6. The method according to claim 5, wherein the images of the facial regions are extracted based on facial key points in a preset facial image.
  7. An image processing apparatus, comprising:
    an expression image acquisition unit configured to acquire an expression image;
    a video generation unit configured to adjust the expression in the expression image based on a preset image processing model and generate a video with an expression change process; and
    a video display unit configured to display the video.
  8. The apparatus according to claim 7, wherein the video generation unit adjusts at least one of a degree of smiling or a degree of eye opening and closing in the expression image based on a preset image processing model, and generates a video with a change process of at least one of the degree of smiling or the degree of eye opening and closing.
  9. The apparatus according to claim 7, wherein the image processing model is trained based on expression images of sample objects and expression change videos of the sample objects.
  10. The apparatus according to claim 9, wherein the expression change video of a sample object is obtained by using a preset migration model to migrate an expression change process in a preset video onto the expression image of the sample object.
  11. The apparatus according to claim 10, wherein the migration model is trained based on images of multiple facial regions and expression differences between the images of the multiple facial regions, wherein the images of the multiple facial regions are images of the same type of expression, and the same type of expression has a different expression degree in different images.
  12. The apparatus according to claim 11, wherein the images of the facial regions are extracted based on facial key points in a preset facial image.
  13. A terminal device, comprising:
    a memory; and
    a processor coupled to the memory, the processor being configured to execute the image processing method according to any one of claims 1-6 based on instructions stored in the memory.
  14. A non-transitory computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
  15. A computer program, comprising:
    instructions that, when executed by a processor, cause the processor to perform the image processing method according to any one of claims 1-6.
  16. A non-transitory computer program product, comprising instructions that, when executed by a processor, cause the processor to perform the image processing method according to any one of claims 1-6.
PCT/CN2022/091746 2021-06-16 2022-05-09 图像处理方法、装置、设备及存储介质 WO2022262473A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110668031.4 2021-06-16
CN202110668031.4A CN113409208A (zh) 2021-06-16 2021-06-16 图像处理方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022262473A1 true WO2022262473A1 (zh) 2022-12-22

Family

ID=77684456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091746 WO2022262473A1 (zh) 2021-06-16 2022-05-09 图像处理方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN113409208A (zh)
WO (1) WO2022262473A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409208A (zh) * 2021-06-16 2021-09-17 北京字跳网络技术有限公司 图像处理方法、装置、设备及存储介质


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102387570B1 (ko) * 2016-12-16 2022-04-18 삼성전자주식회사 표정 생성 방법, 표정 생성 장치 및 표정 생성을 위한 학습 방법
CN111274447A (zh) * 2020-01-13 2020-06-12 深圳壹账通智能科技有限公司 基于视频的目标表情生成方法、装置、介质、电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197533A (zh) * 2017-12-19 2018-06-22 迈巨(深圳)科技有限公司 一种基于用户表情的人机交互方法、电子设备及存储介质
US20200118269A1 (en) * 2018-10-15 2020-04-16 Siemens Healthcare Gmbh Evaluating a condition of a person
CN111383307A (zh) * 2018-12-29 2020-07-07 上海智臻智能网络科技股份有限公司 基于人像的视频生成方法及设备、存储介质
CN111401101A (zh) * 2018-12-29 2020-07-10 上海智臻智能网络科技股份有限公司 基于人像的视频生成系统
CN111432233A (zh) * 2020-03-20 2020-07-17 北京字节跳动网络技术有限公司 用于生成视频的方法、装置、设备和介质
CN113409208A (zh) * 2021-06-16 2021-09-17 北京字跳网络技术有限公司 图像处理方法、装置、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN LIJIE, HUANG WENBING, GAN CHUANG, HUANG JUNZHOU, GONG BOQING: "Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation", PROCEEDINGS OF THE AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, vol. 33, no. 01, 31 December 2019 (2019-12-31), pages 3510 - 3517, XP093015572, ISSN: 2159-5399, DOI: 10.1609/aaai.v33i01.33013510 *

Also Published As

Publication number Publication date
CN113409208A (zh) 2021-09-17

Similar Documents

Publication Publication Date Title
US11973732B2 (en) Messaging system with avatar generation
CN110766777B (zh) 虚拟形象的生成方法、装置、电子设备及存储介质
WO2022083383A1 (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
WO2022166872A1 (zh) 一种特效展示方法、装置、设备及介质
CN111476871B (zh) 用于生成视频的方法和装置
WO2023125374A1 (zh) 图像处理方法、装置、电子设备及存储介质
US20240013359A1 (en) Image processing method, model training method, apparatus, medium and device
WO2022037602A1 (zh) 表情变换方法、装置、电子设备和计算机可读介质
WO2023045710A1 (zh) 多媒体显示及匹配方法、装置、设备及介质
WO2021088790A1 (zh) 用于目标设备的显示样式调整方法和装置
WO2021190625A1 (zh) 拍摄方法和设备
WO2022171024A1 (zh) 图像显示方法、装置、设备及介质
WO2023051244A1 (zh) 图像生成方法、装置、设备及存储介质
US20240040069A1 (en) Image special effect configuration method, image recognition method, apparatus and electronic device
WO2023109829A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2023273697A1 (zh) 图像处理方法、模型训练方法、装置、电子设备及介质
CN115311178A (zh) 图像拼接方法、装置、设备及介质
WO2022262473A1 (zh) 图像处理方法、装置、设备及存储介质
WO2022252871A1 (zh) 视频生成方法、装置、设备及存储介质
WO2023143118A1 (zh) 图像处理方法、装置、设备及介质
WO2022257677A1 (zh) 图像处理方法、装置、设备及存储介质
WO2023241377A1 (zh) 视频数据的处理方法、装置、设备、系统及存储介质
WO2023138441A1 (zh) 视频生成方法、装置、设备及存储介质
WO2023140787A2 (zh) 视频的处理方法、装置、电子设备、存储介质和程序产品
CN113905177B (zh) 视频生成方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22823956

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18569838

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE