WO2021004114A1 - Automatic meme generation method and apparatus, computer device and storage medium


Info

Publication number: WO2021004114A1
Authority: WO (WIPO/PCT)
Application number: PCT/CN2020/085573
Other languages: French (fr), Chinese (zh)
Prior art keywords: emoticon, facial features, facial, face image, package
Inventor: 向纯玉
Original Assignee (applicant): 深圳壹账通智能科技有限公司
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2021004114A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are an automatic meme generation method and apparatus, a computer device and a storage medium. The method comprises: extracting a facial micro-expression from a face image, and obtaining an expression tag of the face image according to the facial micro-expression (S20); matching a meme picture from a preset meme library according to the expression tag of the face image, and determining the positions of the five sense organs in the matched meme picture (S30); and extracting the facial features from the face image and overlaying them onto the positions of the five sense organs in the meme picture matched from the preset meme library, so as to generate a personalized meme (S40). The method is easy to operate, and because the face in the generated personalized meme is consistent with the original meme picture, the facial features of the face image fuse with the original meme picture more effectively and consistently, which improves the user experience as well as user activity and engagement.

Description

Method, apparatus, computer device and storage medium for automatically generating emoticon packages
This application claims priority to the Chinese patent application No. 201910602401.7, entitled "Method, apparatus, computer device and storage medium for automatically generating emoticon packages" and filed with the Chinese Patent Office on July 5, 2019, the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of micro-expression recognition in artificial intelligence, and in particular to a method, apparatus, computer device and storage medium for automatically generating emoticon packages.
Background
With the development of communication technology, mobile phones are used ever more widely, which has greatly expanded people's social circles. With this broader social reach, users increasingly communicate through mobile instant messaging software. To support communication between users, many social applications provide a chat function: users can converse through a chat box or send each other a wide variety of emoticon packages to express emotions that are difficult to put into words.
In practice, most of the emoticon packages users send are obtained from third parties that specialize in producing them: the third party generates emoticon packages from the material it has collected and publishes them online, and users pick the packages they are interested in from what the third party provides. However, the inventor realized that in this situation the user passively accepts or passively selects emoticon packages and often cannot achieve the effect they actually want.
Summary of the Invention
Embodiments of this application provide a method, an apparatus, a computer device and a storage medium for automatically generating emoticon packages. The application is simple to operate, the generated personalized emoticon packages show better fusion and consistency, the user experience is improved, and user activity and engagement are improved as well.
A method for automatically generating an emoticon package includes:
acquiring a face image;
extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the position of the five-sense-organ region (i.e. the facial-feature region) of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one five-sense-organ region, and each emoticon package picture is associated with at least one expression tag;
extracting facial features from the face image, and overlaying the facial features onto the position of the five-sense-organ region of the emoticon package picture matched from the preset emoticon package library, to generate a personalized emoticon package.
An apparatus for automatically generating an emoticon package includes:
an acquisition module, configured to acquire a face image;
an extraction module, configured to extract a facial micro-expression from the face image, and to obtain an expression tag of the face image according to the facial micro-expression;
a matching module, configured to match an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and to determine the position of the five-sense-organ region of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one five-sense-organ region, and each emoticon package picture is associated with at least one expression tag;
an overlay module, configured to extract facial features from the face image, and to overlay the facial features onto the position of the five-sense-organ region of the emoticon package picture matched from the preset emoticon package library, to generate a personalized emoticon package.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, and the processor implements the above method for automatically generating an emoticon package when executing the computer-readable instructions.
A computer-readable storage medium stores computer-readable instructions which, when executed by a processor, implement the above method for automatically generating an emoticon package.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of an application environment of the method for automatically generating emoticon packages in an embodiment of this application;
Fig. 2 is a flowchart of the method for automatically generating emoticon packages in an embodiment of this application;
Fig. 3 is a flowchart of step S20 of the method for automatically generating emoticon packages in an embodiment of this application;
Fig. 4 is a flowchart of step S30 of the method for automatically generating emoticon packages in an embodiment of this application;
Fig. 5 is a flowchart of step S40 of the method for automatically generating emoticon packages in an embodiment of this application;
Fig. 6 is a flowchart of step S407 of the method for automatically generating emoticon packages in an embodiment of this application;
Fig. 7 is a schematic block diagram of the apparatus for automatically generating emoticon packages in an embodiment of this application;
Fig. 8 is a schematic block diagram of the extraction module of the apparatus for automatically generating emoticon packages in an embodiment of this application;
Fig. 9 is a schematic diagram of a computer device in an embodiment of this application.
Detailed Description
The method for automatically generating emoticon packages provided by this application can be applied in the application environment shown in Fig. 1, where a client (computer device) communicates with a server over a network. The client (computer device) includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras and portable wearable devices. The server can be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a method for automatically generating emoticon packages is provided. Taking the method applied to the server in Fig. 1 as an example, it includes the following steps S10-S40.
S10: acquire a face image.
The face image is an image containing part or all of a person's facial features. The face image may be captured by the user with a camera device and uploaded to the server, or it may be stored in a database in advance so that the server can retrieve it from the database whenever needed.
S20: extract a facial micro-expression from the face image, and obtain an expression tag of the face image according to the facial micro-expression.
That is, in this embodiment, the facial micro-expression in the face image is extracted first, the expression tag corresponding to that micro-expression is determined, and that expression tag is recorded as the expression tag of the face image.
In one embodiment, as shown in Fig. 3, step S20, i.e. extracting a facial micro-expression from the face image and obtaining the expression tag of the face image according to the facial micro-expression, includes the following steps.
S201: extract all action unit types of the facial micro-expression from the face image.
The action unit types may include, but are not limited to, the internationally used action units (AU) listed in Table 1 below, as well as eye movements. The eye movements refer to the different motions and gaze directions of the eyeballs, such as looking left, right, up, down or to the upper right, and the action units corresponding to the different eye motions and gaze directions may also include a judgment of the magnitude of the eye movement.
S202: determine the micro-expression type of the face image according to all the action unit types extracted from the face image.
That is, the database stores in advance the action unit types corresponding to each micro-expression type (for example, crying, laughing or being angry), and each micro-expression type corresponds to several combinations of action unit types. For example, the micro-expression type "laugh" corresponds to at least the following combinations: mouth corners raised (AU12 in Table 1); mouth corners raised (AU12) plus outer brow raised (AU2); mouth corners raised (AU12) plus lips stretched (AU20) plus upper and lower lips parted (AU25); and so on. Therefore, the micro-expression type can be determined simply by comparing all the action unit types extracted in step S201 with the action unit types stored in the database for each micro-expression type. Understandably, in one aspect of this embodiment, as long as all the action unit types extracted in step S201 contain all the action unit types corresponding to a micro-expression type stored in the database (that is, the extracted action unit types may also include other action units), the micro-expression is considered to be of that type. In another aspect of this embodiment, the micro-expression of the face image may instead be considered to be of a given type only when all the action unit types extracted in step S201 correspond one-to-one to the action unit types and sequence stored for that micro-expression type (with no action unit more or less).
Table 1. Partial list of action units (AU)
AU label   AU description
AU1        Inner brow raised
AU2        Outer brow raised
AU4        Brows lowered
AU5        Upper eyelid raised
AU6        Cheeks raised
AU7        Eyelids tightened
AU9        Nose wrinkled
AU10       Upper lip raised
AU12       Mouth corners raised
AU14       Mouth corners tightened
AU15       Mouth corners pulled down
AU16       Lower lip depressed
AU17       Chin tightened
AU18       Lips puckered
AU20       Lips stretched
AU23       Lips contracted
AU24       Lips pressed together
AU25       Upper and lower lips parted
AU26       Jaw dropped
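By way of illustration only (the patent itself contains no code), the lookup of steps S201-S202 can be sketched in Python as a subset check of the detected action units against stored AU combinations. The dictionary contents, the "laugh" combinations and the function name below are assumptions made for the sketch; detecting the action units in the first place is taken as given.

```python
# Illustrative sketch only: the AU table and the micro-expression lookup of
# steps S201-S202. Action-unit detection is assumed to be done elsewhere.

# A few entries from Table 1, keyed by AU label.
AU_DESCRIPTIONS = {
    "AU2": "outer brow raised",
    "AU12": "mouth corners raised",
    "AU20": "lips stretched",
    "AU25": "upper and lower lips parted",
}

# Hypothetical database of micro-expression types: each type stores one or
# more AU combinations, as described above for the type "laugh".
MICRO_EXPRESSION_COMBINATIONS = {
    "laugh": [
        {"AU12"},
        {"AU12", "AU2"},
        {"AU12", "AU20", "AU25"},
    ],
    # other types such as "cry" or "angry" would be stored the same way
}

def infer_micro_expression(detected_aus):
    """Return the first micro-expression type one of whose stored AU
    combinations is fully contained in the detected AUs (the variant in
    which extra detected AUs are allowed)."""
    for expression_type, combinations in MICRO_EXPRESSION_COMBINATIONS.items():
        if any(combination <= detected_aus for combination in combinations):
            return expression_type
    return None

if __name__ == "__main__":
    print(infer_micro_expression({"AU12", "AU20", "AU25", "AU6"}))  # -> laugh
```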
S203: obtain all the expression tags associated with the micro-expression type, and obtain the characteristic action unit(s) associated with each expression tag.
That is, expression tags associated with each micro-expression type are stored in the database in advance, and each micro-expression type corresponds to multiple expression tags. For example, if the micro-expression type is "laugh", the associated expression tags may include: big laugh, smile, smirk, wry smile, giggle, and so on. Understandably, each expression tag associated with a micro-expression type has at least one corresponding characteristic action unit.
S204: match all the action unit types extracted from the face image against the characteristic action units associated with each expression tag; when all the action unit types extracted from the face image contain all the characteristic action units associated with a given expression tag, record that expression tag as the expression tag of the face image.
In this embodiment, only when all the action unit types extracted from the face image in step S201 contain all the characteristic action units associated with an expression tag (that is, the extracted action unit types may also contain other action units besides the characteristic action units of that tag) is that expression tag regarded as the expression tag of the face image. In the above embodiment, the micro-expression type is confirmed first according to the action unit types extracted from the face image (the number of micro-expression types is far smaller than the number of expression tags), and only then are the extracted action unit types matched against the characteristic action units of the expression tags associated with that micro-expression type. In this way, the extracted action unit types do not need to be compared with all expression tags, but only with the characteristic action units of the expression tags corresponding to a few micro-expression types. When the number of expression tags is huge, this greatly reduces the amount of computation and lightens the server load.
Understandably, in another embodiment, after all the action unit types of the facial micro-expression are extracted from the face image in step S201, all expression tags and the characteristic action unit(s) associated with each expression tag may be obtained directly; then, in step S204, all the action unit types extracted from the face image are matched against the characteristic action units associated with each expression tag, and when the extracted action unit types contain all the characteristic action units associated with a given expression tag, that expression tag is recorded as the expression tag of the face image. In this embodiment, the micro-expression type does not need to be confirmed first: the extracted action unit types are matched directly against the characteristic action units of the expression tags, which simplifies the comparison. When the number of expression tags is relatively small, this embodiment can be chosen preferentially.
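In the same illustrative spirit, steps S203-S204 reduce to comparing the detected action units only against the characteristic action units of the expression tags under the inferred micro-expression type; the tag names and their characteristic AU sets below are assumptions, not taken from the patent.

```python
# Illustrative sketch of steps S203-S204: only the tags under the inferred
# micro-expression type are compared, which is the load-reducing two-stage
# matching described above.
LABELS_BY_TYPE = {
    "laugh": {
        "smile": {"AU12"},
        "big laugh": {"AU12", "AU25", "AU26"},
        "smirk": {"AU12", "AU14"},
    },
}

def match_expression_labels(micro_type, detected_aus):
    """Return every expression tag whose characteristic AUs are all present
    in the detected AUs; extra detected AUs are allowed."""
    candidates = LABELS_BY_TYPE.get(micro_type, {})
    return [tag for tag, feature_aus in candidates.items()
            if feature_aus <= detected_aus]

if __name__ == "__main__":
    print(match_expression_labels("laugh", {"AU12", "AU14", "AU6"}))
    # -> ['smile', 'smirk']
```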
S30: match an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determine the position of the five-sense-organ region of the matched emoticon package picture; each emoticon package picture in the preset emoticon package library has at least one five-sense-organ region, and each emoticon package picture is associated with at least one expression tag.
That is, in this embodiment, the emoticon package pictures in the preset emoticon package library associated with the expression tag obtained in step S20 are retrieved first (one emoticon package picture may correspond to one or more expression tags). More than one emoticon package picture corresponding to the tag may be retrieved, in which case one of them needs to be selected according to requirements, and the selected picture is recorded as the emoticon package picture matched from the preset emoticon package library. After the single emoticon package picture matching the expression tag of the face image is selected from the preset emoticon package library, the position of the five-sense-organ region in that picture is determined, the facial features extracted from the face image are overlaid onto that region so as to replace the image content at the corresponding position in the emoticon package picture, and a new personalized emoticon package is generated.
In one embodiment, as shown in Fig. 4, step S30, i.e. matching an emoticon package picture from the preset emoticon package library according to the expression tag of the face image and determining the position of the five-sense-organ region of the matched picture, includes the following steps.
S301: obtain the face contour extracted from the face image. In this embodiment, the face contour is the edge contour of the face in the face image.
S302: select, from the preset emoticon package library, all emoticon package pictures whose expression tag is the same as that of the face image.
S303: determine the position of the five-sense-organ region of each selected emoticon package picture and the contour of that position. The position of the five-sense-organ region refers to where the five sense organs of every object with facial features in an emoticon package picture (for example a person, an animal or a cartoon character) are located, and the position contour refers to the contour of the corresponding position (for example the face contour of a human face).
S304: obtain the similarity between the face contour and the position contour. The similarity can be measured by comparing similar parameters such as the areas the two contours occupy and the variation in curvature of their contour lines. Different weights can be assigned to the individual similarity parameters; after each parameter is normalized and multiplied by its corresponding weight, the sum of these products is used as the similarity criterion: the larger the sum, the higher the similarity.
S305: record the emoticon package picture corresponding to the position contour with the highest similarity as the emoticon package picture that uniquely matches the face image, and obtain the position of the five-sense-organ region of that uniquely matching picture.
That is, in this embodiment, from all the emoticon package pictures in the preset emoticon package library that correspond to the expression tag of the face image, the picture whose facial contour is most similar to the face contour of the face image is selected and recorded as the emoticon package picture matched from the preset emoticon package library. In this way, the facial features extracted from the face image can later be fitted over the five-sense-organ region to the greatest possible extent, replacing the image corresponding to that region in the emoticon package picture and generating a new personalized emoticon package.
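As a minimal sketch of steps S304-S305 (not taken from the patent), the weighted similarity over normalized contour parameters and the selection of the best-matching template might look as follows; the parameter names, the normalization and the example values are assumptions.

```python
# Illustrative sketch of the weighted similarity of step S304; the parameter
# names ("area", "curvature") and example values are assumptions.
def contour_similarity(face_params, template_params, weights):
    """Weighted sum over normalized contour parameters: identical values
    contribute 1.0 per parameter, very different values approach 0.0."""
    score = 0.0
    for name, weight in weights.items():
        a, b = face_params[name], template_params[name]
        normalized = 1.0 - abs(a - b) / max(abs(a), abs(b), 1e-9)
        score += weight * normalized
    return score

face = {"area": 12000.0, "curvature": 0.42}
templates = {
    "kitten_face": {"area": 15000.0, "curvature": 0.40},
    "puppy_face": {"area": 30000.0, "curvature": 0.10},
}
weights = {"area": 0.6, "curvature": 0.4}
# Step S305: keep the template whose position contour scores highest.
best = max(templates, key=lambda k: contour_similarity(face, templates[k], weights))
print(best)  # -> kitten_face
```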
In another embodiment, matching an emoticon package picture from the preset emoticon package library according to the expression tag of the face image and determining the position of the five-sense-organ region of the matched picture includes the following steps.
Select, from the preset emoticon package library, all emoticon package pictures whose expression tag is the same as that of the face image. The emoticon package pictures in the preset emoticon package library may be legally obtained from a third party that specializes in producing emoticon packages, and every emoticon package picture in the library must have a five-sense-organ region (i.e. a face, which may be partial or complete).
Determine, according to a preset screening rule, the emoticon package picture that uniquely matches the face image from all the selected emoticon package pictures. Understandably, the screening rule may be random selection, or selection according to usage frequency: for example, the emoticon package picture that the user personally uses most often can be selected, i.e. the more frequently the user uses a picture, the greater its probability of being selected. Similarly, the screening rule may be to first count, across all users, the total number of times each emoticon package picture corresponding to the expression tag has been used, and to select the picture with the highest total count. Further, the screening rule may convert the total usage count into a popularity value through a preset conversion rule (a conversion rule contains the mapping between different ranges of total usage counts and different popularity values, and one total usage count maps to exactly one popularity value); likewise, the higher the popularity, the higher the probability of being selected.
Record the emoticon package picture determined according to the screening rule as the emoticon package picture that uniquely matches the face image, and extract the position of its five-sense-organ region from that picture.
In this embodiment, the emoticon package picture that uniquely matches the face image is determined according to the screening rule, and the screening rule can be based on, for example, the number of times users have used each picture. In this way, the template for the personalized emoticon package to be generated can be chosen according to user preference, so the generated personalized emoticon package better matches the user's habits and provides a better user experience.
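A minimal sketch of the frequency-based screening rule, under the assumption of a simple band-based conversion from total usage counts to popularity values; the bands and picture identifiers are illustrative only.

```python
# Illustrative sketch of the frequency-based screening rule; the popularity
# bands and the picture identifiers are assumptions.
POPULARITY_BANDS = [(0, 100, 1), (100, 1000, 2), (1000, float("inf"), 3)]

def popularity(total_uses):
    """Map a total usage count to a popularity value via the preset bands
    (one count maps to exactly one popularity value)."""
    for low, high, band in POPULARITY_BANDS:
        if low <= total_uses < high:
            return band
    return 0

def pick_template(candidates):
    """candidates maps emoticon picture id -> total usage count across all
    users; the most popular picture (ties broken by raw count) is chosen."""
    return max(candidates,
               key=lambda pic: (popularity(candidates[pic]), candidates[pic]))

print(pick_template({"cat_01": 87, "dog_02": 1520, "panda_03": 640}))  # dog_02
```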
S40: extract the facial features from the face image, and overlay the facial features onto the position of the five-sense-organ region of the emoticon package picture matched from the preset emoticon package library, to generate a personalized emoticon package.
In this embodiment, after the single emoticon package picture matching the expression tag of the face image has been selected from the preset emoticon package library, the position of its five-sense-organ region is determined, the facial features extracted from the face image replace the original image content within the position contour of that region (the contour of the corresponding position of the five sense organs), and the emoticon package picture carrying the facial features of the face image is composited into a new personalized emoticon package. For example, if the subject of the emoticon package picture is a cute kitten, the cat's face is replaced with the facial features of the face image, and the generated personalized emoticon package is a cute kitten bearing the facial features of the face image.
In one embodiment, as shown in Fig. 5, in step S40, extracting the facial features from the face image and overlaying them onto the position of the five-sense-organ region of the matched emoticon package picture includes the following steps.
S401: obtain the position contour of the five-sense-organ region, the overall placement angle of that region, and the contour area of that region of the emoticon package picture matched from the preset emoticon package library. The position contour of the five-sense-organ region is the edge contour of the area occupied by the five sense organs in the emoticon package picture. The overall placement angle of the region refers to its tilt angle and whether it is upright or inverted; this angle can be determined with one or more of the sense organs as reference, for example the tilt angle can be determined from the angle between the horizontal and the line connecting the two opposite corners of an eye, and whether the region is upright or inverted can be determined by whether the nose or mouth lies below the eyes (or, equally, by whether the mouth lies below the nose): if the nose or mouth is below the eyes the region is upright, otherwise it is inverted. The contour area of the five-sense-organ region is the total area enclosed by the position contour.
S402: extract all the facial features located within the face contour of the face image, and determine the positional relationships between the center points of the facial features as well as the straight-line distances between those center points. The facial features include, but are not limited to, the ears, brows, eyes, nose and mouth. The positional relationship between the center points of the facial features refers to the distances and relative directions between them.
S403: create a new canvas whose canvas contour coincides with the position contour of the five-sense-organ region, and preprocess the facial features according to a preset image-processing method. That is, the canvas contour of the canvas can completely overlap the position contour of the five-sense-organ region. The preset image-processing method includes, but is not limited to, transparency adjustment and color adjustment of the facial features, so that the generated personalized emoticon package looks more natural and attractive.
S404: while keeping the positional relationships between the center points of the facial features unchanged, place all the preprocessed facial features into the canvas contour of the canvas according to the overall placement angle. That is, when all the facial features are placed into the canvas contour, their relative positional relationships must remain unchanged so that the expression of the face image is preserved (if the relative positions of the facial features changed, the expression formed by the changed features might differ from the expression of the original face image). Understandably, the facial features can be aligned by using the center of all the facial features and the center of the canvas contour as registration points; and if the orientation of the facial features as a whole does not match the overall placement angle, it must first be adjusted to match that angle before the features are placed into the canvas contour.
S405: adjust the straight-line distances between the center points of the facial features by the same ratio, such that the ratio between the area of the figure enclosed by the outermost facial features after this uniform adjustment and the contour area of the five-sense-organ region lies within a preset ratio range. That is, the sizes of the facial features are adjusted as a whole by uniformly scaling the straight-line distances between their center points by the same ratio. When the ratio between the area enclosed by the outermost adjusted facial features and the contour area of the five-sense-organ region lies within the preset ratio range (which can be set as required), the facial features will be reasonably proportioned on the region; otherwise the features might be too large or too small for the canvas contour and look inconsistent. In other words, the uniform ratio has to be selected so that the condition "the ratio between the area of the figure enclosed by the outermost adjusted facial features and the contour area of the five-sense-organ region lies within the preset ratio range" holds (candidate ratios can be arranged by priority in advance and stored in the database, and the server automatically screens the database for a ratio that satisfies the condition). If more than one ratio would place the value within the preset range, one can be picked at random or according to the priority order.
S406: overlay the canvas containing the facial features onto the position contour of the five-sense-organ region of the emoticon package picture matched from the preset emoticon package library. That is, because the canvas contour can completely overlap the position contour of the five-sense-organ region, in this step the canvas simply replaces the original image content within that position contour.
S407: perform image-composition processing on the emoticon package picture covered with the canvas to generate the personalized emoticon package. The image-composition processing includes, but is not limited to, merging the emoticon package picture covered with the facial features into a single picture and applying unified exposure and color adjustment so that the result looks more natural.
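As a minimal sketch of the scaling condition in step S405 (assuming a stored priority-ordered list of candidate ratios and arbitrary example areas), the uniform ratio could be screened as follows.

```python
# Illustrative sketch of step S405; the preset ratio range, the areas and the
# candidate ratios are assumptions. Scaling all center-point distances by s
# scales the enclosed figure area by s**2.
PRESET_RATIO_RANGE = (0.55, 0.75)

def choose_scale(feature_hull_area, target_contour_area,
                 candidate_scales=(1.5, 1.25, 1.0, 0.8, 0.6, 0.4)):
    """Return the first candidate ratio (ordered by priority, as described
    above) for which the scaled feature area divided by the contour area of
    the five-sense-organ region falls inside the preset range."""
    low, high = PRESET_RATIO_RANGE
    for s in candidate_scales:
        ratio = (feature_hull_area * s * s) / target_contour_area
        if low <= ratio <= high:
            return s
    return None  # no stored ratio fits; the caller would have to fall back

print(choose_scale(feature_hull_area=9000.0, target_contour_area=20000.0))  # 1.25
```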
In one embodiment, as shown in Fig. 6, generating the personalized emoticon package in step S407 includes the following steps.
S4071: receive a text-adding instruction, and obtain the emoticon text entered by the user and the text box number selected by the user. The text-adding instruction means that, after the image composition in step S407, if the user also wants to enter emoticon text on the personalized emoticon package, the user can trigger a preset button by clicking, sliding and so on, whereupon the text-adding instruction is sent to the server. The emoticon text is the text the user wants to place on the personalized emoticon package. The text box number is the unique identifier of a text box that can be added to the personalized emoticon package, and each text box number corresponds to one text box style.
S4072: obtain the text box size and default text format associated with the text box number. That is, each text box number has a text box size into which emoticon text can be filled, and each text box corresponds to a default text format; if the user does not modify the default text format, the emoticon text is filled into the text box in that default format.
S4073: obtain the number of characters in the emoticon text, and adjust the character size in the default text format according to the number of characters and the text box size. That is, the character size and other settings can be adjusted automatically based on a judgment of the number of characters (i.e. the character length) of the emoticon text; understandably, items of the default text format other than the character size can also be adjusted as required.
S4074: generate a text box corresponding to the text box number at a preset position in the emoticon package picture or at a position selected by the user, and fill the emoticon text into the text box in the adjusted default text format. That is, after the default text format is adjusted, the emoticon text is filled into the text box in that adjusted format.
S4075: assemble the emoticon package picture and the text box to generate the personalized emoticon package. The assembly means merging the text box and the image-composited emoticon package picture into the same personalized emoticon package.
That is, the above embodiment also supports user-defined emoticon text: the character size and other settings are adjusted automatically based on a judgment of the number of characters (i.e. the character length) of the emoticon text, and the text box filled with the emoticon characters is automatically assembled with the emoticon package picture into a personalized emoticon package. Understandably, prop effects such as hearts, hats and stars can likewise be added to the personalized emoticon package.
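A minimal sketch of the character-size adjustment of steps S4072-S4073, assuming a rough square-glyph layout model; the box geometry, default size and shrink step are assumptions, and an actual implementation would render the text with an image library.

```python
# Illustrative sketch of the character-size adjustment in steps S4072-S4073.
def fit_font_size(text, box_width, box_height, default_size=32, min_size=12):
    """Shrink the default character size until the emoticon text fits the
    text box associated with the chosen text box number."""
    size = default_size
    while size > min_size:
        chars_per_row = max(1, box_width // size)
        rows_needed = -(-len(text) // chars_per_row)  # ceiling division
        if rows_needed * size <= box_height:
            return size
        size -= 2
    return min_size

print(fit_font_size("在吗？帮我砍一刀", box_width=200, box_height=60))  # -> 30
```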
In one embodiment, as shown in Fig. 7, an apparatus for automatically generating emoticon packages is provided, and the apparatus corresponds one-to-one to the method for automatically generating emoticon packages in the above embodiments. The apparatus includes:
an acquisition module 11, configured to acquire a face image;
an extraction module 12, configured to extract a facial micro-expression from the face image, and to obtain an expression tag of the face image according to the facial micro-expression;
a matching module 13, configured to match an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and to determine the position of the five-sense-organ region of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one five-sense-organ region, and each emoticon package picture is associated with at least one expression tag;
an overlay module 14, configured to extract facial features from the face image, and to overlay them onto the position of the five-sense-organ region of the emoticon package picture matched from the preset emoticon package library, to generate a personalized emoticon package.
In one embodiment, as shown in Fig. 8, the extraction module 12 includes:
an extraction unit 121, configured to extract all action unit types of the facial micro-expression from the face image;
a confirmation unit 122, configured to determine the micro-expression type of the face image according to all the action unit types extracted from the face image;
an acquisition unit 123, configured to obtain all the expression tags associated with the micro-expression type, and to obtain the characteristic action unit(s) associated with each expression tag;
a matching unit 124, configured to match all the action unit types extracted from the face image against the characteristic action units associated with each expression tag, and, when all the action unit types extracted from the face image contain all the characteristic action units associated with a given expression tag, to record that expression tag as the expression tag of the face image.
For specific limitations on the apparatus for automatically generating emoticon packages, reference may be made to the limitations on the method above, which are not repeated here. Each module in the apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in or independent of a processor of a computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 9. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions stored in the non-volatile storage medium. When the computer-readable instructions are executed by the processor, the method for automatically generating emoticon packages is implemented.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, and the processor implements at least the following steps when executing the computer-readable instructions:
acquiring a face image;
extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the position of the five-sense-organ region of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one five-sense-organ region, and each emoticon package picture is associated with at least one expression tag;
extracting facial features from the face image, and overlaying the facial features onto the position of the five-sense-organ region of the emoticon package picture matched from the preset emoticon package library, to generate a personalized emoticon package.
In one embodiment, a computer-readable storage medium is provided. The computer-readable storage medium is a volatile storage medium or a non-volatile storage medium on which computer-readable instructions are stored, and when the computer-readable instructions are executed by a processor, at least the following steps are implemented:
acquiring a face image;
extracting a facial micro-expression from the face image, and obtaining an expression tag of the face image according to the facial micro-expression;
matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the position of the five-sense-organ region of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one five-sense-organ region, and each emoticon package picture is associated with at least one expression tag;
extracting facial features from the face image, and overlaying the facial features onto the position of the five-sense-organ region of the emoticon package picture matched from the preset emoticon package library, to generate a personalized emoticon package.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by computer-readable instructions instructing the relevant hardware. The computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

Claims (20)

  1. 一种表情包自动生成方法,其中,包括:A method for automatically generating emoticons, which includes:
    获取人脸图像;Obtain face images;
    自所述人脸图像中提取人脸微表情,并根据所述人脸微表情获取所述人脸图像的表情标签;Extracting facial micro-expressions from the facial image, and obtaining the facial expression tag of the facial image according to the facial micro-expressions;
    根据所述人脸图像的表情标签自预设表情包库中匹配表情包图片,并确定匹配到的所述表情包图片的五官部位的所处位置;其中,所述预设表情包库中的所述表情包图片均具有至少一个五官部位,且每一个所述表情包图片均与至少一个所述表情标签关联;According to the facial expression tag of the face image, the emoticon package picture is matched from the preset emoticon package library, and the location of the facial features of the matched emoticon package picture is determined; wherein, Each of the emoticon package pictures has at least one facial features, and each of the emoticon package pictures is associated with at least one emoticon label;
    提取所述人脸图像中的面部特征,并将所述面部特征覆盖到自所述预设表情包库中匹配到的所述表情包图片的所述五官部位的所处位置,生成个性化表情包。Extract the facial features in the face image, and overlay the facial features on the location of the facial features of the emoticon package picture matched from the preset emoticon package library to generate a personalized emoticon package.
  2. 如权利要求1所述的表情包自动生成方法,其中,所述自所述人脸图像中提取人脸微表情,并根据所述人脸微表情获取所述人脸图像的表情标签,包括:The method for automatically generating an emoticon package according to claim 1, wherein the extracting the facial micro-expression from the facial image and obtaining the facial expression tag of the facial image according to the facial micro-expression comprises:
    自所述人脸图像中提取人脸微表情的所有动作单元类型;Extract all the action unit types of the face micro-expression from the face image;
    根据自所述人脸图像中提取所有所述动作单元类型确认所述人脸图像的微表情类型;Confirm the micro-expression type of the face image according to the types of all the action units extracted from the face image;
    获取所述微表情类型关联的所有所述表情标签,同时获取与每一个所述表情标签关联的特征动作单元;Acquiring all the emoticon tags associated with the micro-expression type, and simultaneously acquiring the characteristic action unit associated with each emoticon tag;
    将自所述人脸图像中提取所有所述动作单元类型与每一个所述表情标签关联的所述特征动作单元进行匹配,在自所述人脸图像中提取所有所述动作单元类型中包含一个所述表情标签关联的所有所述特征动作单元时,将所述表情标签记录为所述人脸图像的表情标签。All the action unit types extracted from the face image are matched with the characteristic action units associated with each expression tag, and one is included in all the action unit types extracted from the face image When all the characteristic action units associated with the expression tag are associated, the expression tag is recorded as the expression tag of the face image.
  3. 如权利要求1所述表情包自动生成的方法,其中,所述根据所述人脸图像的表情标签自预设表情包库中匹配表情包图片,并确定匹配到的所述表情包图片的五官部位的所处位置,包括:The method for automatically generating an emoticon package according to claim 1, wherein the emoticon package picture is matched from a preset emoticon package library according to the emoticon tag of the face image, and the facial features of the matched emoticon package picture are determined The location of the site, including:
    获取自所述人脸图像中提取的人脸脸部轮廓;Acquiring a face contour extracted from the face image;
    自预设表情包库中选取所述表情标签与所述人脸图像相同的所有表情包图片;Select all emoticon package pictures with the same emoticon tag and the face image from a preset emoticon package library;
    确定选取的各所述表情包图片的五官部位的所处位置以及所述五官部位的所处位置轮廓;Determining the location of the facial features of each selected emoticon package picture and the contour of the location of the facial features;
    获取所述人脸脸部轮廓与所述所处位置轮廓之间的相似度;Acquiring the similarity between the contour of the human face and the contour of the location;
    将所述相似度最高的所述所处位置轮廓对应的所述表情包图片,记录为与所述人脸图像唯一匹配的所述表情包图片,同时获取与所述人脸图像唯一匹配的所述表情包图片的五官部位的所处位置。Record the emoticon package picture corresponding to the location contour with the highest similarity as the emoticon package picture that uniquely matches the face image, and at the same time obtain all the emoticon package pictures that uniquely match the face image State the location of the facial features of the emoticon picture.
  4. 如权利要求1所述的表情包自动生成方法,其中,所述根据所述人脸图像的表情标签自预设表情包库中匹配表情包图片,并确定匹配到的所述表情包图片的五官部位的所处位置,包括:The method for automatically generating an emoticon package according to claim 1, wherein the emoticon package picture is matched from a preset emoticon package library according to the emoticon tag of the face image, and the facial features of the matched emoticon package picture are determined The location of the site, including:
    自预设表情包库中选取所述表情标签与所述人脸图像相同的所有表情包图片;Select all emoticon package pictures with the same emoticon tag and the face image from a preset emoticon package library;
    根据预设的筛选规则,自选取的所有所述表情包图片中确定与所述人脸图像唯一匹配的所述表情包图片;According to preset screening rules, determine the emoticon package picture uniquely matching the face image from all the selected emoticon package pictures;
    将根据所述筛选规则确定的所述表情包图片,记录为与所述人脸图像唯一匹配的所述表情包图片,同时自与所述人脸图像唯一匹配的所述表情包图片中提取其五官部位的所处位置。The emoticon package picture determined according to the screening rule is recorded as the emoticon package picture that uniquely matches the face image, and at the same time it is extracted from the emoticon package picture that uniquely matches the face image The location of the facial features.
  5. The method for automatically generating an emoticon package according to claim 1, wherein extracting the facial features in the face image, and overlaying the facial features onto the locations of the facial feature parts of the emoticon package picture matched from the preset emoticon package library, comprises:
    acquiring the location contour of the facial feature parts, the overall placement angle of the facial feature parts, and the contour area of the facial feature parts of the emoticon package picture matched from the preset emoticon package library;
    extracting all facial features located within the facial contour in the face image, and determining the positional relationships between the center points of the facial features as well as the straight-line distances between those center points;
    creating a new canvas whose outline coincides with the location contour of the facial feature parts, and preprocessing the facial features according to a preset image processing method;
    placing all the preprocessed facial features in the canvas outline at the overall placement angle, while keeping the positional relationships between the center points of the facial features unchanged;
    scaling the straight-line distances between the center points of the facial features by a common ratio, such that the ratio between the area of the figure enclosed by the outermost facial features after scaling and the contour area of the facial feature parts falls within a preset ratio range;
    overlaying the canvas containing the facial features onto the location contour of the facial feature parts of the emoticon package picture matched from the preset emoticon package library;
    performing image synthesis processing on the emoticon package picture covered with the canvas to generate the personalized emoticon package.
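A simplified sketch of the canvas, placement and scaling steps of claim 5. It assumes the facial features arrive as already cropped and preprocessed patches with center points, that the picture's feature region is available as a contour, and that a target area ratio of 0.6 lies inside the claim's preset ratio range; none of these values come from the disclosure, and no bounds checking or blending is attempted.

```python
import cv2
import numpy as np

def place_features(sticker, region_contour, features, angle_deg, target_ratio=0.6):
    """sticker: HxWx3 uint8 matched emoticon picture.
    region_contour: point array outlining its facial-feature region.
    features: list of (patch, (x, y)) pairs, cropped and preprocessed."""
    canvas = np.zeros_like(sticker)
    centers = np.float32([c for _, c in features])
    centroid = centers.mean(axis=0)

    # Rotate all centers about their centroid by the overall placement angle,
    # which keeps their relative positions unchanged.
    rot = cv2.getRotationMatrix2D(tuple(map(float, centroid)), angle_deg, 1.0)
    centers = cv2.transform(centers[None], rot)[0]

    # One common scale so the hull of the centers covers target_ratio of the
    # region's contour area (assumed to fall in the "preset ratio range").
    hull_area = cv2.contourArea(cv2.convexHull(centers))
    region_area = cv2.contourArea(region_contour)
    scale = np.sqrt(target_ratio * region_area / max(hull_area, 1e-6))
    centers = centroid + (centers - centroid) * scale

    # Move the scaled constellation onto the region's centroid and paste patches.
    m = cv2.moments(region_contour)
    region_center = np.float32([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    centers = centers + (region_center - centroid)
    for (patch, _), (cx, cy) in zip(features, centers):
        h, w = patch.shape[:2]
        x0, y0 = int(cx - w / 2), int(cy - h / 2)
        canvas[y0:y0 + h, x0:x0 + w] = patch    # sketch: no bounds checking

    # Claim 5's final image synthesis: composite the canvas over the picture.
    mask = canvas.sum(axis=2, keepdims=True) > 0
    return np.where(mask, canvas, sticker)
```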
  6. The method for automatically generating an emoticon package according to claim 1, wherein generating the personalized emoticon package comprises:
    receiving a text adding instruction, and acquiring the emoticon text entered by the user and the text box number selected by the user;
    acquiring the text box size and the default text format associated with the text box number;
    acquiring the number of characters in the emoticon text, and adjusting the character size in the default text format according to the number of characters and the text box size;
    generating a text box corresponding to the text box number at a preset position in the emoticon package picture or at a position selected by the user, and filling the emoticon text into the text box according to the adjusted default text format;
    assembling the emoticon package picture and the text box to generate the personalized emoticon package.
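A sketch of the captioning flow in claim 6 using Pillow. The text-box table, the font file name and the shrink-to-fit rule are assumptions; the claim only requires that the character size be adjusted from the character count and the numbered box's size before the text is filled in.

```python
from PIL import Image, ImageDraw, ImageFont

# Hypothetical text-box table: numbers, positions, sizes and the default font
# size are stand-ins, not values defined anywhere in this disclosure.
TEXT_BOXES = {1: {"xy": (20, 260), "size": (280, 60), "font_px": 36}}

def add_caption(sticker, text, box_no):
    box = TEXT_BOXES[box_no]
    width, _height = box["size"]
    # Shrink the default character size when the text would overflow the box
    # width (one simple reading of the character-count adjustment in claim 6).
    font_px = min(box["font_px"], max(10, width // max(len(text), 1)))
    font = ImageFont.truetype("NotoSansCJK-Regular.ttc", font_px)  # assumed font file
    draw = ImageDraw.Draw(sticker)
    draw.text(box["xy"], text, font=font, fill="black")
    return sticker

# Example usage with an assumed file name:
# meme = add_caption(Image.open("matched_sticker.png"), "hello Monday", 1)
```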
  7. An apparatus for automatically generating an emoticon package, comprising:
    an acquisition module, configured to acquire a face image;
    an extraction module, configured to extract a facial micro-expression from the face image, and acquire the expression tag of the face image according to the facial micro-expression;
    a matching module, configured to match an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determine the locations of the facial feature parts of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one facial feature part, and each emoticon package picture is associated with at least one expression tag;
    an overlay module, configured to extract the facial features in the face image, and overlay the facial features onto the locations of the facial feature parts of the emoticon package picture matched from the preset emoticon package library, so as to generate a personalized emoticon package.
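A purely structural sketch of the apparatus of claim 7, one method per module; the class and method names are our own, and the bodies would be filled with logic along the lines of the sketches shown after claims 2 through 6.

```python
# Structural sketch only: one method per module named in claim 7.
class EmoticonPackageGenerator:
    def __init__(self, package_library):
        self.package_library = package_library   # the preset emoticon package library

    def acquire_face_image(self, source):            # acquisition module
        """Load a face image from a camera frame, an upload, etc."""
        raise NotImplementedError

    def extract_expression_tag(self, face_image):    # extraction module
        """Detect action units, infer the micro-expression type, return the tag."""
        raise NotImplementedError

    def match_package_picture(self, tag):            # matching module
        """Pick a library picture with the same tag and locate its feature parts."""
        raise NotImplementedError

    def overlay_features(self, face_image, picture): # overlay module
        """Paste the user's facial features onto the picture's feature parts."""
        raise NotImplementedError
```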
  8. The apparatus for automatically generating an emoticon package according to claim 7, wherein the extraction module comprises:
    an extraction unit, configured to extract all action unit types of the facial micro-expression from the face image;
    a determination unit, configured to determine the micro-expression type of the face image according to all the action unit types extracted from the face image;
    an acquisition unit, configured to acquire all the expression tags associated with the micro-expression type, and acquire the characteristic action units associated with each of the expression tags;
    a matching unit, configured to match all the action unit types extracted from the face image against the characteristic action units associated with each expression tag, and, when the action unit types extracted from the face image include all the characteristic action units associated with one expression tag, record that expression tag as the expression tag of the face image.
  9. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements a method for automatically generating an emoticon package, the method comprising:
    acquiring a face image;
    extracting a facial micro-expression from the face image, and acquiring the expression tag of the face image according to the facial micro-expression;
    matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the locations of the facial feature parts of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one facial feature part, and each emoticon package picture is associated with at least one expression tag;
    extracting the facial features in the face image, and overlaying the facial features onto the locations of the facial feature parts of the emoticon package picture matched from the preset emoticon package library, so as to generate a personalized emoticon package.
  10. The computer device according to claim 9, wherein extracting the facial micro-expression from the face image and acquiring the expression tag of the face image according to the facial micro-expression comprises:
    extracting all action unit types of the facial micro-expression from the face image;
    determining the micro-expression type of the face image according to all the action unit types extracted from the face image;
    acquiring all the expression tags associated with the micro-expression type, and acquiring the characteristic action units associated with each of the expression tags;
    matching all the action unit types extracted from the face image against the characteristic action units associated with each expression tag, and, when the action unit types extracted from the face image include all the characteristic action units associated with one expression tag, recording that expression tag as the expression tag of the face image.
  11. The computer device according to claim 9, wherein matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the locations of the facial feature parts of the matched emoticon package picture, comprises:
    acquiring the facial contour extracted from the face image;
    selecting, from the preset emoticon package library, all emoticon package pictures whose expression tag is the same as that of the face image;
    determining the locations of the facial feature parts of each selected emoticon package picture and the location contours of those facial feature parts;
    acquiring the similarity between the facial contour and each location contour;
    recording the emoticon package picture corresponding to the location contour with the highest similarity as the emoticon package picture uniquely matching the face image, and acquiring the locations of the facial feature parts of that uniquely matching emoticon package picture.
  12. The computer device according to claim 9, wherein matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the locations of the facial feature parts of the matched emoticon package picture, comprises:
    selecting, from the preset emoticon package library, all emoticon package pictures whose expression tag is the same as that of the face image;
    determining, according to preset screening rules, the emoticon package picture uniquely matching the face image from all the selected emoticon package pictures;
    recording the emoticon package picture determined according to the screening rules as the emoticon package picture uniquely matching the face image, and extracting the locations of its facial feature parts from that uniquely matching emoticon package picture.
  13. The computer device according to claim 9, wherein extracting the facial features in the face image, and overlaying the facial features onto the locations of the facial feature parts of the emoticon package picture matched from the preset emoticon package library, comprises:
    acquiring the location contour of the facial feature parts, the overall placement angle of the facial feature parts, and the contour area of the facial feature parts of the emoticon package picture matched from the preset emoticon package library;
    extracting all facial features located within the facial contour in the face image, and determining the positional relationships between the center points of the facial features as well as the straight-line distances between those center points;
    creating a new canvas whose outline coincides with the location contour of the facial feature parts, and preprocessing the facial features according to a preset image processing method;
    placing all the preprocessed facial features in the canvas outline at the overall placement angle, while keeping the positional relationships between the center points of the facial features unchanged;
    scaling the straight-line distances between the center points of the facial features by a common ratio, such that the ratio between the area of the figure enclosed by the outermost facial features after scaling and the contour area of the facial feature parts falls within a preset ratio range;
    overlaying the canvas containing the facial features onto the location contour of the facial feature parts of the emoticon package picture matched from the preset emoticon package library;
    performing image synthesis processing on the emoticon package picture covered with the canvas to generate the personalized emoticon package.
  14. The computer device according to claim 1, wherein generating the personalized emoticon package comprises:
    receiving a text adding instruction, and acquiring the emoticon text entered by the user and the text box number selected by the user;
    acquiring the text box size and the default text format associated with the text box number;
    acquiring the number of characters in the emoticon text, and adjusting the character size in the default text format according to the number of characters and the text box size;
    generating a text box corresponding to the text box number at a preset position in the emoticon package picture or at a position selected by the user, and filling the emoticon text into the text box according to the adjusted default text format;
    assembling the emoticon package picture and the text box to generate the personalized emoticon package.
  15. A computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement a method for automatically generating an emoticon package, the method comprising:
    acquiring a face image;
    extracting a facial micro-expression from the face image, and acquiring the expression tag of the face image according to the facial micro-expression;
    matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the locations of the facial feature parts of the matched emoticon package picture, wherein each emoticon package picture in the preset emoticon package library has at least one facial feature part, and each emoticon package picture is associated with at least one expression tag;
    extracting the facial features in the face image, and overlaying the facial features onto the locations of the facial feature parts of the emoticon package picture matched from the preset emoticon package library, so as to generate a personalized emoticon package.
  16. The computer-readable storage medium according to claim 15, wherein extracting the facial micro-expression from the face image and acquiring the expression tag of the face image according to the facial micro-expression comprises:
    extracting all action unit types of the facial micro-expression from the face image;
    determining the micro-expression type of the face image according to all the action unit types extracted from the face image;
    acquiring all the expression tags associated with the micro-expression type, and acquiring the characteristic action units associated with each of the expression tags;
    matching all the action unit types extracted from the face image against the characteristic action units associated with each expression tag, and, when the action unit types extracted from the face image include all the characteristic action units associated with one expression tag, recording that expression tag as the expression tag of the face image.
  17. The computer-readable storage medium according to claim 15, wherein matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the locations of the facial feature parts of the matched emoticon package picture, comprises:
    acquiring the facial contour extracted from the face image;
    selecting, from the preset emoticon package library, all emoticon package pictures whose expression tag is the same as that of the face image;
    determining the locations of the facial feature parts of each selected emoticon package picture and the location contours of those facial feature parts;
    acquiring the similarity between the facial contour and each location contour;
    recording the emoticon package picture corresponding to the location contour with the highest similarity as the emoticon package picture uniquely matching the face image, and acquiring the locations of the facial feature parts of that uniquely matching emoticon package picture.
  18. The computer-readable storage medium according to claim 15, wherein matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the locations of the facial feature parts of the matched emoticon package picture, comprises:
    selecting, from the preset emoticon package library, all emoticon package pictures whose expression tag is the same as that of the face image;
    determining, according to preset screening rules, the emoticon package picture uniquely matching the face image from all the selected emoticon package pictures;
    recording the emoticon package picture determined according to the screening rules as the emoticon package picture uniquely matching the face image, and extracting the locations of its facial feature parts from that uniquely matching emoticon package picture.
  19. The computer-readable storage medium according to claim 15, wherein extracting the facial features in the face image, and overlaying the facial features onto the locations of the facial feature parts of the emoticon package picture matched from the preset emoticon package library, comprises:
    acquiring the location contour of the facial feature parts, the overall placement angle of the facial feature parts, and the contour area of the facial feature parts of the emoticon package picture matched from the preset emoticon package library;
    extracting all facial features located within the facial contour in the face image, and determining the positional relationships between the center points of the facial features as well as the straight-line distances between those center points;
    creating a new canvas whose outline coincides with the location contour of the facial feature parts, and preprocessing the facial features according to a preset image processing method;
    placing all the preprocessed facial features in the canvas outline at the overall placement angle, while keeping the positional relationships between the center points of the facial features unchanged;
    scaling the straight-line distances between the center points of the facial features by a common ratio, such that the ratio between the area of the figure enclosed by the outermost facial features after scaling and the contour area of the facial feature parts falls within a preset ratio range;
    overlaying the canvas containing the facial features onto the location contour of the facial feature parts of the emoticon package picture matched from the preset emoticon package library;
    performing image synthesis processing on the emoticon package picture covered with the canvas to generate the personalized emoticon package.
  20. The computer-readable storage medium according to claim 15, wherein generating the personalized emoticon package comprises:
    receiving a text adding instruction, and acquiring the emoticon text entered by the user and the text box number selected by the user;
    acquiring the text box size and the default text format associated with the text box number;
    acquiring the number of characters in the emoticon text, and adjusting the character size in the default text format according to the number of characters and the text box size;
    generating a text box corresponding to the text box number at a preset position in the emoticon package picture or at a position selected by the user, and filling the emoticon text into the text box according to the adjusted default text format;
    assembling the emoticon package picture and the text box to generate the personalized emoticon package.
PCT/CN2020/085573 2019-07-05 2020-04-20 Automatic meme generation method and apparatus, computer device and storage medium WO2021004114A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910602401.7 2019-07-05
CN201910602401.7A CN110458916A (en) 2019-07-05 2019-07-05 Expression packet automatic generation method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2021004114A1 (en)

Family

ID=68482133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/085573 WO2021004114A1 (en) 2019-07-05 2020-04-20 Automatic meme generation method and apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN110458916A (en)
WO (1) WO2021004114A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458916A (en) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 Expression packet automatic generation method, device, computer equipment and storage medium
CN110889379B (en) * 2019-11-29 2024-02-20 深圳先进技术研究院 Expression package generation method and device and terminal equipment
CN111145283A (en) * 2019-12-13 2020-05-12 北京智慧章鱼科技有限公司 Expression personalized generation method and device for input method
CN111046814A (en) * 2019-12-18 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111368127B (en) * 2020-03-06 2023-03-24 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112102157A (en) * 2020-09-09 2020-12-18 咪咕文化科技有限公司 Video face changing method, electronic device and computer readable storage medium
CN112270733A (en) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 AR expression package generation method and device, electronic equipment and storage medium
CN112214632B (en) * 2020-11-03 2023-11-17 虎博网络技术(上海)有限公司 Text retrieval method and device and electronic equipment
CN114816599B (en) * 2021-01-22 2024-02-27 北京字跳网络技术有限公司 Image display method, device, equipment and medium
CN113727024B (en) * 2021-08-30 2023-07-25 北京达佳互联信息技术有限公司 Method, device, electronic equipment and storage medium for generating multimedia information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573527B (en) * 2018-04-18 2020-02-18 腾讯科技(深圳)有限公司 Expression picture generation method and equipment and storage medium thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063683A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on face identification
US20180024726A1 (en) * 2016-07-21 2018-01-25 Cives Consulting AS Personified Emoji
CN107219917A (en) * 2017-04-28 2017-09-29 北京百度网讯科技有限公司 Emoticon generation method and device, computer equipment and computer-readable recording medium
CN108197206A (en) * 2017-12-28 2018-06-22 努比亚技术有限公司 Expression packet generation method, mobile terminal and computer readable storage medium
CN110458916A (en) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 Expression packet automatic generation method, device, computer equipment and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905791A (en) * 2021-02-20 2021-06-04 北京小米松果电子有限公司 Expression package generation method and device and storage medium
US11922725B2 (en) 2021-02-20 2024-03-05 Beijing Xiaomi Pinecone Electronics Co., Ltd. Method and device for generating emoticon, and storage medium
CN113177994A (en) * 2021-03-25 2021-07-27 云南大学 Network social emoticon synthesis method based on image-text semantics, electronic equipment and computer readable storage medium
CN113177994B (en) * 2021-03-25 2022-09-06 云南大学 Network social emoticon synthesis method based on image-text semantics, electronic equipment and computer readable storage medium
CN113485596A (en) * 2021-07-07 2021-10-08 游艺星际(北京)科技有限公司 Virtual model processing method and device, electronic equipment and storage medium
CN113485596B (en) * 2021-07-07 2023-12-22 游艺星际(北京)科技有限公司 Virtual model processing method and device, electronic equipment and storage medium
CN117150063A (en) * 2023-10-26 2023-12-01 深圳慢云智能科技有限公司 Image generation method and system based on scene recognition
CN117150063B (en) * 2023-10-26 2024-02-06 深圳慢云智能科技有限公司 Image generation method and system based on scene recognition

Also Published As

Publication number Publication date
CN110458916A (en) 2019-11-15

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20836230; Country of ref document: EP; Kind code of ref document: A1)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2022))
122  Ep: pct application non-entry in european phase (Ref document number: 20836230; Country of ref document: EP; Kind code of ref document: A1)