WO2021004114A1 - Method and apparatus for automatically generating emoticon packs, computer device and storage medium - Google Patents


Info

Publication number
WO2021004114A1
Authority
WO
WIPO (PCT)
Prior art keywords
emoticon
facial features
facial
face image
package
Prior art date
Application number
PCT/CN2020/085573
Other languages
English (en)
Chinese (zh)
Inventor
向纯玉
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司 filed Critical 深圳壹账通智能科技有限公司
Publication of WO2021004114A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Definitions

  • This application relates to the field of micro-expression recognition in artificial intelligence, and in particular to a method, device, computer equipment and storage medium for automatically generating an expression pack.
  • the embodiments of this application provide a method, device, computer equipment and storage medium for automatically generating emoticons.
  • the operation of this application is simple, the fusion effect and consistency of the generated personalized emoticon packs are better, the user experience is improved, and user activity and participation are also increased.
  • a method for automatically generating emoticons including:
  • the emoticon package picture is matched from the preset emoticon package library according to the expression tag of the face image, and the location of the facial features of the matched emoticon package picture is determined; wherein each of the emoticon package pictures has at least one set of facial features, and each of the emoticon package pictures is associated with at least one expression label;
  • a device for automatically generating emoticons including:
  • the acquisition module is used to acquire a face image
  • An extraction module for extracting facial micro-expressions from the facial image, and obtaining the facial expression tag of the facial image according to the facial micro-expressions;
  • the matching module is configured to match an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determine the location of the facial features of the matched emoticon package picture; wherein the emoticon package pictures in the preset emoticon package library all have at least one set of facial features, and each emoticon package picture is associated with at least one expression label;
  • an overlay module for extracting the facial features from the face image, and overlaying the facial features onto the location of the facial features of the emoticon package picture matched from the preset emoticon package library, to generate a personalized emoticon package.
  • a computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and capable of running on the processor.
  • the processor executes the computer-readable instructions to realize the above-mentioned emoticon automatic generation method.
  • a computer-readable storage medium which stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, realizes the aforementioned emoticon package automatic generation method.
  • FIG. 1 is a schematic diagram of an application environment of a method for automatically generating emoticons in an embodiment of the present application
  • FIG. 2 is a flowchart of a method for automatically generating emoticons in an embodiment of the present application
  • FIG. 3 is a flowchart of step S20 of the method for automatically generating emoticons in an embodiment of the present application;
  • FIG. 4 is a flowchart of step S30 of the method for automatically generating emoticons in an embodiment of the present application;
  • FIG. 5 is a flowchart of step S40 of the method for automatically generating emoticons in an embodiment of the present application;
  • FIG. 6 is a flowchart of step S407 of the method for automatically generating emoticons in an embodiment of the present application
  • FIG. 7 is a schematic block diagram of a device for automatically generating emoticons in an embodiment of the present application.
  • FIG. 8 is a schematic block diagram of the extraction module of the emoticon automatic generation device in an embodiment of the present application.
  • Fig. 9 is a schematic diagram of a computer device in an embodiment of the present application.
  • the method for automatically generating emoticons provided by this application can be applied in the application environment as shown in Fig. 1, where the client (computer equipment) communicates with the server through the network.
  • the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a method for automatically generating emoticons is provided. Taking the method applied to the server in FIG. 1 as an example, the method includes the following steps S10-S40:
  • the face image refers to an image containing part or all of the facial features.
  • the face image can be taken by the user through a camera device and uploaded to the server, or it can be pre-stored in a database, from which the server retrieves it at any time as required.
  • in the step S20, that is, extracting facial micro-expressions from the facial image and obtaining the expression tag of the facial image according to the facial micro-expressions, the method includes:
  • the type of the action unit may include, but is not limited to, the internationally used action units (AU) and eye movements in Table 1 below.
  • the eye movements specifically refer to the different movements and gaze directions of the eyes, such as looking left, looking right, upward, downward, to the upper right, and so on; the action units corresponding to the different eye movements and gaze directions may also include a judgment of the magnitude of the eye movement.
  • the database pre-stores the action unit types corresponding to various micro-expression types (such as crying, laughing, or anger), and each micro-expression type corresponds to one or more combinations of action unit types. For example,
  • when the micro-expression type is laughing, this micro-expression type corresponds to at least the following combinations of action unit types: mouth corners raised (AU12 in Table 1); mouth corners raised (AU12) + outer eyebrows raised (AU2 in Table 1); mouth corners raised (AU12) + lips stretched (AU20 in Table 1) + upper and lower lips parted (AU25 in Table 1); and so on. Therefore, it is only necessary to compare all the action unit types extracted in step S201 with the action unit types corresponding to each micro-expression type stored in the database to confirm the micro-expression type.
  • in one rule, if all the action unit types extracted in step S201 include all the action unit types corresponding to a micro-expression type stored in the database (that is, the action unit types extracted in step S201 may also include other action units), the face image can be regarded as having that micro-expression type.
  • alternatively, the micro-expression type of the face image may be confirmed only when all the action unit types extracted in step S201 correspond one-to-one with the action unit types of a micro-expression type stored in the database (with no extra and no missing action units).
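As an illustration of the two matching rules just described, the following sketch (not the patent's own code; the AU combinations are hypothetical examples modeled on the text) confirms a micro-expression type from the extracted action units:

```python
# Hypothetical store of AU combinations per micro-expression type,
# modeled on the "laugh" example in the text (AU12, AU12+AU2, AU12+AU20+AU25).
MICRO_EXPRESSION_AUS = {
    "laugh": [{"AU12"}, {"AU12", "AU2"}, {"AU12", "AU20", "AU25"}],
    "cry":   [{"AU1", "AU15"}],
}

def confirm_micro_expression(extracted_aus, strict=False):
    """Return the first micro-expression type whose AU combination matches.

    Loose mode (the first rule): the extracted AUs may contain extra AUs
    beyond a stored combination.  Strict mode (the second rule): the
    extracted AUs must equal a stored combination exactly.
    """
    extracted = set(extracted_aus)
    for expression, combos in MICRO_EXPRESSION_AUS.items():
        for combo in combos:
            if (extracted == combo) if strict else combo.issubset(extracted):
                return expression
    return None
```

In loose mode a laugh is recognized even when extra action units are present; in strict mode the extracted set must match a stored combination exactly.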
  • S203 Acquire all the expression tags associated with the micro-expression type, and simultaneously acquire the characteristic action units associated with each expression tag;
  • the database has pre-stored expression tags associated with each of the micro expression types, and each micro expression type corresponds to multiple expression tags.
  • for example, when the micro-expression type is laughing, the associated expression tags may include laugh, smile, smirk, wry smile, and so on. Understandably, each expression tag associated with a micro-expression type has at least one characteristic action unit corresponding to it.
  • if all the action unit types extracted from the face image in step S201 include all the characteristic action units associated with an expression tag (that is, the action unit types extracted in step S201 may also include action units other than the characteristic action units corresponding to that expression tag), that expression tag can be regarded as an expression tag of the face image.
  • in this way, the micro-expression type is first confirmed according to the action unit types extracted from the face image (the number of micro-expression types is much smaller than the number of expression tags), and then the action unit types extracted from the face image are matched against the characteristic action units of the expression tags associated with that micro-expression type.
  • thus, the action unit types extracted from the face image do not need to be compared with the characteristic action units of all the expression tags, but only with those of the expression tags corresponding to a few micro-expression types.
  • since the number of expression tags is huge, the amount of calculation is greatly reduced and the server load is reduced.
  • in this way, after all the action unit types of the facial micro-expression are extracted from the facial image in step S201, the matching expression tag can be obtained directly as described above, and that expression tag is recorded as the expression tag of the face image.
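The two-stage matching (first the micro-expression type, then only the tags of that type) can be sketched as follows; the tag-to-feature-AU table is hypothetical, and scanning only the confirmed type's tags is the source of the computational saving claimed above:

```python
# Hypothetical mapping: micro-expression type -> {expression tag -> feature AUs}.
TAG_FEATURE_AUS = {
    "laugh": {"laugh": {"AU12", "AU25"},
              "smile": {"AU12"},
              "smirk": {"AU12", "AU14"}},
    "cry":   {"sob": {"AU1", "AU15"}},
}

def match_expression_tags(micro_expression, extracted_aus):
    """Return every tag of the confirmed micro-expression type whose
    characteristic action units are all present in the extracted AUs.
    Tags of other micro-expression types are never examined."""
    extracted = set(extracted_aus)
    return [tag
            for tag, feature_aus in TAG_FEATURE_AUS.get(micro_expression, {}).items()
            if feature_aus.issubset(extracted)]
```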
  • according to the expression tag of the face image acquired in step S20, it is first necessary to obtain the emoticon package pictures associated with that expression tag in the preset emoticon package library (one emoticon package picture may correspond to one or more expression tags). At this time, more than one emoticon package picture corresponding to the expression tag may be obtained, so one of them needs to be selected as required, and the selected emoticon package picture is recorded as the emoticon package picture matched from the preset emoticon package library.
  • in the step S30, that is, matching an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determining the location of the facial features of the matched emoticon package picture, the method includes:
  • the face contour is the edge contour of the face in the face image.
  • S302 Select, from the preset emoticon package library, all the emoticon package pictures having the same expression tag as the face image;
  • S303 Determine the location of the facial features and the contour of that location in each selected emoticon package picture; the location of the facial features refers to the position of the facial features of any subject with a face in an emoticon package picture, such as a person, an animal, or a cartoon character, and the location contour refers to the contour of the corresponding position of the facial features (such as the facial contour of a human face).
  • that is, from all the emoticon package pictures corresponding to the expression tag of the face image in the preset emoticon package library, the emoticon package picture whose location contour has the highest similarity to the face contour of the face image is selected and recorded as the emoticon package picture matched from the preset emoticon package library. In this way, the facial features subsequently extracted from the face image fit the location of the facial features to the greatest extent, so that they can replace the image content corresponding to the facial features in the emoticon package picture and generate a new personalized emoticon package.
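The contour-similarity selection above might look like the following sketch. The bounding-box aspect ratio used here is a deliberately crude stand-in for a real shape-similarity metric (a production system could use, e.g., OpenCV's cv2.matchShapes); contours are plain lists of (x, y) points, and the candidate list is hypothetical:

```python
def aspect_ratio(contour):
    """Width/height of the contour's bounding box; a crude shape signature."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return (max(xs) - min(xs)) / (max(ys) - min(ys))

def best_match(face_contour, candidates):
    """Pick the candidate emoticon picture whose facial-feature location
    contour is closest in shape to the face contour of the face image.
    candidates: list of (picture_id, location_contour)."""
    target = aspect_ratio(face_contour)
    return min(candidates, key=lambda c: abs(aspect_ratio(c[1]) - target))[0]
```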
  • the matching an emoticon package picture from a preset emoticon package library according to the emoticon tag of the face image, and determining the location of the facial features of the matched emoticon package picture includes:
  • all the emoticon package pictures having the same expression tag as the face image are selected from the preset emoticon package library; the emoticon package pictures in the preset emoticon package library may be emoticon packages legally obtained from a third party that specializes in making emoticon packages, and every emoticon package picture in the preset emoticon package library needs to have facial features (that is, a face, which may be partial or complete).
  • according to a preset screening rule, the emoticon package picture that uniquely matches the face image is determined; understandably, the screening rule may be random selection, or selection based on frequency of use. For example, the emoticon package picture most frequently used by the user personally can be selected, that is, the more frequently the user uses an emoticon package picture, the greater the probability that it will be selected.
  • the screening rule may also be to first count the total number of times all users have used each emoticon package picture corresponding to the expression tag in the preset emoticon package library, and to select the emoticon package picture with the highest total usage count.
  • the screening rule may also convert the total usage count into a popularity level through a preset conversion rule (the conversion rule contains the association between different ranges of total usage counts and different popularity levels, and a given total usage count corresponds to exactly one popularity level), and select according to popularity.
  • the emoticon package picture determined according to the screening rule is recorded as the emoticon package picture that uniquely matches the face image, and at the same time the location of the facial features is extracted from the emoticon package picture that uniquely matches the face image.
  • the emoticon package picture that uniquely matches the face image can be confirmed according to the screening rule.
  • for example, the screening rule can select according to the number of times the user has used each picture, so that the template is chosen according to user preferences; the personalized emoticon package generated in this way is more in line with the user's usage habits and brings a better user experience.
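A minimal sketch of the popularity-based screening rule, with hypothetical count ranges and popularity levels (the patent only requires that each total usage count map to exactly one level):

```python
# Assumed ranges: [low, high) of total usage count -> popularity level.
POPULARITY_RANGES = [(0, 100, 1), (100, 1000, 2), (1000, float("inf"), 3)]

def popularity(total_uses):
    """Map a total usage count to its single popularity level."""
    for low, high, level in POPULARITY_RANGES:
        if low <= total_uses < high:
            return level

def screen_by_popularity(candidates):
    """candidates: list of (picture_id, total_uses).  Return the id with the
    highest popularity level, breaking ties by the raw usage count."""
    return max(candidates, key=lambda c: (popularity(c[1]), c[1]))[0]
```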
  • to generate the personalized emoticon package, the location of the facial features in the emoticon package picture needs to be determined first; the facial features extracted from the face image then replace the original image content within the contour of the location of the facial features (that is, the contour of the corresponding position of the facial features), and the emoticon package picture whose facial features have been replaced is synthesized into a new personalized emoticon package. For example, if the main body of the emoticon package picture is a cute kitten, the cat's face is replaced with the facial features of the face image, and the generated personalized emoticon package is a cute kitten with the facial features of the face image.
  • the extracting of the facial features from the face image, and the overlaying of the facial features onto the location of the facial features of the emoticon package picture matched from the preset emoticon package library, includes:
  • the location contour of the facial features refers to the edge contour of the area occupied by the facial features in the emoticon package picture;
  • the overall placement angle of the facial features refers to the inclination of the facial features and whether they are upright or inverted.
  • the overall placement angle can be determined based on one or more of the facial features; for example, the angle between the line through the opposite corners of an eye and the horizontal determines the inclination angle, and whether the nose or the mouth is below the eyes (similarly, whether the mouth is below the nose, and so on) determines whether the face is upright or inverted (the nose or mouth below the eyes means upright; otherwise inverted);
  • the contour area of the facial features refers to the total area of the contour of the location.
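The placement-angle rules of step S401 can be sketched as follows (illustrative only; image coordinates are assumed to grow downward, as is conventional for raster images):

```python
import math

def tilt_degrees(corner_a, corner_b):
    """Angle between the line through an eye's opposite corners and the
    horizontal, in degrees; determines the inclination of the face."""
    dx = corner_b[0] - corner_a[0]
    dy = corner_b[1] - corner_a[1]
    return math.degrees(math.atan2(dy, dx))

def is_upright(eye_center, nose_center):
    """Upright when the nose center lies below the eye center
    (larger y means lower in the image)."""
    return nose_center[1] > eye_center[1]
```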
  • S402 Extract all facial features within the contour of the face in the face image, and determine the positional relationship between the center points of the facial features and the linear distance between the center points of the facial features ;
  • the facial features include but are not limited to ears, eyebrows, eyes, nose, mouth, etc.
  • the positional relationship between the center points of the facial features refers to the distance, relative orientation, etc. between the center points of the facial features.
  • S403 Create a new canvas, where the contour of the canvas is consistent with the location contour of the facial features, and preprocess the facial features according to a preset image processing method; that the contour of the canvas is consistent with the location contour of the facial features means that the canvas contour and the location contour of the facial features can completely overlap.
  • the preset image processing methods include, but are not limited to, performing transparency adjustment and color toning processing on the facial features, so as to make the generated personalized emoticon package more natural and beautiful.
  • the center of all the facial features and the center of the canvas contour can be used as alignment points to place all the facial features into the canvas contour; if the overall angle formed by all the facial features is inconsistent with the overall placement angle, it needs to be adjusted to be consistent with the overall placement angle before all the facial features are placed into the canvas contour.
  • the uniform scaling ratio mentioned above needs to be selected under the condition that the ratio of the figure area enclosed by the outermost facial features, after adjustment, to the contour area of the facial features is within the preset ratio range (candidate ratios can be arranged by priority in advance and stored in the database, and the server automatically selects from the database a ratio that meets the condition); if more than one scaling ratio falls within the preset ratio range, one of them can be chosen at random or in order of priority.
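The scaling-ratio selection just described might be sketched as below; the acceptable ratio range and the priority-ordered candidate scales are assumptions, standing in for values that the text says are pre-arranged in the database:

```python
# Assumed acceptable range for (scaled feature area) / (location contour area).
PRESET_RATIO = (0.6, 0.9)
# Assumed candidate uniform scales, tried in priority order.
CANDIDATE_SCALES = [1.5, 1.2, 1.0, 0.8, 0.6]

def choose_scale(feature_area, contour_area):
    """Return the first candidate scale whose squared factor brings the
    enclosed feature area within the preset fraction of the contour area;
    None when no candidate satisfies the condition."""
    for scale in CANDIDATE_SCALES:
        ratio = feature_area * scale * scale / contour_area
        if PRESET_RATIO[0] <= ratio <= PRESET_RATIO[1]:
            return scale
    return None
```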
  • S406 Cover the canvas containing the facial features over the location contour of the facial features of the emoticon package picture matched from the preset emoticon package library; that is, since the canvas contour of the canvas and the location contour of the facial features can completely overlap, in this step the canvas can directly replace the original image content within the contour of the facial features.
  • S407 Perform image synthesis processing on the emoticon package picture covered with the canvas outline to generate the personalized emoticon package.
  • the image synthesis processing includes, but is not limited to, merging the emoticon package pictures covered with the facial features into the same picture and performing unified exposure and toning processing to make it more natural.
  • the step S407, that is, generating the personalized emoticon package, includes:
  • S4071 Receive a text adding instruction, and obtain the emoticon text entered by the user and the text box number selected by the user; the text adding instruction refers to an instruction triggered when, during the image synthesis processing performed in step S407, the user also wants to add text to the personalized emoticon package, and the emoticon text refers to the text that the user wants to configure for the personalized emoticon package.
  • the text box number refers to the unique identifier of the text box that can be added to the personalized emoticon package, and each text box number corresponds to the style of a text box.
  • each text box number has a corresponding text box size that can be filled with emoticon text, and each text box corresponds to a default text format; if the user does not modify the default text format, the emoticon text is filled into the text box in the default text format.
  • S4073 Obtain the number of characters in the emoticon text, and adjust the character size in the default text format according to the number of characters and the size of the text box; that is, the character size is automatically adjusted according to the number of characters in the emoticon text (that is, the character length); understandably, text format items other than the character size in the default text format can also be adjusted as required.
  • S4074 Generate a text box corresponding to the text box number at a preset position in the emoticon package picture or at a position selected by the user, and fill the emoticon text into the text box according to the adjusted default text format; that is, after the user adjusts the default text format, the emoticon text is filled into the text box in the adjusted default text format.
  • S4075 After assembling the emoticon package picture and the text box, generate the personalized emoticon package.
  • the assembling process refers to merging the text box and the emoticon package picture after image synthesis processing into the same personalized emoticon package.
  • the above-mentioned embodiment also supports user-defined input of emoticon text: by judging the number of characters of the emoticon text (that is, the character length), the character size and other items are automatically adjusted, the emoticon text is filled into the text box, and the text box and the emoticon package picture are automatically assembled into a personalized emoticon package. Understandably, prop effects can also be added to the personalized emoticon package, such as props with effects of hearts, hats, and stars.
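One plausible version of the character-size adjustment in step S4073 (the sizing rule and the pixel values are assumptions, not taken from the patent):

```python
def fit_char_size(text, box_width_px, default_size_px=32, min_size_px=10):
    """Shrink the default character size until the emoticon text fits the
    text box width, with a floor so the text stays legible."""
    if not text:
        return default_size_px
    size = min(default_size_px, box_width_px // len(text))
    return max(size, min_size_px)
```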
  • an apparatus for automatically generating emoticons corresponds to the method for automatically generating emoticons in the above-mentioned embodiment one-to-one.
  • the device for automatically generating emoticons includes:
  • the obtaining module 11 is used to obtain a face image
  • the extraction module 12 is configured to extract facial micro-expressions from the facial images, and obtain the facial expression tags of the facial images according to the facial micro-expressions;
  • the matching module 13 is configured to match an emoticon package picture from a preset emoticon package library according to the expression tag of the face image, and determine the location of the facial features of the matched emoticon package picture; wherein the emoticon package pictures in the preset emoticon package library all have at least one set of facial features, and each emoticon package picture is associated with at least one expression label;
  • the covering module 14 is used to extract the facial features in the face image, and cover the facial features to the facial features of the emoticon package picture matched from the preset emoticon package library. Location, generate personalized emoticons.
  • the extraction module 12 includes:
  • the extraction unit 121 is configured to extract all the action unit types of the facial micro-expression from the facial image
  • the confirming unit 122 is configured to confirm the micro-expression type of the face image based on all the action unit types extracted from the face image;
  • the acquiring unit 123 is configured to acquire all the emoticon tags associated with the micro-expression type, and at the same time acquire the characteristic action unit associated with each emoticon tag;
  • the matching unit 124 is configured to match all the action unit types extracted from the face image with the characteristic action units associated with each expression tag; when all the action unit types extracted from the face image include all the characteristic action units associated with an expression tag, that expression tag is recorded as the expression tag of the face image.
  • Each module in the above-mentioned emoticon package automatic generating device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the foregoing modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 9.
  • the computer equipment includes a processor, a memory, a network interface and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • a computer device including a memory, a processor, and computer-readable instructions stored on the memory and capable of running on the processor, and the processor implements at least the following steps when executing the computer-readable instructions:
  • the emoticon package picture is matched from the preset emoticon package library according to the expression tag of the face image, and the location of the facial features of the matched emoticon package picture is determined; wherein each of the emoticon package pictures has at least one set of facial features, and each of the emoticon package pictures is associated with at least one expression label;
  • a computer-readable storage medium is provided.
  • the computer-readable storage medium is a volatile storage medium or a non-volatile storage medium, and computer-readable instructions are stored thereon. At least the following steps are implemented when executed by the processor:
  • the emoticon package picture is matched from the preset emoticon package library according to the expression tag of the face image, and the location of the facial features of the matched emoticon package picture is determined; wherein each of the emoticon package pictures has at least one set of facial features, and each of the emoticon package pictures is associated with at least one expression label;
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a method and apparatus for automatically generating emoticon packs (memes), a computer device, and a storage medium. The method comprises: extracting a facial micro-expression from a facial image and acquiring an expression tag of the facial image according to the facial micro-expression (S20); matching, according to the expression tag of the facial image, an emoticon pack picture from a preset emoticon pack library and determining the locations of the facial features in the matched picture (S30); and extracting the facial features from the facial image and overlaying them onto the locations of the facial features in the emoticon pack picture matched from the preset library, so as to generate a personalized emoticon pack (S40). The method is easy to implement and, since the facial image in the generated personalized emoticon pack is consistent with the original picture, the fusion effect and the consistency of merging the facial features of the facial image with the original picture are better, which improves the user experience and also increases user activity and engagement.
PCT/CN2020/085573 2019-07-05 2020-04-20 Method and apparatus for automatically generating emoticon packs, computer device and storage medium WO2021004114A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910602401.7A CN110458916A (zh) 2019-07-05 2019-11-15 Emoticon pack automatic generation method, apparatus, computer device, and storage medium
CN201910602401.7 2019-07-05

Publications (1)

Publication Number Publication Date
WO2021004114A1 true WO2021004114A1 (fr) 2021-01-14

Family

ID=68482133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/085573 WO2021004114A1 (fr) 2020-04-20 Method and apparatus for automatically generating emoticon packs, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN110458916A (fr)
WO (1) WO2021004114A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905791A (zh) * 2021-02-20 2021-06-04 北京小米松果电子有限公司 Emoticon pack generation method and apparatus, and storage medium
CN113177994A (zh) * 2021-03-25 2021-07-27 云南大学 Social-network emoticon pack synthesis method based on image and text semantics, electronic device, and computer-readable storage medium
CN113485596A (zh) * 2021-07-07 2021-10-08 游艺星际(北京)科技有限公司 Virtual model processing method and apparatus, electronic device, and storage medium
CN117150063A (zh) * 2023-10-26 2023-12-01 深圳慢云智能科技有限公司 Image generation method and system based on scene recognition

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458916A (zh) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 Emoticon pack automatic generation method, apparatus, computer device, and storage medium
CN110889379B (zh) * 2019-11-29 2024-02-20 深圳先进技术研究院 Emoticon pack generation method, apparatus, and terminal device
CN111145283A (zh) * 2019-12-13 2020-05-12 北京智慧章鱼科技有限公司 Personalized expression generation method and apparatus for an input method
CN111046814A (zh) * 2019-12-18 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111368127B (zh) * 2020-03-06 2023-03-24 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device, and storage medium
CN112102157A (zh) * 2020-09-09 2020-12-18 咪咕文化科技有限公司 Video face swapping method, electronic device, and computer-readable storage medium
CN112270733A (zh) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 AR emoticon pack generation method and apparatus, electronic device, and storage medium
CN112214632B (zh) * 2020-11-03 2023-11-17 虎博网络技术(上海)有限公司 Text retrieval method and apparatus, and electronic device
CN114816599B (zh) * 2021-01-22 2024-02-27 北京字跳网络技术有限公司 Image display method, apparatus, device, and medium
CN113727024B (zh) * 2021-08-30 2023-07-25 北京达佳互联信息技术有限公司 Multimedia information generation method and apparatus, electronic device, and storage medium
CN117974853A (zh) * 2024-03-29 2024-05-03 成都工业学院 Adaptive switching generation method, system, terminal, and medium for homologous micro-expression images

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063683A (zh) * 2014-06-06 2014-09-24 Facial-recognition-based expression input method and apparatus
CN107219917A (zh) * 2017-04-28 2017-09-29 Emoji generation method and apparatus, computer device and readable medium
US20180024726A1 (en) * 2016-07-21 2018-01-25 Cives Consulting AS Personified Emoji
CN108197206A (zh) * 2017-12-28 2018-06-22 Emoticon pack generation method, mobile terminal and computer-readable storage medium
CN110458916A (zh) * 2019-07-05 2019-11-15 Method and apparatus for automatically generating emoticon packs, computer device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573527B (zh) * 2018-04-18 2020-02-18 Expression picture generation method, device and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905791A (zh) * 2021-02-20 2021-06-04 Emoticon pack generation method and apparatus, and storage medium
US11922725B2 (en) 2021-02-20 2024-03-05 Method and device for generating emoticon, and storage medium
CN113177994A (zh) * 2021-03-25 2021-07-27 Image-text-semantics-based social network emoticon pack synthesis method, electronic device and computer-readable storage medium
CN113177994B (zh) 2021-03-25 2022-09-06 Image-text-semantics-based social network emoticon pack synthesis method, electronic device and computer-readable storage medium
CN113485596A (zh) * 2021-07-07 2021-10-08 Virtual model processing method and apparatus, electronic device and storage medium
CN113485596B (zh) 2021-07-07 2023-12-22 Virtual model processing method and apparatus, electronic device and storage medium
CN117150063A (zh) * 2023-10-26 2023-12-01 Scene-recognition-based image generation method and system
CN117150063B (zh) 2023-10-26 2024-02-06 Scene-recognition-based image generation method and system

Also Published As

Publication number Publication date
CN110458916A (zh) 2019-11-15

Similar Documents

Publication Publication Date Title
WO2021004114A1 (fr) Method and apparatus for automatically generating memes, computer device and storage medium
US11455729B2 (en) Image processing method and apparatus, and storage medium
US11861936B2 (en) Face reenactment
US20210144338A1 (en) Video conferencing method
WO2016177290A1 (fr) Method and system for generating and using an expression for a virtual image created by free combination
JP6972043B2 (ja) Information processing device, information processing method, and program
US20210374839A1 (en) Generating augmented reality content based on third-party content
WO2020211347A1 (fr) Facial-recognition-based image modification method and apparatus, and computer device
US11776187B2 (en) Digital makeup artist
US11961169B2 (en) Digital makeup artist
WO2022257766A1 (fr) Image processing method and apparatus, device and medium
WO2015184903A1 (fr) Image processing method and device
US11430158B2 (en) Intelligent real-time multiple-user augmented reality content management and data analytics system
KR101757184B1 (ko) System and method for automatically generating and classifying emotion-expression content
CN115222899B (zh) Virtual digital human generation method, system, computer device and storage medium
WO2023138345A1 (fr) Virtual image generation method and system
CN114841851A (zh) Image generation method and apparatus, electronic device and storage medium
CN114443182A (zh) Interface switching method, storage medium and terminal device
CN117391805A (zh) Try-on image generation method, generation system, electronic device and storage medium
CN115554701A (zh) Virtual character control method and apparatus, computer device and storage medium
KR20230118191A (ko) Digital makeup artist
CN117351121A (zh) Digital human editing control method and apparatus, electronic device and storage medium
CN117830527A (zh) Customizable digital human portrait implementation method, system and storage medium
CN113031768A (zh) Customer service method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20836230

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20836230

Country of ref document: EP

Kind code of ref document: A1