CN111541950A - Expression generation method and device, electronic equipment and storage medium - Google Patents

Expression generation method and device, electronic equipment and storage medium

Info

Publication number
CN111541950A
Authority
CN
China
Prior art keywords
expression
image
presenting
template set
image acquisition
Prior art date
Legal status
Granted
Application number
CN202010378163.9A
Other languages
Chinese (zh)
Other versions
CN111541950B (en)
Inventor
刘佳卉
陈柯辰
钟媛
程功凡
佘渡离
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010378163.9A
Publication of CN111541950A
Application granted
Publication of CN111541950B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/4788: Supplemental services for communicating with other users, e.g. chatting
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/81: Monomedia components of content generated by the content creator independently of the distribution process
    • H04N 23/632: Graphical user interfaces for displaying or modifying preview images prior to image capturing
    • H04N 23/67: Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an expression generation method and apparatus, an electronic device, and a storage medium. The method includes: in response to an expression editing instruction, presenting an image acquisition function entry and presenting at least one expression template set, where the expression template set comprises at least two expression templates; in response to a trigger operation on the image acquisition function entry, acquiring and presenting an image of a first object; and, in response to an expression generation instruction for a first expression template set, presenting a first expression corresponding to each expression template in the first expression template set, where the first expression is obtained by fusing the image of the first object with the expression templates in the first expression template set. The method and apparatus can generate expressions fused with the user's features in batches and make meme battles ("dòutú") in user chats more entertaining.

Description

Expression generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of internet technologies and artificial intelligence technologies, and in particular, to a method and an apparatus for generating an expression, an electronic device, and a storage medium.
Background
Artificial Intelligence (AI) comprises the theories, methods, techniques, and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive discipline of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of such machines, giving them the capabilities of perception, reasoning, and decision-making.
With the continuous development of artificial intelligence, the technology is increasingly applied in social clients such as instant messaging. Users routinely use various expressions when chatting online through an instant messaging client; besides cute expressions, meme-battle ("dòutú") stickers with a humorous bent are widely used. Most expressions, however, are released by designers for users to download, so the expressions users send one another tend to be identical, which defeats the purpose of a meme battle and offers little personalization. In some techniques that let a user synthesize a meme-battle sticker from a captured picture and a selected expression pendant, one capture produces only one expression: production efficiency is low, the fast pace of a meme battle is hard to match, and the user experience is poor.
Disclosure of Invention
The embodiments of the present application provide an expression generation method and apparatus, an electronic device, and a storage medium, which can generate expressions fused with the user's features in batches and make meme battles in user chats more entertaining.
The technical solutions in the embodiments of the present application are implemented as follows:
The embodiments of the present application provide an expression generation method, which includes:
in response to an expression editing instruction, presenting an image acquisition function entry and presenting at least one expression template set, where the expression template set comprises at least two expression templates;
in response to a trigger operation on the image acquisition function entry, acquiring and presenting an image of a first object; and
in response to an expression generation instruction for a first expression template set, presenting a first expression corresponding to each expression template in the first expression template set,
where the first expression is obtained by fusing the image of the first object with the expression templates in the first expression template set.
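Read as an event flow, the claimed method has three steps: present the entry and template sets, acquire the image, then fuse in batch. A minimal runnable Python sketch of that flow follows; every identifier in it (ExpressionEditor, fuse, and so on) is an illustrative assumption, not an API defined by this application.

    # Sketch only: names are illustrative, not defined by this application.
    def fuse(image, template):
        # Placeholder for the face-fusion step detailed later in the text.
        return f"expression({image} + {template})"

    class ExpressionEditor:
        def __init__(self):
            self.first_object_image = None
            self.template_sets = []

        def on_edit_instruction(self, template_sets):
            # Step 1: present the image acquisition entry and at least one
            # template set; each set holds at least two templates.
            assert all(len(s) >= 2 for s in template_sets)
            self.template_sets = template_sets

        def on_acquisition_triggered(self, image):
            # Step 2: acquire and present the image of the first object
            # (captured with the camera or selected from an album).
            self.first_object_image = image

        def on_generate(self, set_index):
            # Step 3: one instruction fuses the image with every template
            # in the selected set, yielding a batch of first expressions.
            return [fuse(self.first_object_image, t)
                    for t in self.template_sets[set_index]]

    editor = ExpressionEditor()
    editor.on_edit_instruction([["hot_1.gif", "hot_2.gif"]])
    editor.on_acquisition_triggered("first_object.jpg")
    print(editor.on_generate(0))  # two expressions from one photo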
In the foregoing solution, presenting an image capture interface for capturing an image of the first object includes:
presenting the image capture interface and presenting at least one of the following: shooting prompt information, and an expression frame for displaying an expression preview effect;
where the shooting prompt information prompts the first object to adjust at least one of the following: shooting posture, shooting angle, and shooting position.
In the foregoing solution, after presenting the first expression corresponding to each expression template in the first expression template set, the method further includes:
in response to a selection operation for a second expression template set, presenting each expression template contained in the second expression template set;
in response to an expression generation instruction for the second expression template set, presenting a third expression corresponding to each expression template in the second expression template set;
where the third expression is obtained by fusing the image of the first object with the expression templates in the second expression template set.
An embodiment of the present application further provides an expression generation apparatus, including:
a first presentation module, configured to, in response to an expression editing instruction, present an image acquisition function entry and present at least one expression template set, where the expression template set comprises at least two expression templates;
an acquisition module, configured to, in response to a trigger operation on the image acquisition function entry, acquire and present an image of a first object; and
a second presentation module, configured to, in response to an expression generation instruction for a first expression template set, present a first expression corresponding to each expression template in the first expression template set,
where the first expression is obtained by fusing the image of the first object with the expression templates in the first expression template set.
In the foregoing solution, the apparatus further includes:
a first receiving module, configured to present a conversation function bar in a conversation interface and present an expression editing function item in the conversation function bar;
and, in response to a trigger operation on the expression editing function item, receive the expression editing instruction.
In the foregoing solution, the apparatus further includes:
a second receiving module, configured to present a conversation function bar in a conversation interface and present an expression adding function item in the conversation function bar;
in response to a trigger operation on the expression adding function item, present an expression selection area containing at least one expression, and present an expression editing entry in the expression selection area;
and, in response to a trigger operation on the expression editing entry, receive the expression editing instruction.
In the foregoing solution, the acquisition module is further configured to: when the image acquisition function entry is an image capture function entry, in response to a trigger operation on the entry, present an image capture interface for capturing an image of the first object;
present a shooting button for image capture in the image capture interface;
and, in response to a trigger operation on the shooting button, capture and present the image of the first object.
In the foregoing solution, the acquisition module is further configured to present an image capture interface and present in it at least one of the following: shooting prompt information, and an expression frame for displaying an expression preview effect;
where the shooting prompt information prompts the first object to adjust at least one of the following: shooting posture, shooting angle, and shooting position.
In the foregoing solution, the acquisition module is further configured to: when the image acquisition function entry is an image capture function entry, in response to a trigger operation on the entry, present an image capture interface containing a facial expression frame;
present a shooting button for image capture in the image capture interface;
and, in response to a trigger operation on the shooting button, capture an image corresponding to the face of the first object based on the facial expression frame.
In the foregoing solution, the acquisition module is further configured to: when the image acquisition function entry is an image selection entry, in response to a trigger operation on the entry, present an image selection interface for selecting an image of the first object from an album,
present at least two images in the image selection interface,
and, in response to an image selection operation triggered on the image selection interface, present the image selected by that operation as the image of the first object.
In the foregoing solution, the second presentation module is further configured to recognize the image of the first object to obtain the face region of the first object;
fuse the face region of the first object with the face region of each expression template in the first expression template set, obtaining, for each template, a first expression fused with the facial features of the first object;
and present each of the obtained first expressions.
In the foregoing solution, the second presentation module is further configured to recognize the facial-feature regions within the face region of the first object;
perform edge smoothing on the face region,
perform contrast enhancement on the facial-feature regions within the face region to obtain a processed face region,
and process the face region of each expression template in the first expression template set according to the processed face region, obtaining, for each template, a first expression fused with the facial features of the first object.
In the foregoing solution, the second presentation module is further configured to receive a template set selection operation and use the expression template set selected by that operation as the first expression template set;
and, in response to an expression generation instruction triggered by the template set selection operation, present a first expression corresponding to each expression template in the first expression template set.
In the foregoing solution, the apparatus further includes:
a third receiving module, configured to receive a template set selection operation, use the expression template set selected by that operation as the first expression template set, and set the state of the first expression template set to selected;
where the selected state indicates that, once the image of the first object is acquired and presented, it is to be fused with each expression template in the first expression template set.
In the foregoing solution, the apparatus further includes:
a fourth receiving module, configured to receive a focusing operation on the image of the first object;
and, in response to an expression generation instruction triggered by the focusing operation, present a first expression corresponding to each expression template in the first expression template set.
In the foregoing solution, the acquisition module is further configured to present a first thumbnail of the image of the first object;
correspondingly, the second presentation module is further configured to present a first expression corresponding to each expression template in the first expression template set when the first thumbnail is in focus.
In the foregoing solution, the apparatus further includes:
a third presentation module, configured to present a second thumbnail of an image of a second object, the second thumbnail being obtained through the image acquisition function entry;
in response to a focusing instruction for the second thumbnail, switch focus from the first thumbnail to the second thumbnail;
and, in response to an expression generation instruction for the first expression template set, present a second expression corresponding to each expression template in the first expression template set,
where the second expression is obtained by fusing the image of the second object with the expression templates in the first expression template set.
In the foregoing solution, the apparatus further includes:
a fourth presentation module, configured to, in response to a selection operation for a second expression template set, present each expression template contained in the second expression template set;
and, in response to an expression generation instruction for the second expression template set, present a third expression corresponding to each expression template in the second expression template set,
where the third expression is obtained by fusing the image of the first object with the expression templates in the second expression template set.
An embodiment of the present application further provides an electronic device, including:
a memory for storing executable instructions;
and a processor, configured to implement the expression generation method provided in the embodiments of the present application when executing the executable instructions stored in the memory.
An embodiment of the present application further provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the expression generation method provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
In response to an expression editing instruction triggered by the user, an image of a first object is acquired through the image acquisition function entry and fused with the expression templates in a presented expression template set; since the set comprises a plurality of expression templates, one expression generation instruction produces the expressions corresponding to all templates in the set at once. Expressions fused with the user's features can thus be generated in batches, which makes meme battles in chat more entertaining and increases user stickiness with the product.
Drawings
Fig. 1 is a flowchart illustrating a method of generating an expression provided in the related art;
fig. 2 is a schematic view of an implementation scenario of a method for generating an expression provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for generating an expression according to an embodiment of the present application;
fig. 5A is a first flowchart of triggering an expression editing instruction according to an embodiment of the present application;
fig. 5B is a second flowchart of triggering an expression editing instruction according to an embodiment of the present application;
fig. 6A is a first flowchart of acquiring an image of a first object according to an embodiment of the present application;
fig. 6B is a second flowchart of acquiring an image of a first object according to an embodiment of the present application;
fig. 6C is a third flowchart of acquiring an image of a first object according to an embodiment of the present application;
FIG. 7A is a schematic diagram of a selection of an expression template set provided in an embodiment of the present application;
FIG. 7B is a diagram illustrating an expression of a first object according to an embodiment of the present application;
fig. 8A is a first schematic diagram of a flow triggering an expression generation instruction according to an embodiment of the present application;
fig. 8B is a second schematic diagram of a flow triggering an expression generation instruction according to an embodiment of the present application;
fig. 8C is a third schematic diagram of a flow triggering an expression generation instruction according to an embodiment of the present application;
fig. 9 is a schematic flowchart of fusion generating a first expression according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a presentation of an image of a first object via a thumbnail provided by an embodiment of the application;
fig. 11 is a schematic diagram illustrating switching between a first expression and a second expression according to an embodiment of the present application;
fig. 12 is a schematic flowchart illustrating a process of generating each third expression corresponding to the second expression template set according to an embodiment of the present application;
fig. 13 is a schematic flowchart of a method for generating an expression according to an embodiment of the present application;
fig. 14 is a schematic flowchart of a method for generating an expression according to an embodiment of the present application;
fig. 15 is a flowchart illustrating a method for generating an expression according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an expression generation apparatus according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described below in further detail with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and "third" are used only to distinguish similar objects and do not denote a particular order. It should be understood that "first", "second", and "third" may be interchanged in a specific order or sequence where permissible, so that the embodiments of the application described herein can be implemented in an order other than the one shown or described here.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms used in the embodiments are explained; the following explanations apply to these terms.
1) In response to: indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, the operation(s) may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
2) Selfie expression: an expression made from a user's selfie photo or video, carrying the user's own face.
3) Selfie pendant: a designed material pattern superimposed on the picture during a selfie, making the selfie more beautiful or more interesting.
4) Dòutú ("meme battle", literally "fighting with pictures"): in a group or private chat, users send funny expressions to entertain one another; common in social software.
5) Meme-battle sticker: an expression that conveys a person's mood through a simple hand drawing or an exaggerated avatar, usually synthesized by rendering a famous person's face in black and white.
6) Dynamic expression: an animated expression made from multiple frames of pictures.
7) Face recognition technology: based on human facial features, it first judges whether a face is present in an input image or video stream and, if so, further gives the position and size of each face and the position of each main facial organ. It is an important branch of applying computer vision to biometric recognition. Computer Vision (CV) is the science of studying how to make machines "see"; computer vision techniques typically include image processing, image recognition, video processing, and video content/behavior recognition.
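As a concrete illustration of the detection step this term covers (not necessarily the detector used by this application), the following Python sketch locates faces with OpenCV's bundled Haar cascade and reports the position and size of each; it assumes opencv-python is installed and a file named selfie.jpg exists.

    import cv2

    # Face-detection sketch using OpenCV's bundled Haar cascade; the
    # application does not mandate a particular detector, so this choice
    # is an assumption made for brevity.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("selfie.jpg")  # assumed input path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Each detection is (x, y, w, h): the position and size of a face,
    # matching the output described in the paragraph above.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        print(f"face at ({x}, {y}), size {w}x{h}")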
When an expression carrying a user image is generated in the related art, referring to fig. 1 (a flowchart of an expression generation method provided in the related art), the terminal shoots a static picture containing the user's face through its camera function and then generates a static expression fused with the user's face from the captured picture and an expression pendant selected by the user. One capture therefore yields only one expression, so production efficiency is low and the fast pace of a meme battle is hard to match; moreover, only static expressions can be produced, which looks monotonous and makes for a poor user experience.
In view of the above, embodiments of the present application provide an expression generation method and apparatus, an electronic device, and a storage medium that solve at least the above problems, as described below.
Based on the above explanation of terms, an implementation scenario of the expression generation method provided in the embodiments of the present application is described first. Referring to fig. 2, a schematic diagram of such a scenario: to support an exemplary application, an application client, such as an instant messaging client, is installed on the terminals (terminal 200-1 and terminal 200-2); the terminals are connected to the server 100 through a network 30, which may be a wide area network, a local area network, or a combination of the two, and data is transmitted over wireless or wired links.
The terminal (e.g. terminal 200-1) is configured to, in response to an expression editing instruction, present an image acquisition function entry and present at least one expression template set; in response to a trigger operation on the image acquisition function entry, acquire and present an image of a first object; and, in response to an expression generation instruction for a first expression template set, send that instruction to the server.
The server 100 is configured to receive the expression generation instruction for the first expression template set and, in response, fuse the image of the first object with each expression template in the set to obtain the first expressions, which are returned to the terminal.
The terminal (e.g. terminal 200-1) is further configured to receive and present the first expression corresponding to each expression template in the first expression template set.
In practical applications, the server 100 may be a server configured independently to support various services, or may be a server cluster; the terminal (e.g., terminal 200-1) may be any type of user terminal such as a smartphone, tablet, laptop, etc., and may also be a wearable computing device, a Personal Digital Assistant (PDA), a desktop computer, a cellular phone, a media player, a navigation device, a game console, a television, or a combination of any two or more of these or other data processing devices.
The hardware structure of the electronic device of the expression generation method provided in the embodiment of the present application is described in detail below, where the electronic device includes, but is not limited to, a server or a terminal. Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device 300 shown in fig. 3 includes: at least one processor 310, memory 350, at least one network interface 320, and a user interface 330. The various components in electronic device 300 are coupled together by a bus system 340. It will be appreciated that the bus system 340 is used to enable communications among the components connected. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 340 in fig. 3.
The processor 310 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 330 includes one or more output devices 331, including one or more speakers and/or one or more visual display screens, that enable presentation of media content. The user interface 330 also includes one or more input devices 332, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310.
The memory 350 may include either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 350 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 350 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below, to support various operations.
An operating system 351 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 352, for reaching other computing devices via one or more (wired or wireless) network interfaces 320; exemplary network interfaces 320 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 353 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 331 (e.g., a display screen, speakers, etc.) associated with the user interface 330;
an input processing module 354 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the expression generation device provided in the embodiments of the present application may be implemented in software, and fig. 3 illustrates an expression generation device 355 stored in the memory 350, which may be software in the form of programs and plug-ins, and includes the following software modules: a first presenting module 3551, an obtaining module 3552 and a second presenting module 3553, which are logical and thus can be arbitrarily combined or further separated according to the implemented functions, and the functions of the respective modules will be described below.
In other embodiments, the expression generating Device provided in this embodiment may be implemented by a combination of hardware and software, and as an example, the expression generating Device provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the expression generating method provided in this embodiment, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
Based on the above description of the implementation scenario and the electronic device, the expression generation method provided in the embodiments of the present application is described below. Referring to fig. 4, a schematic flowchart of the method: in some embodiments, the method may be implemented by the server or the terminal alone, or by the server and the terminal cooperatively. Taking the terminal as an example, the expression generation method provided in the embodiments of the present application includes:
Step 401: the terminal, in response to an expression editing instruction, presents an image acquisition function entry and presents at least one expression template set.
Here, the terminal may be provided with an instant messaging client; running the client presents a conversation window in which the user converses, through which instant messages such as text and expressions can be sent. As for expressions, the terminal can present both expressions made by others and expressions the user has edited personally, and the user sends an expression message by tapping an expression presented by the terminal.
In practical applications, when the user wants to edit an expression, an expression editing instruction can be triggered. The embodiments of the application provide a method of generating expressions based on the user's image, which makes conversations that use expressions more interesting. Upon receiving an expression editing instruction triggered by the user, the terminal, in response, presents an image acquisition function entry for acquiring the user's image and presents at least one expression template set from which expressions are generated; each set comprises at least two expression templates, so that expressions fused with the user's image can be generated in batches. The expression templates may be dynamic or static.
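One possible data shape for these template sets is sketched below in Python; the field names and the face_box layout are assumptions made for illustration, not structures specified by this application. A dynamic template simply carries more than one frame.

    from dataclasses import dataclass

    @dataclass
    class ExpressionTemplate:
        name: str
        frames: list     # >1 image path for a dynamic template, exactly 1 for static
        face_box: tuple  # (x, y, w, h) region to be replaced by the user's face

    @dataclass
    class ExpressionTemplateSet:
        title: str
        templates: list

        def __post_init__(self):
            # The application requires each presented set to hold
            # at least two expression templates.
            assert len(self.templates) >= 2

    hot = ExpressionTemplateSet("hot", [
        ExpressionTemplate("melting", ["melt_1.png", "melt_2.png"], (20, 10, 64, 64)),
        ExpressionTemplate("sweating", ["sweat.png"], (16, 12, 60, 60)),
    ])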
In some embodiments, the terminal may trigger the expression editing instruction as follows: present a conversation function bar in the conversation interface and present an expression editing function item in it; in response to a trigger operation on the expression editing function item, receive the expression editing instruction.
In practical applications, the terminal presents a conversation function bar in the conversation interface for the user to input messages, hold voice conversations, send pictures, and so on. In the embodiments of the application, an expression editing function item is also presented in the conversation function bar; when the user wants to edit a personalized expression, the function item can be triggered by a tap or similar operation. In response, the terminal receives the user-triggered expression editing instruction and presents an interface containing an image acquisition entry.
Exemplarily, referring to fig. 5A, fig. 5A is a first flowchart of triggering an expression editing instruction according to an embodiment of the present application. Here, the user taps the "GIF" icon of the expression editing function item presented in the conversation function bar to trigger an expression editing instruction; in response to the tap on the "GIF" icon, the terminal receives the instruction and presents an interface for expression editing, i.e. an interface containing an image acquisition entry, through which the image acquisition interface (such as the image capture interface shown in fig. 5A) can be reached.
In some embodiments, the terminal may also trigger the expression editing instruction as follows: present a conversation function bar in the conversation interface and present an expression adding function item in it; in response to a trigger operation on the expression adding function item, present an expression selection area containing at least one expression and present an expression editing entry in that area; in response to a trigger operation on the expression editing entry, receive the expression editing instruction.
In practical applications, the terminal may also present an expression adding function item in the conversation function bar and, in response to a trigger operation on it, present an expression selection area containing various types of expressions that the user can send directly as expression messages. An expression editing entry is also presented in the selection area; when the terminal receives a trigger operation on that entry, it receives, in response, the user-triggered expression editing instruction and presents an interface containing an image acquisition entry.
Exemplarily, referring to fig. 5B, fig. 5B is a second flowchart of triggering an expression editing instruction according to an embodiment of the present application. Here, the terminal presents an expression adding function item as a "heat map" button in the conversation function bar; upon a tap on that button, it presents, in response, an expression selection area containing various types of expressions, such as an "immunity +1" expression and a "wash hands often, wear a mask" expression, and presents an expression editing entry as a "DIY dòutú" button in the area. In response to the user's tap on the "DIY dòutú" button, the terminal receives an expression editing instruction and presents an interface for expression editing, i.e. an interface containing an image acquisition entry, through which the image acquisition interface (such as the image capture interface shown in fig. 5B) can be reached.
Step 402: in response to a trigger operation on the image acquisition function entry, acquire and present the image of the first object.
After the terminal, in response to the user-triggered expression editing instruction, presents the interface containing the image acquisition entry, the user can trigger the presented image acquisition function entry by a tap or similar operation to acquire the image of the first object.
In some embodiments, the terminal may acquire the image of the first object as follows: when the image acquisition function entry is an image capture function entry, in response to a trigger operation on it, present an image capture interface for capturing the image of the first object, with a shooting button presented in the interface; in response to a trigger operation on the shooting button, capture and present the image of the first object.
Further, in some embodiments, at least one of the following may be presented in the image capture interface: shooting prompt information, and an expression frame for displaying an expression preview effect; the shooting prompt information prompts the first object to adjust at least one of the following: shooting posture, shooting angle, and shooting position.
Here, with the image acquisition function entry being an image capture function entry, the terminal, in response to a trigger operation on it, presents an image capture interface containing a shooting button. Shooting prompt information may be presented in the interface to prompt the user to adjust the shooting posture, angle, position, and so on; the interface also presents an expression frame for displaying the expression preview effect, for example a round-face, cat-ear selfie pendant. In response to the user's trigger operation on the shooting button, the terminal captures and presents the image of the first object.
Exemplarily, referring to fig. 6A, fig. 6A is a first flowchart of acquiring an image of a first object according to an embodiment of the present application. Here, the image capture interface presents a face function box prompting the user to adjust the shooting position, together with text prompts such as "please face the lens straight on" and "find a spot with better light". During shooting, once the user's face is captured, the interface also presents the expression frame used to display the expression preview effect, for example a round-face, cat-ear expression pendant. In response to the user's trigger operation on the shooting button, the terminal identifies the face region and the facial-feature regions in real time while collecting the user's image and attaches the expression frame to the corresponding face region, thereby previewing the expression to be generated. A text prompt such as "make a funny expression" may also be presented here.
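The live preview just described amounts to a per-frame detect-and-overlay loop. A minimal Python/OpenCV sketch follows, with stated assumptions: a webcam at index 0, the space bar standing in for the shooting button, and a plain circle standing in for the cat-ear pendant art.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)  # assumed: default webcam

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            # Preview effect: anchor the "expression frame" on the face
            # found this frame (a circle stands in for the pendant art).
            cv2.circle(frame, (x + w // 2, y + h // 2), w // 2,
                       (255, 255, 255), 2)
        cv2.imshow("capture preview", frame)
        if cv2.waitKey(1) & 0xFF == ord(" "):  # space bar as the shutter
            cv2.imwrite("first_object.jpg", frame)
            break

    cam.release()
    cv2.destroyAllWindows()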
In some embodiments, the terminal may also acquire the image of the first object as follows: when the image acquisition function entry is an image capture function entry, in response to a trigger operation on it, present an image capture interface containing a facial expression frame, with a shooting button presented in the interface; in response to a trigger operation on the shooting button, capture an image corresponding to the face of the first object based on the facial expression frame.
That is, when the image acquisition function entry is an image capture function entry, the terminal, in response to a trigger operation on it, presents an image capture interface containing a shooting button, in which a facial expression frame may also be presented. When the terminal recognizes the user's face, it collects, during acquisition of the first object's image, the image of the first object's face carrying the facial expression frame, based on that frame.
In some embodiments, the terminal may also acquire the image of the first object as follows: when the image acquisition function entry is an image selection entry, in response to a trigger operation on it, present an image selection interface for selecting the image of the first object from an album, with at least two images presented in the interface; in response to an image selection operation triggered on the interface, present the selected image as the image of the first object.
In practical applications, the image acquisition function entry may also be an image selection entry through which the user enters an image selection interface for choosing the first object's image from an album, which may be any collection of images stored on the terminal, such as the photo gallery or WeChat pictures. The terminal, in response to the user's trigger operation on the entry, presents the image selection interface; upon receiving an image selection operation triggered on that interface, it presents the selected image as the image of the first object.
Exemplarily, referring to fig. 6B, fig. 6B is a second flowchart of acquiring an image of a first object according to an embodiment of the present application. Here, the image acquisition function entry presented as "+" is an image selection entry, which the user taps to select an image of the first object. The terminal, in response to the trigger operation on the entry, presents at least two images under a target path (such as WeChat or the photo gallery) and, in response to the user's image selection operation on the interface, takes the selected image 1 as the image of the first object.
In practical applications, when the user acquires the first object's image through the image acquisition function entry, the terminal may, in response to the user's tap on the entry, directly present an interface containing both an image selection entry and an image capture entry, so that the user can choose the preferred acquisition mode. Referring to fig. 6C, fig. 6C is a third flowchart of acquiring an image of a first object according to an embodiment of the present application: in response to the user's tap on the image acquisition function entry, the terminal presents an interface containing the image selection entry "select from album" and the image capture entry "take a photo", and the user triggers an image acquisition instruction by tapping the entry for the desired mode. If the terminal receives an instruction triggered via "take a photo", it presents an image capture interface for capturing the image of the first object; if it receives an instruction triggered via "select from album", it presents an image selection interface for choosing the image of the first object from the album. The image of the first object is then obtained through the corresponding acquisition mode.
Step 403: in response to an expression generation instruction for the first expression template set, present the first expression corresponding to each expression template in the first expression template set.
Here, each first expression is obtained by fusing the image of the first object with an expression template in the first expression template set. In some embodiments, the user fuses and generates expressions by selecting the desired expression template set. Accordingly, after presenting at least one expression template set, the terminal can receive a template set selection operation, take the selected set as the first expression template set, and set the presented state of that set to selected; the selected state indicates that, once the image of the first object is acquired and presented, it is to be fused with each expression template in the first expression template set.
In practical applications, the terminal receives the template set selection operation performed by the user and takes the selected set as the first expression template set, so as to generate, from the image of the first object, the first expression corresponding to the first object for each expression template in the set.
Exemplarily, referring to fig. 7A, fig. 7A is a schematic diagram of selecting an expression template set according to an embodiment of the present application. Here, the terminal presents expression template sets including "hot", "fun", "angry", and so on; the user may pick a set as needed, such as the "hot" set. The terminal, in response to the user's selection of the "hot" set, presents each expression template in it and sets the state of the "hot" set to selected. While the "hot" set is selected, the terminal can fuse the acquired image of the first object with each expression template in the set to obtain the first expressions.
In the embodiments of the present application, when generating expressions fused with the facial features of the first object, the face region of every expression template is uniformly replaced with the face region of the first object; that is, if the first object's face is captured with one eye closed, the face in every fused expression also has one eye closed. Specifically, referring to fig. 7B, a schematic diagram of expressions corresponding to a first object in an embodiment of the present application: the acquired image of the first object shows the face with one eye closed, so the face region of every expression generated from that image likewise has one eye closed.
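A minimal sketch of this uniform replacement, assuming Pillow is available and that each template carries an (x, y, w, h) face box as in the earlier data-shape sketch: one captured face crop is pasted, unchanged, into every template in the set, so whatever state the face is in appears in every result.

    from PIL import Image

    # Batch fusion sketch; file names and face boxes are assumptions.
    def batch_generate(face_path, templates):
        face = Image.open(face_path).convert("RGBA")
        results = []
        for template_path, (x, y, w, h) in templates:
            template = Image.open(template_path).convert("RGBA")
            # The same crop goes into every template: if one eye is
            # closed in the capture, it is closed in every expression.
            template.paste(face.resize((w, h)), (x, y))
            results.append(template)
        return results

    templates = [("melt.png", (20, 10, 64, 64)), ("sweat.png", (16, 12, 60, 60))]
    expressions = batch_generate("first_object_face.png", templates)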
In some embodiments, the terminal may trigger the expression generation instruction and present the first expressions as follows: receive a template set selection operation and take the selected expression template set as the first expression template set; in response to the expression generation instruction triggered by that selection, present a first expression corresponding to each expression template in the first expression template set.
Here, after entering the image capture interface, the user may first acquire the image of the first object through the image acquisition function entry and then select the desired expression template set. Upon receiving the user's template set selection, the terminal determines the selected set as the first expression template set and, in response to the expression generation instruction triggered by that selection, directly generates and presents the expressions corresponding to the first object for each expression template in the set.
Exemplarily, referring to fig. 8A, fig. 8A is a first schematic diagram of a flow triggering an expression generation instruction according to an embodiment of the present application. Here, the user first acquires the image of the first object through the image acquisition function entry; upon receiving a template set selection, the terminal takes the selected "hot" expression template set as the first expression template set and, in response to the expression generation instruction triggered by that selection, generates and presents the expressions corresponding to the first object for each expression template in the "hot" set.
In some embodiments, the terminal may also trigger the expression generation instruction and present the first expressions as follows: receive a focusing operation on the image of the first object; in response to the expression generation instruction triggered by the focusing operation, present a first expression corresponding to each expression template in the first expression template set.
Here, after entering the image capture interface, the user may first select the desired first expression template set; once the user has acquired the image of the first object through the image acquisition function entry, the terminal receives a focusing operation (such as a selection) on that image and, in response to the expression generation instruction triggered by the focusing operation, directly generates and presents the expression corresponding to each expression template in the first expression template set.
Exemplarily, referring to fig. 8B, fig. 8B is a second schematic diagram of a flow triggering an expression generation instruction according to an embodiment of the present application. Here, the user first selects the "hot" expression template set as the first expression template set; after the user acquires the image of the first object through the image acquisition function entry, the image is presented as a thumbnail. Upon receiving the expression generation instruction triggered by the user's focusing operation on the image of the first object (i.e. its thumbnail), the terminal generates and presents the expressions corresponding to the first object for each expression template in the "hot" set.
In some embodiments, the terminal may further trigger the expression generation instruction and present the first expressions as follows: receiving a confirmation operation on the image of the first object; and, in response to the expression generation instruction triggered by the confirmation operation, presenting the first expression corresponding to each expression template in the first expression template set.
Here, the user first selects the desired first expression template set and then acquires an image of the first object through the image acquisition interface, for example by capturing an image or selecting one from an image collection. When the terminal receives the expression generation instruction triggered by the confirmation operation, it directly generates and presents, for the first object, the expression corresponding to each expression template in the first expression template set.
For example, referring to fig. 8C, a schematic diagram of the triggering flow of an expression generation instruction provided in an embodiment of the present application: the user first selects the 'hot' expression template set as the first expression template set; after finishing shooting the image of the first object through the image acquisition interface, or finishing selecting it on the image selection interface, the user clicks the 'confirm' button to confirm the image; the terminal receives the confirmation operation and, in response to the expression generation instruction it triggers, directly and automatically generates and presents, for the first object, the expression corresponding to each expression template in the 'hot' expression template set, while also presenting the image of the first object and marking it as selected.
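The three trigger paths of figs. 8A-8C can be summarized in a short client-side sketch; the Python below is illustrative only, and the names (ExpressionClient, fuse_and_present) are assumptions rather than identifiers from this application.

```python
# Illustrative only: routing the three trigger paths of figs. 8A-8C into
# one generation call. Nothing here is an identifier from the patent.

def fuse_and_present(image, template):
    """Hypothetical placeholder for the fusion + display step."""
    print(f"fusing {image} into {template}")

class ExpressionClient:
    def __init__(self):
        self.template_set = None   # e.g. the 'hot' expression template set
        self.focused_image = None  # image of the currently focused object

    def on_template_set_selected(self, template_set):
        # Fig. 8A: selecting a template set triggers generation when an
        # object image has already been acquired and is focused.
        self.template_set = template_set
        if self.focused_image is not None:
            self.generate()

    def on_thumbnail_focused(self, image):
        # Fig. 8B: focusing an acquired image triggers generation when a
        # template set was selected beforehand.
        self.focused_image = image
        if self.template_set is not None:
            self.generate()

    def on_image_confirmed(self, image):
        # Fig. 8C: the 'confirm' button focuses the image and triggers
        # generation in a single step.
        self.focused_image = image
        self.generate()

    def generate(self):
        # One instruction yields the expressions for every template at once.
        for template in self.template_set:
            fuse_and_present(self.focused_image, template)
```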
In some embodiments, the terminal may generate the first expression based on the image of the first object by: identifying an image of a first object to obtain a face area of the first object; respectively fusing the facial area of the first object with the facial area of each expression template in the first expression template set to obtain a first expression fused with the facial features of the first object and corresponding to each expression template; and presenting the obtained first expressions.
In some embodiments, the terminal may fuse the face region of the first object with the face regions of the respective expression templates by: identifying the facial-feature region (eyes, brows, nose, and mouth) of the first object; performing edge smoothing on the face region and contrast enhancement on the facial-feature region to obtain a processed face region; and processing the face region of each expression template in the first expression template set according to the processed face region, to obtain, for each expression template, a first expression fused with the facial features of the first object.
In practical applications, face recognition may first be performed on the acquired image of the first object to obtain the face region of the first object, and feature recognition may then be performed on that face region to locate the facial-feature region of the first object. Contrast enhancement is then applied to the feature region: based on the average brightness of the face and the recognized feature region, the weight of brighter regions and of unimportant non-feature regions is reduced so that they become more transparent, while darker regions (that is, heavier shadows) and the feature region are deepened in color so that they become less transparent. Edge smoothing is then applied to the face region: a transition effect is added at the edges of the feature region, and a feathering effect is added around the face region, so that the face blends naturally when fused into an expression template.
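As a rough illustration of this pre-processing, the sketch below uses OpenCV and NumPy; the transparency mapping, darkening factor, and blur size are illustrative assumptions, not parameters taken from this application.

```python
# A sketch of the face pre-processing described above, assuming OpenCV
# and NumPy; all numeric parameters are illustrative assumptions.
import cv2
import numpy as np

def preprocess_face(face_bgr, feature_mask):
    """face_bgr: cropped face region (H, W, 3).
    feature_mask: uint8 mask, 255 where eyes/brows/nose/mouth were found."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean_luma = gray.mean()

    # Brighter non-feature pixels get a lower alpha (more transparent);
    # darker, shadowed pixels and the feature region stay fully opaque.
    alpha = np.clip(255.0 - np.maximum(gray - mean_luma, 0.0) * 2.0, 0, 255)
    alpha[feature_mask > 0] = 255.0

    # Deepen the color of shadowed pixels and of the feature region.
    deepen = (gray <= mean_luma) | (feature_mask > 0)
    face = face_bgr.astype(np.float32)
    face[deepen] *= 0.9

    # Feather the alpha so the face edge blends smoothly into a template.
    alpha = cv2.GaussianBlur(alpha.astype(np.uint8), (21, 21), 0)

    return np.dstack([face.astype(np.uint8), alpha])  # BGRA face layer
```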
The face region of each expression template in the first expression template set is then processed according to the processed face region, yielding, for each expression template, a first expression fused with the facial features of the first object. In practice, this processing may either add the processed face region onto the face area of each expression template, or replace the face area of each expression template with the processed face region.
For example, referring to fig. 9, a schematic flowchart of fusing and generating a first expression provided in an embodiment of the present application: first, the face region and facial-feature region of the first object are identified, edge smoothing is applied to the face region, and contrast enhancement is applied to the feature region, yielding the processed face region. When fusing the face region with an expression template, the template's original face area may first be removed to obtain a template without a face area; the processed template is then superimposed with a transparent layer carrying the animation keyframes to produce an expression file in the PAG format. Finally, the processed face region of the first object is imported into the face area of the PAG-format expression, yielding an expression fused with the facial features of the first object.
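Placing such a processed face layer onto a template frame amounts to an alpha blend, as in the sketch below; assembling the keyframe layers into a PAG file itself would be done with the PAG SDK and is not shown here.

```python
# A sketch of alpha-blending the processed BGRA face layer onto one
# BGRA template frame at position (x, y). Illustrative only.
import numpy as np

def composite(template, face, x, y):
    h, w = face.shape[:2]
    roi = template[y:y + h, x:x + w].astype(np.float32)
    f = face.astype(np.float32)
    a = f[..., 3:4] / 255.0                        # face alpha in [0, 1]
    roi[..., :3] = f[..., :3] * a + roi[..., :3] * (1.0 - a)
    roi[..., 3] = np.maximum(roi[..., 3], f[..., 3])
    template[y:y + h, x:x + w] = roi.astype(np.uint8)
    return template
```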
In some embodiments, the terminal may present the image of the first object by: presenting a first thumbnail of an image of a first object; in this way, after the first expression corresponding to the first object is generated, when the first thumbnail is focused, the first expression corresponding to each expression template in the first expression template set is presented.
In practical applications, after acquiring the image of the first object, the terminal extracts the face region of the first object from the image, generates a thumbnail based on the extracted face region, and presents it. For example, referring to fig. 10, a schematic diagram of presenting the image of the first object as a thumbnail provided in an embodiment of the present application: to the right of the image acquisition function entry '+', the image of the first object (that is, its extracted face region) is presented as a thumbnail; when the user selects that thumbnail, the terminal, in response to the selection operation, presents the focused thumbnail in frame-selected form and, at the same time, presents, for the first object, the expression corresponding to each expression template in the first expression template set.
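For illustration, the thumbnail could be produced roughly as in the following sketch, assuming OpenCV's bundled Haar cascade for face detection; the 64x64 size is an arbitrary choice.

```python
# A sketch of extracting the face region and producing its thumbnail,
# assuming OpenCV; the thumbnail size is an illustrative choice.
import cv2

def face_thumbnail(image_bgr, size=(64, 64)):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                     # no face detected
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face wins
    return cv2.resize(image_bgr[y:y + h, x:x + w], size)
```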
In some embodiments, in the same expression template set, the terminal may implement switching of expressions corresponding to images of different objects by: presenting a second thumbnail of the image of the second object, the second thumbnail being obtained based on the image acquisition function entry; switching the first thumbnail in focus to the second thumbnail in focus in response to a focus instruction for the second thumbnail; responding to an expression generation instruction aiming at the first expression template set, and presenting a second expression corresponding to each expression template in the first expression template set; and the second expression is obtained by fusing the image of the second object and the expression template in the first expression template set.
Here, after acquiring the image of the first object through the image acquisition function entry, the user may continue to acquire an image of a second object. After acquiring the image of the second object, the terminal may present it as a thumbnail. When a focusing instruction for the second thumbnail, triggered by the user's click operation, is received, the terminal switches focus from the first thumbnail to the second thumbnail; then, in response to the expression generation instruction triggered by the focusing operation on the second object's thumbnail, it fuses the image of the second object with each expression template in the first expression template set, obtaining and presenting, for the second object, the expression corresponding to each expression template in the set. In this way, switching between the thumbnails of different objects switches the presented expressions between those objects.
For example, referring to fig. 11, a schematic diagram of switching between presenting the first expression and the second expression provided in an embodiment of the present application: the first expression template set is the selected 'hot' expression template set. When the terminal acquires the image of the second object through the image acquisition function entry, for example the image capture entry, it presents a second thumbnail for the second object. When a focusing instruction for the second thumbnail, triggered by the user's click operation, is received, the terminal switches focus from the first thumbnail to the second thumbnail, that is, the frame selection moves from the first thumbnail to the second, as shown in fig. 11. The terminal can then receive the expression generation instruction triggered by the focusing operation on the image of the second object, fuse that image with each expression template in the 'hot' expression template set, and obtain and present, for the second object, the expression corresponding to each expression template in the set.
Specifically, in fig. 11 (1) the thumbnail of the first object's image is focused, and the presented expressions correspond to the first object, with the first object's facial features fused in (see the dashed box); when the thumbnail of the second object's image is focused, as in fig. 11 (2), the presented expressions switch from those of the first object to those of the second object, now with the second object's facial features fused in (see the dashed box).
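One plausible way to make this switching instantaneous is to cache the generated expressions per object and swap the presented set when focus changes; the sketch below is an assumption about implementation, not something this application specifies.

```python
# Illustrative only: caching generated expression sets per object so that
# focusing a thumbnail swaps the presented expressions without re-fusing.

def present(expressions):
    """Hypothetical display hook."""
    print("presenting", expressions)

class ExpressionSwitcher:
    def __init__(self, fuse):
        self.fuse = fuse   # callable: (object_image, template) -> expression
        self.cache = {}    # object id -> list of fused expressions

    def on_focus(self, object_id, object_image, template_set):
        if object_id not in self.cache:
            self.cache[object_id] = [
                self.fuse(object_image, t) for t in template_set
            ]
        present(self.cache[object_id])  # swap the displayed expression set
```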
In some embodiments, the terminal may generate an expression corresponding to each expression template in the second expression template set by: responding to the selection operation aiming at the second expression template set, and presenting each expression template contained in the second expression template set; responding to an expression generation instruction aiming at the second expression template set, and presenting a third expression corresponding to each expression template in the second expression template set; and the third expression is obtained based on the fusion of the image of the first object and the expression template in the second expression template set.
Here, after the terminal generates and presents the first expression corresponding to each expression template in the first expression template set, the terminal may also generate the expression corresponding to each expression template in the other expression template sets. When receiving a selection operation of a user for a second expression template set, the terminal responds to the selection operation and presents each expression template contained in the second expression template set; when an expression generation instruction for the second expression template set is received, for example, the expression generation instruction triggered by focusing operation on the image of the first object is received, the image of the first object and each expression template in the second expression template set are fused to obtain and present the expression corresponding to each expression template in the second expression template set and the first object.
For example, referring to fig. 12, a schematic flowchart of generating the third expressions corresponding to the second expression template set provided in an embodiment of the present application: in response to the user's selection operation on the 'do' expression template set (the second expression template set), the terminal presents each expression template contained in it, each in its original form; when a focusing operation by the user on the image of the first object is received, the terminal, in response to the expression generation instruction for the 'do' expression template set triggered by that focusing operation, fuses the image of the first object with each expression template in the 'do' set, obtaining and presenting the corresponding expressions, each fused with the facial features of the first object.
By applying this embodiment of the present application, in response to an expression editing instruction triggered by the user, the image of the first object is acquired through the image acquisition function entry and fused with the expression templates in an expression template set containing multiple templates; when an expression generation instruction is received, the expressions for all templates in the corresponding set are generated at once. Expressions fused with the user's features can thus be generated in batches, making sticker battles during chats more fun.
Next, the expression generation method provided in the embodiments of the present application is further described, taking as an example a terminal running an instant messaging client to generate expressions fused with the user's image. Referring to fig. 13, a schematic flowchart of the expression generation method provided in an embodiment of the present application, the method includes:
step 1301: the terminal runs the instant messaging client and presents a session interface containing a session function bar.
An instant messaging client is installed on the terminal; by running it, the terminal presents a session interface through which the user inputs and sends session messages, the interface containing a session function bar in which various function items are presented, such as a voice call function item and a video call function item.
Step 1302: receiving an expression editing instruction based on the session function bar, and presenting an image acquisition function entry.
Here, the expression editing instruction instructs the client to edit and generate a selfie expression fused with the user's image. In practical applications, the terminal may trigger the expression editing instruction in the following ways: an expression editing function item is presented in the session function bar, and the terminal receives the expression editing instruction in response to a trigger operation on that item;
or, an expression adding function item is presented in the session function bar, the terminal presents an expression selection area containing at least one expression in response to a trigger operation for the expression adding function item, and presents an expression editing entry in the expression selection area, and an expression editing instruction can be received in response to the trigger operation for the expression editing entry.
After receiving the expression editing instruction, the terminal, in response to it, presents an image acquisition function entry for acquiring the user's image.
Step 1303: presenting an image acquisition interface in response to the trigger operation on the image acquisition function entry.
Step 1304: and acquiring an image of the first object and synchronizing the image to the server in response to an image acquisition instruction triggered based on the image acquisition interface.
Step 1305: and receiving a template set selection operation, and determining the expression template set selected by the template set selection operation as a first expression template set.
Step 1306: in response to the expression generation instruction triggered by the template set selection operation, sending the expression generation instruction for the first expression template set to the server.
Step 1307: the server receives the expression generation instruction for the first expression template set, and identifies the face region of the first object's image and the facial-feature region within that face region.
Step 1308: performing contrast enhancement processing on the facial-feature region, and performing edge smoothing processing on the face region, to obtain a processed face region.
Step 1309: adding the processed face region to the face area of each expression template in the first expression template set to obtain, for each template, an expression fused with the facial features of the first object, and returning the result to the terminal.
Step 1310: the terminal receives and presents the expressions corresponding to the expression templates in the first expression template set, each fused with the facial features of the first object.
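For illustration, steps 1307 to 1309 could be wired together behind a single server endpoint, as in the hypothetical Flask sketch below; the route, field names, and helper functions are stand-ins for the processing described above, not a real API.

```python
# A hypothetical endpoint wiring steps 1307-1309 together, assuming Flask.
# The helpers below are stand-ins for the processing sketched earlier in
# this description (face/feature detection, pre-processing, compositing).
from flask import Flask, request, jsonify

app = Flask(__name__)

def detect_face_and_features(image_bytes):  # step 1307 (stand-in)
    raise NotImplementedError("use a face/feature detector here")

def preprocess_face(face, features):        # step 1308 (stand-in)
    raise NotImplementedError("contrast enhancement + edge feathering")

def fuse_into_templates(face_layer, template_set_id):  # step 1309 (stand-in)
    raise NotImplementedError("composite the face into each template")

@app.route("/expressions/generate", methods=["POST"])
def generate_expressions():
    image_bytes = request.files["object_image"].read()
    template_set_id = request.form["template_set_id"]
    face, features = detect_face_and_features(image_bytes)
    face_layer = preprocess_face(face, features)
    expressions = fuse_into_templates(face_layer, template_set_id)
    return jsonify({"expressions": expressions})
```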
Next, the expression generation method provided in the embodiments of the present application is described again, still taking as an example a terminal running an instant messaging client to generate expressions fused with the user's image. Referring to fig. 14, a schematic flowchart of the expression generation method provided in an embodiment of the present application, the method includes:
step 1401: the terminal runs the instant messaging client and presents a session interface containing a session function bar.
An instant messaging client is installed on the terminal; by running it, the terminal presents a session interface through which the user inputs and sends session messages, the interface containing a session function bar in which various function items are presented, such as a voice call function item and a video call function item.
Step 1402: receiving an expression editing instruction based on the session function bar, presenting an image acquisition function entry, and presenting at least one expression template set.
The expression template set comprises at least two expression templates.
Here, the expression editing instruction instructs the client to edit and generate a selfie expression fused with the user's image. In practical applications, the terminal may trigger the expression editing instruction in the following ways: an expression editing function item is presented in the session function bar, and the terminal receives the expression editing instruction in response to a trigger operation on that item;
or, an expression adding function item is presented in the session function bar, the terminal presents an expression selection area containing at least one expression in response to a trigger operation for the expression adding function item, and presents an expression editing entry in the expression selection area, and an expression editing instruction can be received in response to the trigger operation for the expression editing entry.
After receiving the expression editing instruction, the terminal, in response to it, presents an image acquisition function entry for acquiring the user's image.
Step 1403: and receiving a template set selection operation, and determining the expression template set selected by the template set selection operation as a first expression template set.
Step 1404: presenting an image acquisition interface in response to the trigger operation on the image acquisition function entry.
Step 1405: and acquiring an image of the first object and synchronizing the image to the server in response to an image acquisition instruction triggered based on the image acquisition interface.
Step 1406: the terminal presents the image of the first object in a thumbnail mode.
Step 1407: receiving a focusing operation on the image of the first object and, in response to the expression generation instruction triggered by that operation, sending an expression generation instruction for the first expression template set to the server.
Step 1408: the server receives the expression generation instruction for the first expression template set, and identifies the face region of the first object's image and the facial-feature region within that face region.
Step 1409: performing contrast enhancement processing on the facial-feature region, and performing edge smoothing processing on the face region, to obtain a processed face region.
Step 1410: adding the processed face region to the face area of each expression template in the first expression template set to obtain, for each template, an expression fused with the facial features of the first object, and returning the result to the terminal.
Step 1411: the terminal receives and presents the expressions corresponding to the expression templates in the first expression template set, each fused with the facial features of the first object.
Next, the expression generation method provided in the embodiments of the present application is described once more, still taking as an example a terminal running an instant messaging client to generate expressions fused with the user's image. Referring to fig. 15, a schematic flowchart of the expression generation method provided in an embodiment of the present application, the method includes:
step 1501: the terminal runs the instant messaging client and presents a session interface containing a session function bar.
An instant messaging client is installed on the terminal; by running it, the terminal presents a session interface through which the user inputs and sends session messages, the interface containing a session function bar in which various function items are presented, such as a voice call function item and a video call function item.
Step 1502: receiving an expression editing instruction based on the session function bar, presenting an image acquisition function entry, and presenting at least one expression template set.
The expression template set comprises at least two expression templates.
Here, the expression editing instruction instructs the client to edit and generate a selfie expression fused with the user's image. In practical applications, referring specifically to figs. 5A-5B, the terminal may receive an expression editing instruction triggered by the user in the following ways: an expression editing function item is presented in the session function bar, and the terminal receives the expression editing instruction in response to a trigger operation on that item;
or, an expression adding function item is presented in the session function bar, the terminal presents an expression selection area containing at least one expression in response to a trigger operation for the expression adding function item, and presents an expression editing entry in the expression selection area, and an expression editing instruction can be received in response to the trigger operation for the expression editing entry.
After receiving the expression editing instruction, the terminal, in response to it, presents an image acquisition function entry for acquiring the user's image.
Step 1503: presenting an image acquisition interface containing a shooting key in response to the trigger operation on the image acquisition function entry.
Here, the image acquisition function entry is an image capture function entry, and the terminal presents an image acquisition interface containing a shooting key in response to the trigger operation on it. The image acquisition interface may also present shooting prompts for adjusting the shooting posture, shooting angle, shooting position, and so on; referring to fig. 6A, the interface presents a face frame prompting the user to adjust the shooting position, together with text prompts such as 'please face the lens' and 'find a position with better light'.
During shooting, when the user's face is captured, an expression frame for previewing the expression effect is also presented in the image acquisition interface, for example a round-face cat-ear expression pendant. While capturing the user's image, the terminal recognizes the face region and facial-feature region in real time and attaches the expression frame to the corresponding face region of the user, thereby previewing the expression to be generated. A text prompt encouraging the user to make a funny expression may also be presented.
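A minimal approximation of this live preview, assuming OpenCV, is sketched below; 'cat_ears.png' stands in for a hypothetical BGRA overlay asset.

```python
# A sketch of the live preview: detect the face each frame and overlay a
# decorative pendant (e.g. cat ears) above it. Assumes OpenCV; the file
# "cat_ears.png" is a hypothetical BGRA overlay, not an asset from the app.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
pendant = cv2.imread("cat_ears.png", cv2.IMREAD_UNCHANGED)  # BGRA image

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        ears = cv2.resize(pendant, (w, w // 2))  # fit overlay to face width
        top = y - ears.shape[0]
        if top < 0:                              # overlay would leave frame
            continue
        roi = frame[top:y, x:x + w]
        a = ears[..., 3:4] / 255.0               # overlay alpha in [0, 1]
        roi[:] = (ears[..., :3] * a + roi * (1 - a)).astype("uint8")
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) == 27:                     # Esc ends the preview
        break
cap.release()
cv2.destroyAllWindows()
```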
In practical applications, the image acquisition function entry may instead be an image selection entry, through which the user enters an image selection interface to choose the image of the first object from an atlas; the atlas may be any collection of images stored on the terminal, such as a photo album or QQ pictures. The terminal presents the image selection interface in response to the user's trigger operation on the entry; when an image selection operation triggered on that interface is received, the selected image is used as the image of the first object. In this embodiment, the selected image is an image having facial features.
Step 1504: and responding to the triggering operation of the shooting key, acquiring and presenting the image of the first object, and synchronizing the image of the first object to the server.
Here, the captured image of the first object may be a video image having a facial feature.
Step 1505: receiving a confirmation operation on the acquired image of the first object and, in response to the expression generation instruction triggered by the confirmation operation, sending the expression generation instruction for the first expression template set to the server.
Here, when the confirmation operation on the acquired image of the first object is received, the acquired image is focused by default; at this point, in response to the expression generation instruction for the selected first expression template set triggered by the confirmation operation, the terminal sends the instruction to the background server, notifying it to generate, based on the image of the first object, the expression corresponding to each expression template in the first expression template set.
See, for example, fig. 8C. Here, the user first selects the 'hot' expression template set as the first expression template set; after finishing shooting the image of the first object through the image acquisition interface, or finishing selecting it on the image selection interface, the user clicks the 'confirm' button to confirm the image; the terminal receives the confirmation operation on the image of the first object and, in response to the expression generation instruction it triggers, sends the expression generation instruction for the first expression template set to the server, notifying the server to generate, based on the acquired image, the expression corresponding to each expression template in the first expression template set.
Step 1506: the server receives the expression generation instruction for the first expression template set, and identifies the face region of the first object's image and the facial-feature region within that face region.
Step 1507: performing contrast enhancement processing on the facial-feature region, and performing edge smoothing processing on the face region, to obtain a processed face region.
In practical applications, face recognition is first performed on the acquired image of the first object to obtain its face region, and feature recognition is then performed on that face region to determine the facial-feature region of the first object. Contrast enhancement is then applied to the feature region: based on the average brightness of the face and the recognized feature region, the weight of brighter regions and of unimportant non-feature regions is reduced so that they become more transparent, while darker regions (heavier shadows) and the feature region are deepened in color so that they become less transparent. Edge smoothing is then applied to the face region: a transition effect is added at the edges of the feature region, and a feathering effect is added around the face region, so that the face blends naturally when fused into an expression template.
Step 1508: adding the processed face region to the face area of each expression template in the first expression template set to obtain, for each template, an expression fused with the facial features of the first object, and returning the result to the terminal.
After contrast enhancement and edge smoothing are performed, the processed face region is obtained and added to the face area of each expression template in the first expression template set, yielding, for each template, a first expression fused with the facial features of the first object. Specifically, the processed face region of the first object may be added into each expression template, such as a PAG template, where the PAG template defines the size, position, and angle at which the face region of the first object is merged into each template.
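Applying such a template-defined layout could look like the sketch below, where the layout dictionary (cx, cy, scale, angle_deg) is an assumed stand-in for whatever parameters the PAG template actually stores.

```python
# A sketch of placing the processed face according to per-template layout
# parameters; the layout dict is an illustrative assumption.
import cv2

def place_face(face_bgra, layout, canvas_wh):
    """layout: {"cx": ..., "cy": ..., "scale": ..., "angle_deg": ...}
    canvas_wh: (width, height) of the template frame."""
    h, w = face_bgra.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2),
                                layout["angle_deg"], layout["scale"])
    # Shift so the face centre lands at the template's (cx, cy).
    m[0, 2] += layout["cx"] - w / 2
    m[1, 2] += layout["cy"] - h / 2
    return cv2.warpAffine(face_bgra, m, canvas_wh)  # BGRA layer to blend in
```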
The expression template can be a dynamic expression template or a static expression template, so that the fused expressions can comprise dynamic expressions or static expressions.
Step 1509: the terminal receives and presents the expressions corresponding to the expression templates in the first expression template set, each fused with the facial features of the first object.
Here, the user can send an expression message by clicking the corresponding expression.
The resulting expression may be a dynamic expression fused with the user's facial features, or a static one, for example the user's facial features fused into a hand-drawn, exaggerated avatar expression.
In practical applications, the fused expression may be black-and-white or colored and may carry amusing captions, further improving the sticker-battle experience during chats.
Step 1510: the terminal acquires and presents the image of the second object through the image acquisition function entry.
Here, the image of the second object may also be presented by way of a thumbnail.
Step 1511: in response to a focusing instruction for the second object's thumbnail, switching focus from the first object's thumbnail to the second object's thumbnail, and sending to the server an expression generation instruction for the second object with respect to the first expression template set.
Step 1512: the server receives the expression generation instruction, fuses the image of the second object with each expression template in the first expression template set to obtain, for the second object, the expression corresponding to each template, and returns them to the terminal.
Step 1513: and switching the presented expression corresponding to the first object to present the expression corresponding to the second object.
Here, presentation of the expressions corresponding to different objects is achieved by switching focus between their thumbnails. Referring to fig. 11: in fig. 11 (1) the thumbnail of the first object's image is focused, and the presented expressions correspond to the first object, with the first object's facial features fused in (see the dashed box); when the thumbnail of the second object's image is focused, as in fig. 11 (2), the presented expressions switch to those of the second object, now with the second object's facial features fused in (see the dashed box).
In this way, sticker-battle expressions carrying the user's facial features are generated in batches for the user to use in conversation.
Continuing with the description of the expression generation apparatus 355 provided in this embodiment of the present application, in some embodiments, the expression generation apparatus may be implemented by a software module. Referring to fig. 16, fig. 16 is a schematic structural diagram of an expression generation apparatus 355 provided in the embodiment of the present application, where the expression generation apparatus 355 provided in the embodiment of the present application includes:
a first presenting module 3551, configured to present, in response to an expression editing instruction, an image acquisition function entry and at least one expression template set, where the expression template set includes at least two expression templates;
an obtaining module 3552, configured to acquire and present an image of the first object in response to a trigger operation for the image acquisition function entry;
a second presenting module 3553, configured to present a first expression corresponding to each expression template in a first expression template set in response to an expression generation instruction for the first expression template set;
and the first expression is obtained by fusing the image of the first object and the expression templates in the first expression template set.
In some embodiments, the apparatus further comprises:
the first receiving module is used for presenting a conversation function bar in a conversation interface and presenting an expression editing function item in the conversation function bar;
and responding to the triggering operation aiming at the expression editing function item, and receiving the expression editing instruction.
In some embodiments, the apparatus further comprises:
the second receiving module is used for presenting a conversation function bar in a conversation interface and presenting an expression adding function item in the conversation function bar;
responding to the triggering operation of the expression adding function item, presenting an expression selection area containing at least one expression, and presenting an expression editing entry in the expression selection area;
and responding to the triggering operation aiming at the expression editing entry, and receiving the expression editing instruction.
In some embodiments, the obtaining module 3552 is further configured to, when the image acquisition function entry is an image capture function entry, present an image acquisition interface for capturing an image of the first object in response to a trigger operation for the image acquisition function entry, and
presenting a shooting key for image acquisition in the image acquisition interface;
and responding to the triggering operation of the shooting key, and acquiring and presenting the image of the first object.
In some embodiments, the obtaining module 3552 is further configured to present an image capture interface and present at least one of the following in the image capture interface: shooting prompt information and an expression frame for displaying an expression preview effect;
wherein the shooting prompt information is used for prompting the first object to adjust at least one of the following information: shooting posture, shooting angle, and shooting position.
In some embodiments, the obtaining module 3552 is further configured to, when the image acquisition function entry is an image capture function entry, present the image acquisition interface including a facial expression frame in response to a trigger operation for the image acquisition function entry, and
presenting a shooting key for image acquisition in the image acquisition interface;
and acquiring an image corresponding to the face of the first object based on the facial expression frame in response to the triggering operation of the shooting key.
In some embodiments, the obtaining module 3552 is further configured to, when the image acquisition function entry is an image selection entry, present an image selection interface for selecting an image of the first object from the atlas in response to a trigger operation for the image acquisition function entry,
at least two images are presented in the image selection interface;
and in response to an image selection operation triggered based on the image selection interface, presenting an image selected by the image selection operation as an image of the first object.
In some embodiments, the second presenting module 3553 is further configured to identify the image of the first object, obtaining a face region of the first object;
fusing the facial area of the first object with the facial areas of the expression templates in the first expression template set respectively to obtain a first expression fused with the facial features of the first object and corresponding to each expression template;
presenting each of the obtained first expressions.
In some embodiments, the second presenting module 3553 is further configured to identify the facial-feature region in the face region of the first object;
performing edge smoothing on the face region, and
performing contrast enhancement processing on the facial-feature region in the face region to obtain a processed face region;
and processing the facial area of each expression template in the first expression template set according to the processed facial area to obtain a first expression fused with the facial features of the first object and corresponding to each expression template.
In some embodiments, the second presenting module 3553 is further configured to receive a template set selection operation, and use the expression template set selected by the template set selection operation as the first expression template set;
and responding to an expression generation instruction triggered by the template set selection operation, and presenting a first expression corresponding to each expression template in the first expression template set.
In some embodiments, the apparatus further comprises:
a third receiving module, configured to receive a template set selection operation, use an expression template set selected by the template set selection operation as the first expression template set, and control a state of the first expression template set to be a selected state;
and the selected state is used for indicating that after the image of the first object is acquired and presented, the image of the first object is fused with each expression template in the first expression template set.
In some embodiments, the apparatus further comprises:
a fourth receiving module, configured to receive a focusing operation for an image of the first object;
and responding to an expression generation instruction triggered by the focusing operation, and presenting a first expression corresponding to each expression template in the first expression template set.
In some embodiments, the obtaining module 3552 is further configured to present a first thumbnail of the image of the first object;
correspondingly, the second presenting module 3553 is further configured to present a first expression corresponding to each expression template in the first expression template set when the first thumbnail is focused.
In some embodiments, the apparatus further comprises:
the third presentation module is used for presenting a second thumbnail of the image of the second object, the second thumbnail being obtained based on the image acquisition function entry;
switching the first thumbnail in focus to the second thumbnail in focus in response to a focus instruction for the second thumbnail;
responding to an expression generation instruction aiming at a first expression template set, and presenting a second expression corresponding to each expression template in the first expression template set;
and the second expression is obtained by fusing the image of the second object and the expression templates in the first expression template set.
In some embodiments, the apparatus further comprises:
the fourth presentation module is used for responding to the selection operation aiming at the second expression template set and presenting each expression template contained in the second expression template set;
responding to an expression generation instruction aiming at a second expression template set, and presenting a third expression corresponding to each expression template in the second expression template set;
and the third expression is obtained based on the fusion of the image of the first object and the expression template in the second expression template set.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the expression generation method provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the present application further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the method for generating the expression provided by the embodiment of the present application is implemented.
In some embodiments, the storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM; or it may be any device including one of, or any combination of, the above memories. The computer may be any of a variety of computing devices, including intelligent terminals and servers.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, e.g., in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for generating an expression, the method comprising:
responding to an expression editing instruction, presenting an image acquisition function entry, and presenting at least one expression template set, wherein the expression template set comprises at least two expression templates;
acquiring and presenting an image of a first object in response to a trigger operation for the image acquisition function entry;
responding to an expression generation instruction aiming at a first expression template set, and presenting a first expression corresponding to each expression template in the first expression template set;
and the first expression is obtained by fusing the image of the first object and the expression templates in the first expression template set.
2. The method of claim 1, wherein prior to presenting the image acquisition function entry, the method further comprises:
presenting a conversation function bar in a conversation interface, and presenting an expression editing function item in the conversation function bar;
and responding to the triggering operation aiming at the expression editing function item, and receiving the expression editing instruction.
3. The method of claim 1, wherein prior to presenting the image acquisition function entry, the method further comprises:
presenting a conversation function bar in a conversation interface, and presenting an expression adding function item in the conversation function bar;
responding to the triggering operation of the expression adding function item, presenting an expression selection area containing at least one expression, and presenting an expression editing entry in the expression selection area;
and responding to the triggering operation aiming at the expression editing entry, and receiving the expression editing instruction.
4. The method of claim 1, wherein said acquiring and presenting an image of a first object in response to a trigger operation for the image acquisition function entry comprises:
when the image acquisition function entry is an image capture function entry, responding to the trigger operation for the image acquisition function entry, presenting an image acquisition interface for acquiring the image of the first object, and presenting a shooting key for image acquisition in the image acquisition interface;
and responding to the triggering operation of the shooting key, and acquiring and presenting the image of the first object.
5. The method of claim 1, wherein said acquiring and presenting an image of a first object in response to a trigger operation for the image acquisition function entry comprises:
when the image acquisition function entry is an image capture function entry, responding to the trigger operation for the image acquisition function entry, presenting the image acquisition interface containing a facial expression frame, and presenting a shooting key for image acquisition in the image acquisition interface;
and acquiring an image corresponding to the face of the first object based on the facial expression frame in response to the triggering operation of the shooting key.
6. The method of claim 1, wherein said acquiring and presenting an image of a first object in response to a trigger operation for the image acquisition function entry comprises:
when the image acquisition function entry is an image selection entry, presenting an image selection interface for selecting the image of the first object from an atlas in response to the trigger operation for the image acquisition function entry,
at least two images are presented in the image selection interface;
and in response to an image selection operation triggered based on the image selection interface, presenting an image selected by the image selection operation as an image of the first object.
7. The method of claim 1, wherein said presenting a first expression corresponding to each expression template in the first set of expression templates comprises:
identifying an image of the first object, obtaining a face area of the first object;
fusing the facial area of the first object with the facial areas of the expression templates in the first expression template set respectively to obtain a first expression fused with the facial features of the first object and corresponding to each expression template;
presenting each of the obtained first expressions.
8. The method of claim 7, wherein fusing the facial region of the first object with the facial regions of the expression templates in the first expression template set respectively to obtain the first expression fused with the facial features of the first object corresponding to each expression template comprises:
identifying the facial-feature region in the face region of the first object;
performing edge smoothing on the face region, and
performing contrast enhancement processing on the facial-feature region in the face region to obtain a processed face region;
and processing the facial area of each expression template in the first expression template set according to the processed facial area to obtain a first expression fused with the facial features of the first object and corresponding to each expression template.
9. The method of claim 1, wherein presenting a first expression corresponding to each expression template in a first expression template set in response to an expression generation instruction for the first expression template set comprises:
receiving a template set selection operation, and taking the expression template set selected by the template set selection operation as the first expression template set;
and responding to an expression generation instruction triggered by the template set selection operation, and presenting a first expression corresponding to each expression template in the first expression template set.
10. The method of claim 1, wherein after the presenting at least one expression template set, the method further comprises:
receiving a template set selection operation, taking the expression template set selected by the template set selection operation as the first expression template set, and controlling the presented state of the first expression template set to be a selected state;
and the selected state is used for indicating that after the image of the first object is acquired and presented, the image of the first object is fused with each expression template in the first expression template set.
11. The method of claim 1, wherein after the acquiring and presenting the image of the first object, the method further comprises:
receiving a focusing operation for an image of the first object;
and responding to an expression generation instruction triggered by the focusing operation, and presenting a first expression corresponding to each expression template in the first expression template set.
12. The method of claim 1, wherein said presenting the image of the first object comprises:
presenting a first thumbnail of an image of the first object;
correspondingly, the presenting the first expression corresponding to each expression template in the first expression template set includes:
and when the first thumbnail is focused, presenting a first expression corresponding to each expression template in the first expression template set.
13. The method of claim 12, wherein after the presenting the first expression corresponding to each expression template in the first set of expression templates, the method further comprises:
presenting a second thumbnail of an image of a second object, the second thumbnail being obtained based on the image acquisition function entry;
switching the first thumbnail in focus to the second thumbnail in focus in response to a focus instruction for the second thumbnail;
responding to an expression generation instruction aiming at a first expression template set, and presenting a second expression corresponding to each expression template in the first expression template set;
and the second expression is obtained by fusing the image of the second object and the expression templates in the first expression template set.
14. An apparatus for generating an expression, the apparatus comprising:
the first presentation module is used for responding to an expression editing instruction, presenting an image acquisition function entrance and presenting at least one expression template set, wherein the expression template set comprises at least two expression templates;
the acquisition module is used for responding to the triggering operation aiming at the image acquisition function entrance, and acquiring and presenting the image of the first object;
the second presentation module is used for responding to an expression generation instruction aiming at the first expression template set and presenting first expressions corresponding to the expression templates in the first expression template set;
and the first expression is obtained by fusing the image of the first object and the expression templates in the first expression template set.
15. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the method of generating an expression according to any one of claims 1 to 13 when executing the executable instructions stored in the memory.
CN202010378163.9A 2020-05-07 2020-05-07 Expression generating method and device, electronic equipment and storage medium Active CN111541950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378163.9A CN111541950B (en) 2020-05-07 2020-05-07 Expression generating method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111541950A true CN111541950A (en) 2020-08-14
CN111541950B CN111541950B (en) 2023-11-03

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8406519B1 (en) * 2010-03-10 2013-03-26 Hewlett-Packard Development Company, L.P. Compositing head regions into target images
CN103279936A (en) * 2013-06-21 2013-09-04 重庆大学 Human face fake photo automatic combining and modifying method based on portrayal
CN104616330A (en) * 2015-02-10 2015-05-13 广州视源电子科技股份有限公司 Picture generation method and device
CN104637078A (en) * 2013-11-14 2015-05-20 腾讯科技(深圳)有限公司 Image processing method and device
WO2016177290A1 (en) * 2015-05-06 2016-11-10 北京蓝犀时空科技有限公司 Method and system for generating and using expression for virtual image created through free combination
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 A kind of picture countenance synthesis method and terminal
WO2017152673A1 (en) * 2016-03-10 2017-09-14 腾讯科技(深圳)有限公司 Expression animation generation method and apparatus for human face model
CN107578459A (en) * 2017-08-31 2018-01-12 北京麒麟合盛网络技术有限公司 Expression is embedded in the method and device of candidates of input method
CN107977928A (en) * 2017-12-21 2018-05-01 广东欧珀移动通信有限公司 Expression generation method, apparatus, terminal and storage medium
US20180173942A1 (en) * 2016-12-16 2018-06-21 Samsung Electronics Co., Ltd. Method and apparatus for generating facial expression and training method for generating facial expression
CN108388557A (en) * 2018-02-06 2018-08-10 腾讯科技(深圳)有限公司 Message treatment method, device, computer equipment and storage medium
CN108845741A (en) * 2018-06-19 2018-11-20 北京百度网讯科技有限公司 A kind of generation method, client, terminal and the storage medium of AR expression
CN109120866A (en) * 2018-09-27 2019-01-01 腾讯科技(深圳)有限公司 Dynamic expression generation method, device, computer readable storage medium and computer equipment
CN109215007A (en) * 2018-09-21 2019-01-15 维沃移动通信有限公司 A kind of image generating method and terminal device
WO2019015522A1 (en) * 2017-07-18 2019-01-24 腾讯科技(深圳)有限公司 Emoticon image generation method and device, electronic device, and storage medium
WO2019142127A1 (en) * 2018-01-17 2019-07-25 Feroz Abbasi Method and system of creating multiple expression emoticons
EP3531377A1 (en) * 2018-02-23 2019-08-28 Samsung Electronics Co., Ltd. Electronic device for generating an image including a 3d avatar reflecting face motion through a 3d avatar corresponding to a face

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8406519B1 (en) * 2010-03-10 2013-03-26 Hewlett-Packard Development Company, L.P. Compositing head regions into target images
CN103279936A (en) * 2013-06-21 2013-09-04 重庆大学 Method for automatically synthesizing and modifying fake face photos based on portraits
CN104637078A (en) * 2013-11-14 2015-05-20 腾讯科技(深圳)有限公司 Image processing method and device
CN104616330A (en) * 2015-02-10 2015-05-13 广州视源电子科技股份有限公司 Picture generation method and device
WO2016177290A1 (en) * 2015-05-06 2016-11-10 北京蓝犀时空科技有限公司 Method and system for generating and using expression for virtual image created through free combination
WO2017152673A1 (en) * 2016-03-10 2017-09-14 腾讯科技(深圳)有限公司 Expression animation generation method and apparatus for human face model
US20180173942A1 (en) * 2016-12-16 2018-06-21 Samsung Electronics Co., Ltd. Method and apparatus for generating facial expression and training method for generating facial expression
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 Picture facial expression synthesis method and terminal
WO2019015522A1 (en) * 2017-07-18 2019-01-24 腾讯科技(深圳)有限公司 Emoticon image generation method and device, electronic device, and storage medium
CN107578459A (en) * 2017-08-31 2018-01-12 北京麒麟合盛网络技术有限公司 Method and device for embedding expressions in input method candidates
CN107977928A (en) * 2017-12-21 2018-05-01 广东欧珀移动通信有限公司 Expression generation method, apparatus, terminal and storage medium
WO2019142127A1 (en) * 2018-01-17 2019-07-25 Feroz Abbasi Method and system of creating multiple expression emoticons
CN108388557A (en) * 2018-02-06 2018-08-10 腾讯科技(深圳)有限公司 Message treatment method, device, computer equipment and storage medium
EP3531377A1 (en) * 2018-02-23 2019-08-28 Samsung Electronics Co., Ltd. Electronic device for generating an image including a 3d avatar reflecting face motion through a 3d avatar corresponding to a face
CN108845741A (en) * 2018-06-19 2018-11-20 北京百度网讯科技有限公司 AR expression generation method, client, terminal and storage medium
CN109215007A (en) * 2018-09-21 2019-01-15 维沃移动通信有限公司 Image generation method and terminal device
CN109120866A (en) * 2018-09-27 2019-01-01 腾讯科技(深圳)有限公司 Dynamic expression generation method, device, computer readable storage medium and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SONG Hong; HUANG Xiaochuan; WANG Shuliang: "Automatic generation of multi-expression face portraits", Acta Electronica Sinica, no. 08 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001872A (en) * 2020-08-26 2020-11-27 北京字节跳动网络技术有限公司 Information display method, device and storage medium
US11922721B2 (en) 2020-08-26 2024-03-05 Beijing Bytedance Network Technology Co., Ltd. Information display method, device and storage medium for superimposing material on image
CN112800365A (en) * 2020-09-01 2021-05-14 腾讯科技(深圳)有限公司 Expression package processing method and device and intelligent device
CN112083866A (en) * 2020-09-25 2020-12-15 网易(杭州)网络有限公司 Expression image generation method and device
WO2022100124A1 (en) * 2020-11-11 2022-05-19 北京达佳互联信息技术有限公司 Magic table data processing method and electronic device
CN112423022A (en) * 2020-11-20 2021-02-26 北京字节跳动网络技术有限公司 Video generation and display method, device, equipment and medium
WO2022105862A1 (en) * 2020-11-20 2022-05-27 北京字节跳动网络技术有限公司 Method and apparatus for video generation and displaying, device, and medium
CN112866798B (en) * 2020-12-31 2023-05-05 北京字跳网络技术有限公司 Video generation method, device, equipment and storage medium
WO2022143253A1 (en) * 2020-12-31 2022-07-07 北京字跳网络技术有限公司 Video generation method and apparatus, device, and storage medium
CN112866798A (en) * 2020-12-31 2021-05-28 北京字跳网络技术有限公司 Video generation method, device, equipment and storage medium
WO2022156557A1 (en) * 2021-01-22 2022-07-28 北京字跳网络技术有限公司 Image display method and apparatus, device, and medium
CN114816599A (en) * 2021-01-22 2022-07-29 北京字跳网络技术有限公司 Image display method, apparatus, device and medium
CN114816599B (en) * 2021-01-22 2024-02-27 北京字跳网络技术有限公司 Image display method, device, equipment and medium
US12106410B2 (en) 2021-01-22 2024-10-01 Beijing Zitiao Network Technology Co., Ltd. Customizing emojis for users in chat applications
CN114913278A (en) * 2021-06-30 2022-08-16 完美世界(北京)软件科技发展有限公司 Expression model generation method and device, storage medium and computer equipment
CN115065835A (en) * 2022-05-20 2022-09-16 广州方硅信息技术有限公司 Live-broadcast expression display processing method, server, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111541950B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN111541950B (en) Expression generating method and device, electronic equipment and storage medium
JP2021517696A (en) Video stamp generation method and its computer program and computer equipment
CN113099298B (en) Method and device for changing virtual image and terminal equipment
KR102546016B1 (en) Systems and methods for providing personalized video
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US11558561B2 (en) Personalized videos featuring multiple persons
CN111768478B (en) Image synthesis method and device, storage medium and electronic equipment
CN111371993A (en) Image shooting method and device, computer equipment and storage medium
CN111530086B (en) Method and device for generating expression of game role
KR20230021640A (en) Customize soundtracks and hair styles in editable videos for multimedia messaging applications
CN115035220A (en) 3D virtual digital person social contact method and system
US20230326161A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN108875670A (en) Information processing method, device and storage medium
CN116129006A (en) Data processing method, device, equipment and readable storage medium
JP7502711B1 (en) PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS
US20240364839A1 (en) Personalized videos featuring multiple persons
JP2024082674A (en) Information processor, method for processing information, and program
CN118842992A (en) Picture preview method, device, equipment and medium based on virtual special effect template
JP2024083217A (en) Information processing device, information processing method, and program
CN118505862A (en) Method, device, equipment and storage medium for displaying virtual image
CN117170533A (en) Online state-based processing method and device, computer equipment and storage medium
CN117097914A (en) Method and device for processing anchor image in live broadcast picture
CN118660123A (en) Video recording method, device, electronic equipment and readable storage medium
CN117717784A (en) Game scene component generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40027414
Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant