CN111541950B - Expression generating method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111541950B
Authority
CN
China
Prior art keywords
expression
image
presenting
template set
image acquisition
Prior art date
Legal status
Active
Application number
CN202010378163.9A
Other languages
Chinese (zh)
Other versions
CN111541950A (en)
Inventor
刘佳卉
陈柯辰
钟媛
程功凡
佘渡离
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010378163.9A
Publication of CN111541950A
Application granted
Publication of CN111541950B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an expression generating method and apparatus, an electronic device, and a storage medium; the method comprises the following steps: responding to an expression editing instruction by presenting an image acquisition function entry and presenting at least one expression template set, where each expression template set comprises at least two expression templates; acquiring and presenting an image of a first object in response to a trigger operation on the image acquisition function entry; and responding to an expression generating instruction for a first expression template set by presenting a first expression corresponding to each expression template in the first expression template set, where each first expression is obtained by fusing the image of the first object with an expression template in the first expression template set. With the application, expressions fused with the user's features can be generated in batches, making the user's chat meme battles more entertaining.

Description

Expression generating method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of internet technologies and artificial intelligence technologies, and in particular, to a method and apparatus for generating expressions, an electronic device, and a storage medium.
Background
Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason and make decisions.
With the continuous development of artificial intelligence technology, it is increasingly applied to social clients such as instant messaging. When users chat online through instant messaging clients, they frequently use various expressions: besides cute expressions, funny meme-battle expressions are widely used. However, most expressions are released by designers for users to download, so the expressions sent by different users are easily identical, which defeats the purpose of a meme battle and offers poor personalization. In some existing techniques that let users compose expressions themselves, the user typically composes a photographed picture with a selected expression pendant, and one shot produces only one expression; the production efficiency is therefore low, it is difficult to keep up with the fast pace of a meme battle, and the user experience is poor.
Disclosure of Invention
The embodiments of the present application provide an expression generating method and apparatus, an electronic device, and a storage medium, which can generate expressions fused with the user's features in batches and make users' chat meme battles more engaging.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a method for generating expressions, which comprises the following steps:
responding to an expression editing instruction by presenting an image acquisition function entry, and presenting at least one expression template set, where each expression template set comprises at least two expression templates;
acquiring and presenting an image of a first object in response to a trigger operation on the image acquisition function entry;
responding to an expression generating instruction for a first expression template set by presenting a first expression corresponding to each expression template in the first expression template set;
where each first expression is obtained by fusing the image of the first object with an expression template in the first expression template set.
In the above aspect, the presenting an image acquisition interface for acquiring an image of a first object includes:
presenting an image acquisition interface, and presenting at least one of the following in the image acquisition interface: shooting prompt information and an expression frame for displaying an expression preview effect;
the shooting prompt information is used for prompting the first object to adjust at least one of the following information: shooting posture, shooting angle, and shooting position.
In the above solution, after the presenting the first expression corresponding to each expression template in the first expression template set, the method further includes:
responding to a selection operation on a second expression template set by presenting each expression template contained in the second expression template set;
responding to an expression generating instruction for the second expression template set by presenting a third expression corresponding to each expression template in the second expression template set;
where each third expression is obtained by fusing the image of the first object with an expression template in the second expression template set.
The embodiment of the application also provides an expression generating apparatus, which comprises:
the first presentation module is used for responding to the expression editing instruction, presenting an image acquisition function entry and presenting at least one expression template set, wherein the expression template set comprises at least two expression templates;
an acquisition module for acquiring and presenting an image of the first object in response to a trigger operation for the image acquisition function portal;
the second presentation module is used for responding to the expression generation instruction aiming at the first expression template set and presenting the first expression corresponding to each expression template in the first expression template set;
the first expression is obtained based on fusion of the image of the first object and an expression template in the first expression template set.
In the above scheme, the device further includes:
the first receiving module is used for presenting a conversation function column in a conversation interface and presenting expression editing function items in the conversation function column;
and responding to the triggering operation of the expression editing function item, and receiving the expression editing instruction.
In the above scheme, the device further includes:
the second receiving module is used for presenting a conversation function column in a conversation interface and presenting expression adding function items in the conversation function column;
in response to a triggering operation for the expression adding function item, presenting an expression selection area containing at least one expression, and presenting an expression editing entry in the expression selection area;
and responding to the triggering operation of the expression editing entrance, and receiving the expression editing instruction.
In the above solution, the acquisition module is further configured to, when the image acquisition function entry is an image capture entry, present, in response to a trigger operation on the entry, an image acquisition interface for capturing an image of the first object, and
present a shooting key for image capture in the image acquisition interface;
and capture and present the image of the first object in response to a trigger operation on the shooting key.
In the above solution, the acquiring module is further configured to present an image acquisition interface, and present at least one of the following in the image acquisition interface: shooting prompt information and an expression frame for displaying an expression preview effect;
the shooting prompt information is used for prompting the first object to adjust at least one of the following information: shooting posture, shooting angle, and shooting position.
In the above solution, the acquisition module is further configured to, when the image acquisition function entry is an image capture entry, present, in response to a trigger operation on the entry, an image acquisition interface containing a facial expression frame, and
present a shooting key for image capture in the image acquisition interface;
and capture an image of the face of the first object, based on the facial expression frame, in response to a trigger operation on the shooting key.
In the above solution, the acquisition module is further configured to, when the image acquisition function entry is an image selection entry, present, in response to a trigger operation on the entry, an image selection interface for selecting an image of the first object from an image collection,
and present at least two images in the image selection interface;
and, in response to an image selection operation triggered on the image selection interface, present the image selected by that operation as the image of the first object.
In the above solution, the second presenting module is further configured to recognize the image of the first object to obtain the face region of the first object;
fuse the face region of the first object with the face region of each expression template in the first expression template set, to obtain, for each expression template, a first expression fused with the facial features of the first object;
and present each first expression so obtained.
In the above solution, the second presenting module is further configured to recognize the facial-feature (five sense organs) region within the face region of the first object;
perform edge smoothing on the face region, and
perform contrast enhancement on the facial-feature region within the face region, to obtain a processed face region;
and process the face region of each expression template in the first expression template set according to the processed face region, to obtain, for each expression template, a first expression fused with the facial features of the first object.
In the above scheme, the second presenting module is further configured to receive a template set selection operation, and use the expression template set selected by the template set selection operation as the first expression template set;
and responding to an expression generating instruction triggered by the template set selection operation, and presenting a first expression corresponding to each expression template in the first expression template set.
In the above scheme, the device further includes:
the third receiving module is used for receiving a template set selection operation, taking the expression template set selected by the template set selection operation as the first expression template set, and controlling the state of the presented first expression template set to be a selected state;
the selected state is used for indicating that after the image of the first object is acquired and presented, the image of the first object is fused with each expression template in the first expression template set.
In the above scheme, the device further includes:
a fourth receiving module for receiving a focusing operation for an image of the first object;
and responding to the expression generating instruction triggered by the focusing operation, and presenting a first expression corresponding to each expression template in the first expression template set.
In the above solution, the acquisition module is further configured to present a first thumbnail of the image of the first object;
correspondingly, the second presenting module is further configured to present a first expression corresponding to each expression template in the first expression template set when the first thumbnail is focused.
In the above scheme, the device further includes:
a third presenting module, configured to present a second thumbnail of an image of a second object, where the second thumbnail is obtained based on the image obtaining function entry;
switching the first thumbnail to be focused to the second thumbnail to be focused in response to a focusing instruction for the second thumbnail;
presenting, in response to an expression generating instruction for the first expression template set, a second expression corresponding to each expression template in the first expression template set;
the second expression is obtained based on fusion of the image of the second object and the expression templates in the first expression template set.
In the above scheme, the device further includes:
a fourth presenting module, configured to respond to a selection operation for a second expression template set, and present each expression template included in the second expression template set;
and present, in response to an expression generating instruction for the second expression template set, a third expression corresponding to each expression template in the second expression template set;
the third expression is obtained based on fusion of the image of the first object and the expression templates in the second expression template set.
The embodiment of the application also provides electronic equipment, which comprises:
a memory for storing executable instructions;
and the processor is used for realizing the expression generating method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application also provides a computer readable storage medium which stores executable instructions, and when the executable instructions are executed by a processor, the expression generating method provided by the embodiment of the application is realized.
The embodiment of the application has the following beneficial effects:
in response to an expression editing instruction triggered by the user, an image of a first object is acquired through the image acquisition function entry and fused with the expression templates in a presented expression template set; because an expression template set contains multiple expression templates, a single expression generating instruction can generate the expression for every template in the set at once. Expressions fused with the user's features can thus be generated in batches, which makes users' chat meme battles more entertaining and improves user stickiness.
Drawings
Fig. 1 is a flowchart illustrating a method of generating an expression provided in the related art;
fig. 2 is a schematic diagram of an implementation scenario of a method for generating expressions according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a flowchart of a method for generating an expression according to an embodiment of the present application;
fig. 5A is a flowchart of triggering an expression editing instruction according to an embodiment of the present application;
fig. 5B is a second trigger flowchart of an expression editing instruction provided in an embodiment of the present application;
FIG. 6A is a flowchart for acquiring an image of a first object according to an embodiment of the present application;
FIG. 6B is a second flowchart of acquiring an image of a first object according to an embodiment of the present application;
FIG. 6C is a third flowchart of acquiring an image of a first object according to an embodiment of the present application;
fig. 7A is a schematic diagram of selecting an expression template set according to an embodiment of the present application;
FIG. 7B is a schematic diagram of an expression corresponding to a first object according to an embodiment of the present application;
fig. 8A is a schematic diagram of a triggering flow of an expression generating instruction according to an embodiment of the present application;
fig. 8B is a second schematic diagram of a triggering flow of an expression generating instruction according to an embodiment of the present application;
fig. 8C is a schematic diagram III of a triggering flow of an expression generating instruction according to an embodiment of the present application;
Fig. 9 is a schematic flow chart of fusion generation of a first expression according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an image of a first object presented by a thumbnail provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a switch between a first expression and a second expression according to an embodiment of the present application;
fig. 12 is a schematic flow chart of generating each third expression corresponding to the second expression template set according to an embodiment of the present application;
fig. 13 is a flowchart of a method for generating an expression according to an embodiment of the present application;
fig. 14 is a flowchart of a method for generating an expression according to an embodiment of the present application;
fig. 15 is a flowchart of a method for generating an expression according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an expression generating apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not imply a specific ordering. It should be understood that, where permitted, "first", "second" and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application will be used in the following explanation.
1) In response to: indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, the operation (or operations) may be performed in real time or with a set delay. Unless otherwise specified, no restriction is placed on the order in which multiple such operations are performed.
2) Selfie expression: an expression made from the user's own photo or video, carrying the user's face.
3) Selfie pendant: designed material patterns superimposed on the camera picture while taking a selfie, making the selfie more attractive or amusing.
4) Meme battle ("dou tu"): in group chats or private chats, users send funny expressions to entertain each other; common in social software.
5) Meme-battle expression: an expression that conveys a person's mood through a simple hand drawing or an exaggerated head portrait, often a spoof composite built on a well-known celebrity meme face.
6) Dynamic expression: an expression made of multiple picture frames, which can move.
7) Face recognition: based on a person's facial features, face recognition first determines whether a face is present in an input image or video stream; if so, it further gives the position and size of each face and the position of each main facial organ. Face recognition is an important branch of biometric recognition and builds on computer vision (CV), the science of how to make machines "see": cameras and computers take the place of human eyes to recognize, track and measure targets, and the images are further processed so that they are better suited for human observation or for transmission to detection instruments. Computer vision techniques typically include image processing, image recognition, video processing, and video content/behavior recognition.
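By way of illustration only, the face detection step just described might look like the following minimal Python sketch using OpenCV's bundled Haar cascades; the patent does not name a particular detector, so the choice of cascade classifier is an assumption.

```python
import cv2

# OpenCV's bundled frontal-face and eye cascades; an assumed, common choice --
# the patent does not specify which detector performs face recognition.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_faces(image_bgr):
    """Return, for each detected face, its bounding box and the boxes of the
    main facial organs (here approximated by eye detections)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        results.append({
            "face": (x, y, w, h),
            "eyes": [(x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes],
        })
    return results
```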
In the related art, when generating an expression carrying the user's image, referring to fig. 1 (a flowchart of an expression generating method provided in the related art), the terminal shoots a still picture containing the user's face through the camera function, and then generates a static expression fused with the user's face from the shot picture and a single expression pendant selected by the user. One shot can therefore produce only one expression, the production efficiency is low, and it is difficult to meet the user's need for a fast-paced meme battle; moreover, only static expressions can be produced, the visual effect is monotonous, and the user experience is poor.
Based on this, the embodiments of the present application provide a method, an apparatus, an electronic device, and a storage medium for generating an expression, so as to at least solve the above-mentioned problems, and respectively described below.
Based on the above explanation of terms in the embodiments of the present application, an implementation scenario of the expression generating method is described first. Referring to fig. 2, a schematic diagram of such an implementation scenario: to support an exemplary application, an application client, such as an instant messaging client, is deployed on the terminals (terminal 200-1 and terminal 200-2); the terminals connect to the server 100 through the network 30, which may be a wide area network, a local area network, or a combination of the two, using wireless or wired links for data transmission.
a terminal (e.g., terminal 200-1) is used for presenting an image acquisition function entry and presenting at least one expression template set in response to an expression editing instruction; acquiring and presenting an image of a first object in response to a trigger operation on the image acquisition function entry; and, in response to an expression generating instruction for the first expression template set, sending that expression generating instruction to the server;
a server 100, configured to receive an expression generating instruction for a first expression template set; responding to the expression generating instruction, fusing to obtain each first expression based on the image of the first object and each expression template in the first expression template set, and returning to the terminal;
a terminal (e.g., terminal 200-1) for receiving and presenting the first expressions corresponding to each expression template in the first expression template set.
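The exchange between terminal and server 100 can be pictured with the small client-side sketch below. The endpoint URL, field names and response shape are all hypothetical: the patent describes the interaction abstractly and defines no wire protocol.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint -- not defined anywhere in the patent.
SERVER_URL = "https://example.com/api/expressions/generate"

def request_batch_expressions(image_path: str, template_set_id: str) -> list:
    """Upload the first object's image together with a template-set id and
    receive one fused expression per template in that set."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            SERVER_URL,
            files={"image": f},
            data={"template_set": template_set_id},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["expressions"]  # assumed response field
```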
In practical applications, the server 100 may be a separately configured server supporting various services, or a server cluster; the terminal (e.g., terminal 200-1) may be a smart phone, a tablet, a notebook or another type of user terminal, or a wearable computing device, a personal digital assistant (PDA), a desktop computer, a cellular phone, a media player, a navigation device, a game console, a television, or a combination of any two or more of these or other data processing devices.
The following describes in detail the hardware structure of the electronic device of the expression generating method provided by the embodiment of the present application, where the electronic device includes, but is not limited to, a server or a terminal. Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and the electronic device 300 shown in fig. 3 includes: at least one processor 310, a memory 350, at least one network interface 320, and a user interface 330. The various components in the electronic device 300 are coupled together by a bus system 340. It is understood that the bus system 340 is used to enable connected communications between these components. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 3 as bus system 340.
The processor 310 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor (which may be a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The user interface 330 includes one or more output devices 331 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 330 also includes one or more input devices 332, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310.
Memory 350 includes volatile memory or non-volatile memory, and may include both. The non-volatile memory may be a read-only memory (ROM) and the volatile memory may be a random access memory (RAM). The memory 350 described in the embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 350 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
The operating system 351 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
network communication module 352 for reaching other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 include: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
a presentation module 353 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 331 (e.g., a display screen, speakers, etc.) associated with the user interface 330;
an input processing module 354 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the expression generating apparatus provided in the embodiments of the present application may be implemented in a software manner, and fig. 3 shows an expression generating apparatus 355 stored in a memory 350, which may be software in the form of a program, a plug-in, or the like, including the following software modules: first presentation module 3551, acquisition module 3552, and second presentation module 3553, which are logical, and thus may be arbitrarily combined or further split depending on the functions implemented, the functions of each module will be described below.
In other embodiments, the expression generating apparatus provided in the embodiments of the present application may be implemented by combining software and hardware. By way of example, it may be a processor in the form of a hardware decoding processor programmed to perform the expression generating method provided in the embodiments of the present application; such a processor may use one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic components.
Based on the implementation scenario of the expression generating method and the description of the electronic device in the embodiment of the present application, the expression generating method provided in the embodiment of the present application is described below. Referring to fig. 4, fig. 4 is a flowchart illustrating a method for generating an expression according to an embodiment of the present application; in some embodiments, the expression generating method may be implemented by a server or a terminal alone or cooperatively, and the method for generating expression provided by the embodiment of the present application includes:
Step 401: The terminal responds to the expression editing instruction by presenting an image acquisition function entry and presenting at least one expression template set.
Here, the terminal may run an instant messaging client that presents a session window in which the user converses; through the session window the user can perform instant messaging, such as sending text messages and expression messages. As for expressions, the terminal can present ready-made expressions produced by others as well as expressions edited by the user, and the user can send an expression message by clicking an expression presented by the terminal.
In practical applications, a user who wants to edit an expression can do so by triggering an expression editing instruction. The embodiment of the application provides a method for generating expressions based on the user's own image, which makes conversations that use expressions more interesting. After receiving an expression editing instruction triggered by the user, the terminal responds by presenting an image acquisition function entry for acquiring the user's image, and at the same time presents at least one expression template set from which the user can generate expressions. Each expression template set contains at least two expression templates, ensuring that the user can generate expressions fused with the user's image in batches. The expression templates may be dynamic or static.
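The data model implied by this step can be sketched as below: a template set holds at least two templates, and each template is static (one frame) or dynamic (several frames). Class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ExpressionTemplate:
    template_id: str
    frames: List[bytes]                     # one frame if static, several if dynamic
    face_region: Tuple[int, int, int, int]  # (x, y, w, h) where the user's face goes

    @property
    def is_dynamic(self) -> bool:
        return len(self.frames) > 1

@dataclass
class ExpressionTemplateSet:
    name: str                               # e.g. "hot", "strange", "lively"
    templates: List[ExpressionTemplate]

    def __post_init__(self):
        # The method generates expressions in batches, so a set must offer
        # at least two templates.
        if len(self.templates) < 2:
            raise ValueError("an expression template set contains at least two expression templates")
```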
In some embodiments, the terminal may trigger the expression editing instruction to implement editing of the expression by the user as follows: presenting a conversation function column in a conversation interface, and presenting expression editing function items in the conversation function column; and responding to the triggering operation aiming at the expression editing function item, and receiving an expression editing instruction.
In practical applications, the terminal presents a session function bar in the session interface for the user to input session messages, hold voice sessions, send picture messages, and so on. In the embodiment of the application, the session function bar also carries an expression editing function item; when the user wants to edit and generate a personalized expression, the item can be triggered by clicking or a similar operation. The terminal receives the user-triggered expression editing instruction in response to the trigger operation on the expression editing function item, and thereupon presents an interface containing the image acquisition entry.
For example, referring to fig. 5A, the first trigger flowchart of an expression editing instruction provided in an embodiment of the present application: the user clicks the "GIF" icon of the expression editing function item presented in the session function bar to trigger an expression editing instruction; in response to the click on the "GIF" icon, the terminal receives the expression editing instruction and presents an interface for the user to edit the expression, that is, an interface containing an image acquisition entry, through which an image acquisition interface such as the image shooting interface shown in fig. 5A can be reached.
In some embodiments, the terminal may also trigger the expression editing instruction in the following way: presenting a session function bar in the session interface and presenting an expression adding function item in it; in response to a trigger operation on the expression adding function item, presenting an expression selection area containing at least one expression, and presenting an expression editing entry in the expression selection area; and receiving the expression editing instruction in response to a trigger operation on the expression editing entry.
In practical applications, the terminal can also present an expression adding function item in the session function bar and, in response to a trigger operation on it, present an expression selection area containing various types of expressions, from which the user can directly send expression messages; an expression editing entry is presented in the same area, and when the terminal receives a trigger operation on that entry, it receives the user-triggered expression editing instruction and presents an interface containing an image acquisition entry.
For example, referring to fig. 5B, the second trigger flowchart of an expression editing instruction provided in an embodiment of the present application: the terminal presents an expression adding function item, the "heat map" button, in the session function bar; upon the user's click on that button, it presents an expression selection area containing various expressions such as the "immunity +1" expression and the "wash hands often, wear a mask" expression, and presents the expression editing entry, a "DIY meme battle" button, in the expression selection area. In response to the user's click on the "DIY meme battle" button, the terminal receives an expression editing instruction and presents an interface for the user to edit the expression, that is, an interface containing an image acquisition entry through which an image acquisition interface, such as the image shooting interface shown in fig. 5B, can be reached.
Step 402: An image of the first object is acquired and presented in response to a trigger operation on the image acquisition function entry.
After the terminal, in response to the user-triggered expression editing instruction, presents the interface containing the image acquisition entry, the user can trigger the presented image acquisition function entry by clicking or a similar operation, so as to acquire the image of the first object.
In some embodiments, the terminal may acquire the image of the first object as follows: when the image acquisition function entry is an image capture entry, in response to a trigger operation on the entry, presenting an image acquisition interface for capturing an image of the first object, with a shooting key presented in the interface; and, in response to a trigger operation on the shooting key, capturing and presenting the image of the first object.
Further, in some embodiments, at least one of the following may be presented in the image acquisition interface: shooting prompt information, and an expression frame for previewing the expression effect; the shooting prompt information prompts the first object to adjust at least one of the shooting posture, the shooting angle, and the shooting position.
Here, with the image acquisition function entry being an image capture entry, the terminal responds to a trigger operation on it by presenting an image acquisition interface containing a shooting key. The interface may also carry shooting prompt information for prompting the user to adjust the shooting posture, angle, position, and so on, together with an expression frame for previewing the expression effect, such as a round-faced cat-ear selfie pendant. In response to the user's trigger operation on the shooting key, the terminal captures and presents the image of the first object.
Referring to fig. 6A, the first flowchart of acquiring an image of the first object provided by an embodiment of the present application: the image acquisition interface presents a face frame prompting the user to adjust the shooting position, together with text prompts such as "please face the lens" and "find a spot with better light". During shooting, once the user's face is captured, the interface also presents an expression frame for previewing the expression effect, such as the round-faced cat-ear pendant. While capturing the user's image in response to the trigger operation on the shooting key, the terminal recognizes the face region and the facial-feature region in real time and attaches the expression frame to the user's face region, thereby previewing the expression to be generated. A text prompt such as "make a funny expression" may also be presented.
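The real-time preview described for fig. 6A could be realised roughly as in the sketch below, assuming a webcam and OpenCV; the drawn rectangle and text stand in for the cat-ear pendant and the shooting prompts, which a real client would render as artwork.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_first_object(out_path: str = "first_object.png"):
    cap = cv2.VideoCapture(0)                      # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            # Shooting prompt shown until a face is captured.
            cv2.putText(frame, "please face the lens", (20, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
        for (x, y, w, h) in faces:
            # Stand-in for the expression-frame preview attached to the face.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 200, 0), 2)
        cv2.imshow("image acquisition interface", frame)
        if cv2.waitKey(1) & 0xFF == ord(" "):      # space bar as the shooting key
            cv2.imwrite(out_path, frame)
            break
    cap.release()
    cv2.destroyAllWindows()
```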
In some embodiments, the terminal may also acquire the image of the first object as follows: when the image acquisition function entry is an image capture entry, in response to a trigger operation on the entry, presenting an image acquisition interface containing a facial expression frame, with a shooting key presented in the interface; and, in response to a trigger operation on the shooting key, capturing an image of the face of the first object based on the facial expression frame.
That is, with the image acquisition function entry being an image capture entry, the terminal responds to a trigger operation on it by presenting an image acquisition interface that includes the shooting key and may also present a facial expression frame. Once the terminal recognizes the user's face, it captures, during acquisition of the first object's image, an image of the first object's face carrying the facial expression frame.
In some embodiments, the terminal may also acquire the image of the first object as follows: when the image acquisition function entry is an image selection entry, in response to a trigger operation on the entry, presenting an image selection interface for selecting the image of the first object from an image collection, with at least two images presented in the interface; and, in response to an image selection operation triggered on the image selection interface, presenting the selected image as the image of the first object.
In practical applications, the image acquisition function entry may thus be an image selection entry through which the user enters an image selection interface and picks the first object's image from a collection, which may be any set of images stored on the terminal, such as the photo album or WeChat pictures. The terminal presents the image selection interface in response to the user's trigger operation on the entry; upon receiving an image selection operation triggered on that interface, it presents the selected image as the image of the first object.
For example, referring to fig. 6B, the second flowchart of acquiring an image of the first object provided by an embodiment of the present application: the image acquisition function entry presented as "+" is an image selection entry, which the user clicks to select the image of the first object. The terminal responds to the trigger operation on the image selection entry by presenting at least two images under a target path (such as WeChat or the photo album) and, in response to the image selection operation triggered on the image selection interface, takes the "image 1" selected by the user as the image of the first object.
In practical applications, when the user acquires the first object's image through the image acquisition function entry, the terminal can respond to the user's click by directly presenting an interface that offers both an image selection entry and an image capture entry, so that the user can pick the preferred acquisition method. Referring to fig. 6C, the third flowchart of acquiring an image of the first object provided by an embodiment of the present application: the terminal presents an interface containing the image selection entry "select from album" and the image capture entry "take photo" in response to the user's click on the image acquisition function entry, and the user triggers an image acquisition instruction by clicking the entry of the desired method. If the terminal receives an instruction triggered through the image capture entry "take photo", it presents an image acquisition interface for shooting the image of the first object; if it receives an instruction triggered through the image selection entry "select from album", it presents an image selection interface for choosing the image of the first object from the collection. The image of the first object is then acquired in the corresponding way.
Step 403: In response to the expression generating instruction for the first expression template set, the first expression corresponding to each expression template in the first expression template set is presented.
Here, each first expression is obtained by fusing the image of the first object with an expression template in the first expression template set. In some embodiments, the user fuses and generates expressions by selecting a personally desired expression template set. After presenting at least one expression template set, the terminal can receive a template set selection operation, take the selected expression template set as the first expression template set, and control the presented first expression template set to be in the selected state; the selected state indicates that, once the image of the first object is acquired and presented, it will be fused with each expression template in the first expression template set.
In practical application, the terminal receives a template set selection operation executed by a user, and takes the expression template set selected by the template set selection operation as a first expression template set so as to generate a first expression corresponding to each expression template in the first expression template set and corresponding to the first object based on the image of the first object.
For example, referring to fig. 7A, a schematic diagram of selecting an expression template set according to an embodiment of the present application: the terminal presents expression template sets including "hot", "strange", "lively", and so on, and the user can select one of them, such as the "hot" set, as needed. The terminal responds to the user's selection of the "hot" set by presenting each expression template in it and controlling the state of the "hot" set to be selected. While an expression template set is selected, the terminal can fuse the acquired image of the first object with each expression template in that set to obtain each first expression.
In the embodiment of the application, when the facial features of the first object are fused, the face region of each expression template is uniformly replaced with the face region of the first object; that is, if the face region of the first object has one eye closed, the fused expressions also have one eye closed. Specifically, referring to fig. 7B, a schematic diagram of expressions corresponding to the first object in an embodiment of the present application: the acquired image of the first object shows the face with one eye closed, so every expression generated from that image also shows the face with one eye closed.
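Under these semantics, the batch generation of step 403 reduces to one loop over the selected set. The sketch below reuses the illustrative ExpressionTemplateSet from earlier; fuse_frame stands for the per-frame fusion routine sketched after step 403, and all names are assumptions rather than the patent's own.

```python
from typing import Callable, List

def generate_first_expressions(user_face_img,
                               template_set: "ExpressionTemplateSet",
                               fuse_frame: Callable) -> List[list]:
    """One expression generating instruction yields a first expression for
    every template in the selected set at once."""
    results = []
    for template in template_set.templates:
        # A dynamic template is fused frame by frame so the result still moves;
        # a static template contributes a single frame.
        results.append([fuse_frame(user_face_img, frame, template.face_region)
                        for frame in template.frames])
    return results
```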
In some embodiments, the terminal may trigger the expression generating instruction to present the first expression by: receiving a template set selection operation, and taking the expression template set selected by the template set selection operation as a first expression template set; and responding to an expression generating instruction triggered by the template set selection operation, and presenting the first expression corresponding to each expression template in the first expression template set.
Here, after entering the image acquisition interface, the user may first acquire the image of the first object through the image acquisition function entry and then select a desired expression template set. After receiving the user-triggered template set selection operation, the terminal determines the selected expression template set as the first expression template set and, in response to the expression generating instruction triggered by that selection operation, directly generates and presents the expression, corresponding to the first object, for each expression template in the first expression template set.
For example, referring to fig. 8A, the first schematic diagram of the triggering flow of the expression generating instruction provided in an embodiment of the present application: the user first acquires the image of the first object through the image acquisition function entry; when the terminal receives a template set selection operation, it takes the "hot" expression template set selected by that operation as the first expression template set and, in response to the expression generating instruction triggered by the selection operation, generates and presents the expression, corresponding to the first object, for each expression template in the "hot" set.
In some embodiments, the terminal may further trigger an expression generating instruction to present the first expression by: receiving a focusing operation for an image of a first object; and responding to an expression generating instruction triggered by the focusing operation, and presenting the first expression corresponding to each expression template in the first expression template set.
Here, after entering the image acquisition interface, the user may first select the desired first expression template set; then, once the user acquires the image of the first object through the image acquisition function entry, the terminal receives a focusing operation, such as a selecting operation, on the image of the first object and, in response to the expression generating instruction triggered by the focusing operation, directly generates and presents the expression, corresponding to the first object, for each expression template in the first expression template set.
For example, referring to fig. 8B, a second schematic diagram of the triggering flow of the expression generating instruction provided in an embodiment of the present application: the user first selects the "hot" expression template set as the first expression template set; after the user acquires the image of the first object through the image acquisition function entry, that image is presented as a thumbnail; when the terminal receives a focusing operation performed by the user on the image of the first object (that is, on its thumbnail), it responds to the expression generating instruction triggered by the focusing operation by generating and presenting the expression of the first object corresponding to each expression template in the "hot" set.
In some embodiments, the terminal may further trigger the expression generating instruction and present the first expression as follows: receiving a confirmation operation for the image of the first object; and, in response to the expression generating instruction triggered by the confirmation operation, presenting the first expression corresponding to each expression template in the first expression template set.
Here, the user first selects the desired first expression template set and acquires an image of the first object through the image acquisition interface, for example by shooting an image or picking one from an image set. When the terminal receives the user's confirmation of the shot or selected image, it responds to the expression generating instruction triggered by that confirmation by directly generating and presenting the expression of the first object corresponding to each expression template in the first expression template set.
For example, referring to fig. 8C, a third schematic diagram of the triggering flow of an expression generating instruction according to an embodiment of the present application: the user first selects the "hot" expression template set as the first expression template set; after shooting the image of the first object through the image acquisition interface, or selecting it in the image selection interface, the user clicks a confirm button to confirm the image; on receiving this confirmation, the terminal responds to the expression generating instruction it triggers by automatically generating and presenting the expression of the first object corresponding to each template in the "hot" set; at the same time, the image of the first object may also be presented with its selected state identified.
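The three trigger paths above (figs. 8A-8C) differ only in which user action arrives last: the template set selection, the focusing on the image, or the confirmation of the image. The following is a minimal sketch of that dispatch logic in Python; the names `TemplateSet`, `ExpressionScreen`, `fuse`, and the handler methods are hypothetical illustrations, not the patent's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TemplateSet:
    name: str
    templates: List[object] = field(default_factory=list)

def fuse(subject_image, template):
    """Placeholder for the face-fusion step sketched later in this document."""
    return (subject_image, template)

class ExpressionScreen:
    def __init__(self):
        self.selected_set = None     # first expression template set
        self.subject_image = None    # image of the first object

    def _generate_all(self):
        # Once both inputs exist, every template in the set is fused at once.
        if self.selected_set is None or self.subject_image is None:
            return []
        return [fuse(self.subject_image, t) for t in self.selected_set.templates]

    def on_template_set_selected(self, template_set):   # trigger of fig. 8A
        self.selected_set = template_set
        return self._generate_all()

    def on_thumbnail_focused(self, image):              # trigger of fig. 8B
        self.subject_image = image
        return self._generate_all()

    def on_image_confirmed(self, image):                # trigger of fig. 8C
        self.subject_image = image
        return self._generate_all()
```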
In some embodiments, the terminal may generate the first expression based on the image of the first object as follows: identifying the image of the first object to obtain the face region of the first object; fusing the face region of the first object with the face region of each expression template in the first expression template set to obtain, for each template, a first expression fused with the facial features of the first object; and presenting the first expressions so obtained.
In some embodiments, the terminal may fuse the face region of the first object with the face region of each expression template as follows: identifying the five-sense-organ region in the face region of the first object; performing edge smoothing on the face region, and contrast enhancement on the five-sense-organ region, to obtain the processed face region; and processing the face region of each expression template in the first expression template set according to the processed face region, to obtain, for each template, a first expression fused with the facial features of the first object.
In practical application, face recognition is first performed on the acquired image of the first object to obtain the face region, and facial-feature recognition is then performed on that region to locate the five-sense-organ region of the first object. Contrast enhancement is then applied to the five-sense-organ region: according to the average brightness of the face and the recognized five-sense-organ region, the weight of brighter regions and of unimportant non-feature regions is reduced, raising their transparency, while the colors of darker regions (where shadow is heavier) and of the five-sense-organ region are deepened and their transparency reduced. Edge smoothing is then applied to the face region: a transition effect is added at its edge and a feathering effect around it, so that the face blends more naturally when fused into an expression template.
After the contrast enhancement and edge smoothing, the processed face region is obtained, and the face region of each expression template in the first expression template set is processed according to it, yielding, for each template, a first expression fused with the facial features of the first object. In practice, processing a template's face region according to the processed face region may mean adding the processed face region onto the template's face region, or replacing the template's face region with it.
For example, referring to fig. 9, a schematic flow chart of fusing and generating the first expression according to an embodiment of the present application: first, the face region and the five-sense-organ region of the first object are recognized, edge smoothing is performed on the face region, and contrast enhancement on the five-sense-organ region, giving the processed face region. For the fusion, the face region in the expression template may first be removed to obtain a faceless template; this template is then overlaid with a transparent layer carrying the animation keyframes, producing an expression file in the PAG format. Finally, the processed face region of the first object is imported into the face slot of the PAG-format expression, yielding the expression fused with the facial features of the first object.
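The processing just described can be sketched roughly as follows, using OpenCV and NumPy. The patent does not name concrete operators, so a Haar-cascade detector stands in for face recognition, a brightness-weighted alpha rule stands in for the contrast/transparency adjustment, and an elliptical blurred mask stands in for the feathering; all constants are illustrative, and the PAG compositing step is omitted.

```python
import cv2
import numpy as np

def process_face(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Assumes at least one face is found; take the first detection.
    x, y, w, h = detector.detectMultiScale(gray, 1.1, 5)[0]
    face = bgr_image[y:y + h, x:x + w].astype(np.float32)

    # Contrast/transparency rule: pixels darker than the mean face
    # brightness stay opaque and deepened; brighter, non-feature pixels
    # fade out (higher transparency).
    luma = cv2.cvtColor(face.astype(np.uint8), cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean = float(luma.mean())
    alpha = np.clip((mean - luma) / mean + 0.5, 0.0, 1.0)

    # Edge smoothing: an elliptical mask with a blurred (feathered) border
    # so the cut-out blends naturally into the template.
    mask = np.zeros_like(luma)
    cv2.ellipse(mask, (w // 2, h // 2), (w // 2, h // 2), 0, 0, 360, 255, -1)
    feather = cv2.GaussianBlur(mask, (31, 31), 0) / 255.0

    rgba = np.dstack([face, alpha * feather * 255.0]).astype(np.uint8)
    return rgba  # BGRA face patch, ready to composite onto each template
```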
In some embodiments, the terminal may present the image of the first object as a first thumbnail; in this way, after the first expression corresponding to the first object has been generated, the first expression corresponding to each expression template in the first expression template set is presented whenever the first thumbnail is focused.
In practical application, after acquiring the image of the first object, the terminal extracts the face region of the first object from the image, generates a thumbnail based on the extracted face region, and presents it. For example, referring to fig. 10, a schematic diagram of the image of a first object presented as a thumbnail according to an embodiment of the present application: to the right of the image acquisition function entry "+", the image of the first object (that is, its extracted face region) is presented as a thumbnail; when the user selects this thumbnail, the terminal responds to the selection by presenting the focused thumbnail in a frame-selected form and, at the same time, presenting the expression of the first object corresponding to each expression template in the first expression template set.
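A minimal sketch of the thumbnail step, building on the `process_face` sketch above; the 96-pixel size is an arbitrary assumption.

```python
import cv2

def face_thumbnail(bgr_image, size=96):
    # Crop-and-process the face, then shrink it for display beside the "+"
    # image acquisition entry.
    face_bgra = process_face(bgr_image)   # from the earlier sketch
    return cv2.resize(face_bgra, (size, size), interpolation=cv2.INTER_AREA)
```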
In some embodiments, the terminal may switch between the expressions corresponding to different objects' images under the same expression template set as follows: presenting a second thumbnail of the image of a second object, the second thumbnail being obtained through the image acquisition function entry; switching focus from the first thumbnail to the second thumbnail in response to a focusing instruction for the second thumbnail; and, in response to an expression generating instruction for the first expression template set, presenting the second expression corresponding to each expression template in the set; the second expression is obtained by fusing the image of the second object with the expression templates in the first expression template set.
Here, after acquiring the image of the first object through the image acquisition function entry, the user may go on to acquire an image of a second object, which the terminal likewise presents as a thumbnail. When a focusing instruction for the second thumbnail is triggered by the user's click, the terminal switches focus from the first thumbnail to the second and, in response to the expression generating instruction triggered by focusing on the second object's thumbnail, fuses the image of the second object with each expression template in the first expression template set, obtaining and presenting the expression of the second object for each template. In this way, switching between the thumbnails of different objects switches the presented expressions between those objects.
For example, referring to fig. 11, a schematic diagram of switching between presenting the first expression and the second expression according to an embodiment of the present application: the first expression template set is the selected "hot" set; when the terminal obtains an image of the second object through the image acquisition function entry (for example, an image capture entry), it presents a second thumbnail for the second object; when a focusing instruction for the second thumbnail is triggered by the user's click, the terminal switches focus from the first thumbnail to the second, that is, the frame selection moves from the first thumbnail to the second as shown in fig. 11; the terminal can then receive the expression generating instruction triggered by focusing on the second object's image, fuse that image with each expression template in the "hot" set, and obtain and present the expression of the second object for each template in the set.
Specifically, in fig. 11 (1) the thumbnail of the first object is focused, so the expressions presented correspond to the first object and are fused with its facial features (see the dashed box); when the thumbnail of the second object is focused, as in fig. 11 (2), the presentation switches to the expressions of the second object, fused with the second object's facial features (see the dashed box).
In some embodiments, the terminal may generate the expressions corresponding to each expression template in a second expression template set as follows: in response to a selection operation for the second expression template set, presenting each expression template it contains; and, in response to an expression generating instruction for the second expression template set, presenting the third expression corresponding to each of its expression templates; the third expression is obtained by fusing the image of the first object with the expression templates in the second expression template set.
Here, after generating and presenting the first expressions for the first expression template set, the terminal may also generate expressions for other expression template sets. When it receives the user's selection of the second expression template set, it presents each expression template contained in that set; when it receives an expression generating instruction for the second set, for example one triggered by a focusing operation on the image of the first object, it fuses the image of the first object with each expression template in the second set, obtaining and presenting the expression of the first object for each of those templates.
For example, referring to fig. 12, a schematic flow chart of generating the third expressions for the second expression template set according to an embodiment of the present application: the terminal responds to the user's selection of the second, "strange", expression template set by presenting each expression template it contains, each still in its original form; when a focusing operation on the image of the first object is received, the terminal responds to the expression generating instruction for the "strange" set triggered by that operation, fuses the image of the first object with the templates in the set, and obtains and presents the corresponding expressions, each fused with the facial features of the first object.
By applying this embodiment of the application, the image of the first object is acquired through the image acquisition function entry in response to a user-triggered expression editing instruction, and is fused with the expression templates in the presented expression template set; since the set contains a plurality of templates, the expression for every template in the set can be generated at once when the expression generating instruction is received. Expressions fused with the user's features can thus be generated in batches, making the user's sticker battles ("dou tu") in chat more entertaining.
The expression generating method provided by the embodiments of the application is described further below, taking as an example a terminal running an instant messaging client to generate expressions fused with the user's image. Referring to fig. 13, a flowchart of an expression generating method according to an embodiment of the present application, the method includes:
step 1301: the terminal runs the instant messaging client and presents a session interface containing a session function bar.
Here, an instant messaging client is installed on the terminal; running it presents a session interface through which the user inputs and sends session messages, and this interface contains a conversation function bar in which various function items are presented, such as a voice call function item and a video call function item.
Step 1302: based on the conversation function field, an expression editing instruction is received, and an image acquisition function entry is presented.
Here, the expression editing instruction instructs the editing and generation of a selfie expression fused with the user's image. In practical application, the terminal may trigger the expression editing instruction in either of two ways: an expression editing function item is presented in the conversation function bar, and the terminal receives the expression editing instruction in response to a triggering operation on that item;
or, the conversation function bar carries an expression adding function item; in response to a triggering operation on it, the terminal presents an expression selection area containing at least one expression, together with an expression editing entry in that area, and receives the expression editing instruction in response to a triggering operation on the entry.
After receiving the expression editing instruction, the terminal responds to the expression editing instruction and presents an image acquisition function entry for acquiring the user image.
Step 1303: in response to a trigger operation for the image acquisition function portal, an image acquisition interface is presented.
Step 1304: and responding to an image acquisition instruction triggered based on the image acquisition interface, acquiring an image of the first object and synchronizing the image to the server.
Step 1305: and receiving a template set selection operation, and determining the expression template set selected by the template set selection operation as a first expression template set.
Step 1306: and responding to the expression generating instruction triggered by the template set selection operation, and sending the expression generating instruction aiming at the first expression template set to a server.
Step 1307: the server receives an expression generation instruction for the first expression template set, and identifies a facial region of an image of the first object and a five-sense organ region of the facial region of the first object.
Step 1308: and performing contrast enhancement processing on the facial region, and performing edge smoothing processing on the facial region to obtain the processed facial region.
Step 1309: and adding the processed facial regions to the facial regions of the expression templates in the first expression template set to obtain the expressions of the facial features of the first object fused with the corresponding expression templates, and returning to the terminal.
Step 1310: and the terminal receives and presents the expressions of the facial features fused with the first object corresponding to each expression template in the first expression template set.
The expression generating method provided by the embodiments of the application is described further below, taking as an example a terminal running an instant messaging client to generate expressions fused with the user's image. Referring to fig. 14, a flowchart of an expression generating method according to an embodiment of the present application, the method includes:
step 1401: the terminal runs the instant messaging client and presents a session interface containing a session function bar.
Here, an instant messaging client is installed on the terminal; running it presents a session interface through which the user inputs and sends session messages, and this interface contains a conversation function bar in which various function items are presented, such as a voice call function item and a video call function item.
Step 1402: based on the conversation function field, an expression editing instruction is received, an image acquisition function entry is presented, and at least one expression template set is presented.
The expression template set comprises at least two expression templates.
Here, the expression editing instruction instructs the editing and generation of a selfie expression fused with the user's image. In practical application, the terminal may trigger the expression editing instruction in either of two ways: an expression editing function item is presented in the conversation function bar, and the terminal receives the expression editing instruction in response to a triggering operation on that item;
or, the conversation function bar carries an expression adding function item; in response to a triggering operation on it, the terminal presents an expression selection area containing at least one expression, together with an expression editing entry in that area, and receives the expression editing instruction in response to a triggering operation on the entry.
After receiving the expression editing instruction, the terminal responds to the expression editing instruction and presents an image acquisition function entry for acquiring the user image.
Step 1403: and receiving a template set selection operation, and determining the expression template set selected by the template set selection operation as a first expression template set.
Step 1404: in response to a trigger operation for the image acquisition function portal, an image acquisition interface is presented.
Step 1405: and responding to an image acquisition instruction triggered based on the image acquisition interface, acquiring an image of the first object and synchronizing the image to the server.
Step 1406: the terminal presents the image of the first object by means of a thumbnail.
Step 1407: and receiving and responding to the focusing operation of the image of the first object and the triggered expression generating instruction, and sending the expression generating instruction for the first expression template set to a server.
Step 1408: the server receives an expression generation instruction for the first expression template set, and identifies a facial region of an image of the first object and a five-sense organ region of the facial region of the first object.
Step 1409: and performing contrast enhancement processing on the facial region, and performing edge smoothing processing on the facial region to obtain the processed facial region.
Step 1410: and adding the processed facial regions to the facial regions of the expression templates in the first expression template set to obtain the expressions of the facial features of the first object fused with the corresponding expression templates, and returning to the terminal.
Step 1411: and the terminal receives and presents the expressions of the facial features fused with the first object corresponding to each expression template in the first expression template set.
The expression generating method provided by the embodiments of the application is described further below, taking as an example a terminal running an instant messaging client to generate expressions fused with the user's image. Referring to fig. 15, a flowchart of an expression generating method according to an embodiment of the present application, the method includes:
step 1501: the terminal runs the instant messaging client and presents a session interface containing a session function bar.
Here, an instant messaging client is installed on the terminal; running it presents a session interface through which the user inputs and sends session messages, and this interface contains a conversation function bar in which various function items are presented, such as a voice call function item and a video call function item.
Step 1502: based on the conversation function field, an expression editing instruction is received, an image acquisition function entry is presented, and at least one expression template set is presented.
The expression template set comprises at least two expression templates.
Here, the expression editing instruction instructs the editing and generation of a selfie expression fused with the user's image. In practical application, referring specifically to figs. 5A-B, the terminal may receive a user-triggered expression editing instruction in either of two ways: an expression editing function item is presented in the conversation function bar, and the terminal receives the expression editing instruction in response to a triggering operation on that item;
or, the conversation function bar carries an expression adding function item; in response to a triggering operation on it, the terminal presents an expression selection area containing at least one expression, together with an expression editing entry in that area, and receives the expression editing instruction in response to a triggering operation on the entry.
After receiving the expression editing instruction, the terminal responds to the expression editing instruction and presents an image acquisition function entry for acquiring the user image.
Step 1503: and responding to the triggering operation for the image acquisition function entrance, and presenting an image acquisition interface containing a shooting key.
Here, the image acquisition function entry is an image capture entry, and in response to a triggering operation on it the terminal presents an image capture interface that includes a shooting key. The interface may also carry shooting prompt information prompting the user to adjust shooting posture, shooting angle, shooting position, and so on; see fig. 6A, where the interface presents a face frame prompting the user to adjust the shooting position, along with text prompts such as "please face the lens" and "find a spot with better light".
Meanwhile, during shooting, once the user's face is captured, the image capture interface also presents an expression frame for previewing the expression effect, such as a round-faced cat-ear expression pendant. While collecting the user's image, the terminal recognizes the face region and the five-sense-organ region in real time and attaches the expression frame to the user's face region, displaying a preview of the expression to be generated. A text prompt inviting the user to smile, or the like, may also be presented.
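A minimal sketch of such a live preview, assuming a webcam at index 0, OpenCV's Haar-cascade face detector, and a BGRA pendant image supplied by the caller; none of these specifics come from the patent.

```python
import cv2

def preview_loop(pendant_bgra):
    cam = cv2.VideoCapture(0)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            # Scale the pendant to the detected face and alpha-blend it on.
            overlay = cv2.resize(pendant_bgra, (w, h))
            alpha = overlay[..., 3:4] / 255.0
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = (overlay[..., :3] * alpha
                                       + roi * (1.0 - alpha)).astype("uint8")
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) == 27:   # Esc exits the preview
            break
    cam.release()
    cv2.destroyAllWindows()
```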
In practical applications, the image acquisition function entry may instead be an image selection entry, through which the user enters an image selection interface to pick the first object's image from an atlas; the atlas may be any collection of images stored on the terminal, such as the photo album or QQ pictures. The terminal responds to the user's triggering operation on the entry by presenting the image selection interface; when an image selection operation is received through it, the selected image is taken as the image of the first object. In this embodiment, the selected image is an image with facial features.
Step 1504: and responding to the triggering operation of the shooting key, acquiring and presenting the image of the first object, and synchronizing the image of the first object to the server.
Here, the captured image of the first object may be a photo or a video frame containing facial features.
Step 1505: and receiving a confirmation operation aiming at the acquired image of the first object, responding to an expression generating instruction triggered by the confirmation operation, and sending the expression generating instruction aiming at the first expression template set to a server.
Here, when the user's confirmation of the collected image of the first object is received (that is, the collected image is focused by default), the terminal responds to the expression generating instruction for the selected first expression template set triggered by the confirmation by sending the instruction to the background server, notifying it to generate, based on the image of the first object, the expression corresponding to each expression template in the first expression template set.
See, for example, fig. 8C. The user first selects the "hot" expression template set as the first expression template set; after shooting the image of the first object through the image capture interface, or selecting it in the image selection interface, the user clicks a confirm button to confirm the image; on receiving this confirmation, the terminal responds to the expression generating instruction it triggers by sending an expression generating instruction for the first expression template set to the server, notifying it to generate the expression for each template in the set based on the acquired image of the first object.
Step 1506: the server receives an expression generation instruction for the first expression template set, and identifies a facial region of an image of the first object and a five-sense organ region of the facial region of the first object.
Step 1507: and performing contrast enhancement processing on the facial region, and performing edge smoothing processing on the facial region to obtain the processed facial region.
In practical application, face recognition is first performed on the acquired image of the first object to obtain the face region, and facial-feature recognition is then performed on it to locate the five-sense-organ region. Contrast enhancement is then applied to the five-sense-organ region: according to the average brightness of the face and the recognized five-sense-organ region, the weight of brighter regions and of unimportant non-feature regions is reduced, raising their transparency, while the colors of darker regions (heavier shadow) and of the five-sense-organ region are deepened and their transparency reduced. Edge smoothing is then applied to the face region, adding a transition effect at its edge and a feathering effect around it, so that the fusion into the expression template looks more natural.
Step 1508: and adding the processed facial regions to the facial regions of the expression templates in the first expression template set to obtain the expressions of the facial features of the first object fused with the corresponding expression templates, and returning to the terminal.
After the contrast enhancement and edge smoothing, the processed face region is obtained and added to the face region of each expression template in the first expression template set, yielding, for each template, the first expression fused with the facial features of the first object. Specifically, the processed face region of the first object may be added to each expression template, such as a PAG template; the PAG template defines the size scale, position, and angle at which the face region of the first object is fused into the template.
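A sketch of how such per-template metadata might drive the placement, assuming the template supplies `scale`, `angle`, and a target position `(x, y)` for the face slot; the field names and the OpenCV-based compositing are illustrative assumptions, not the PAG format itself.

```python
import cv2
import numpy as np

def place_face(face_bgra, template_img, cfg):
    h, w = face_bgra.shape[:2]
    # Rotate and scale the face patch around its centre, then translate it
    # to the slot the template defines for the face region.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), cfg["angle"], cfg["scale"])
    m[:, 2] += (cfg["x"], cfg["y"])
    th, tw = template_img.shape[:2]
    warped = cv2.warpAffine(face_bgra, m, (tw, th))

    # Alpha-composite the warped face over the template (BGR or BGRA).
    alpha = warped[..., 3:4].astype(np.float32) / 255.0
    out = template_img.astype(np.float32)
    out[..., :3] = warped[..., :3] * alpha + out[..., :3] * (1.0 - alpha)
    return out.astype(np.uint8)
```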
The expression templates may be dynamic or static, so the fused expressions may accordingly be dynamic or static.
Step 1509: and the terminal receives and presents the expressions of the facial features fused with the first object corresponding to each expression template in the first expression template set.
Here, the user can send an expression message by clicking the corresponding expression.
The resulting expression may be a dynamic or a static expression fused with the user's facial features; for example, the user's facial features may be fused into a hand-drawn, exaggerated avatar meme expression.
In practical application, the fused expression may be black-and-white or colored, and may also carry amusing captions and the like, further improving the user's experience of sticker battles in chat.
Step 1510: the terminal acquires and presents the image of the second object through the image acquisition function entrance.
Here, the image of the second object may also be presented by way of a thumbnail.
Step 1511: in response to a focus instruction for a thumbnail of a second object, switching the thumbnail of the first object to the thumbnail of the second object to the focus, and sending an expression generating instruction for the second object corresponding to the first expression template set to a server.
Step 1512: the server receives the expression generating instruction, fuses the expression templates in the first expression template set based on the image of the second object and the expression templates in the first expression template set to obtain the expression corresponding to the second object in the first expression template set, and returns the expression to the terminal.
Step 1513: and switching the presented expression corresponding to the first object to the presented expression corresponding to the second object.
Here, focus switching between the thumbnails of different objects realizes the presentation of the corresponding objects' expressions. Referring to fig. 11: in fig. 11 (1) the thumbnail of the first object is focused, so the expressions presented correspond to the first object and are fused with its facial features (see the dashed box); when the thumbnail of the second object is focused, as in fig. 11 (2), the presentation switches to the second object's expressions, fused with the second object's facial features (see the dashed box).
In this way, meme expressions carrying the user's facial features are generated in batches for the user to deploy in conversational chat.
The description continues with the expression generating apparatus 355 provided by the embodiments of the present application; in some embodiments, the apparatus may be implemented as software modules. Referring to fig. 16, a schematic structural diagram of the expression generating apparatus 355 according to an embodiment of the present application, the apparatus 355 includes:
A first presenting module 3551 for presenting an image acquisition function entry in response to an expression editing instruction, and presenting at least one expression template set comprising at least two expression templates;
an acquisition module 3552 for acquiring and presenting an image of the first object in response to a trigger operation for the image acquisition function portal;
a second presenting module 3553, configured to present, in response to an expression generating instruction for a first expression template set, a first expression corresponding to each expression template in the first expression template set;
the first expression is obtained based on fusion of the image of the first object and an expression template in the first expression template set.
In some embodiments, the apparatus further comprises:
the first receiving module is used for presenting a conversation function column in a conversation interface and presenting expression editing function items in the conversation function column;
and responding to the triggering operation of the expression editing function item, and receiving the expression editing instruction.
In some embodiments, the apparatus further comprises:
the second receiving module is used for presenting a conversation function column in a conversation interface and presenting expression adding function items in the conversation function column;
In response to a triggering operation for the expression adding function item, presenting an expression selection area containing at least one expression, and presenting an expression editing entry in the expression selection area;
and responding to the triggering operation of the expression editing entrance, and receiving the expression editing instruction.
In some embodiments, the acquiring module 3552 is further configured to, when the image acquisition function entry is an image capture entry, present, in response to a trigger operation for the entry, an image capture interface for acquiring an image of the first object, and
presenting shooting keys for image acquisition in the image acquisition interface;
and responding to the triggering operation of the shooting key, and acquiring and presenting the image of the first object.
In some embodiments, the acquiring module 3552 is further configured to present an image acquisition interface, and present at least one of the following in the image acquisition interface: shooting prompt information and an expression frame for displaying an expression preview effect;
the shooting prompt information is used for prompting the first object to adjust at least one of the following information: shooting posture, shooting angle, and shooting position.
In some embodiments, the acquiring module 3552 is further configured to, when the image acquisition function entry is an image capture entry, present, in response to a trigger operation for the entry, the image capture interface containing a facial expression frame, and
presenting a shooting key for image acquisition in the image acquisition interface;
and responding to the triggering operation of the shooting key, and acquiring an image of the face of the first object based on the facial expression frame.
In some embodiments, the acquisition module 3552 is further configured to, when the image acquisition function entry is an image selection entry, in response to a trigger operation for the image acquisition function entry, present an image selection interface for selecting an image of a first object from an atlas,
and presenting at least two images in the image selection interface;
and responding to an image selection operation triggered based on the image selection interface, and presenting the image selected by the image selection operation as the image of the first object.
In some embodiments, the second presenting module 3553 is further configured to identify an image of the first object, resulting in a facial region of the first object;
fusing the facial region of the first object with the facial region of each expression template in the first expression template set, respectively, to obtain a first expression of each expression template fused with the facial features of the first object;
and presenting each obtained first expression.
In some embodiments, the second presentation module 3553 is further configured to identify a five-sense-organ region in the facial region of the first object;
performing edge smoothing on the face region, and
performing contrast enhancement processing on the five-sense organ region in the face region to obtain a processed face region;
and processing the facial region of each expression template in the first expression template set according to the processed facial region, to obtain, for each expression template, a first expression fused with the facial features of the first object.
In some embodiments, the second presenting module 3553 is further configured to receive a template set selection operation, and take the expression template set selected by the template set selection operation as the first expression template set;
and responding to an expression generating instruction triggered by the template set selection operation, and presenting a first expression corresponding to each expression template in the first expression template set.
In some embodiments, the apparatus further comprises:
the third receiving module is used for receiving a template set selection operation, taking the expression template set selected by the template set selection operation as the first expression template set, and controlling the state of the presented first expression template set to be a selected state;
the selected state is used for indicating that after the image of the first object is acquired and presented, the image of the first object is fused with each expression template in the first expression template set.
In some embodiments, the apparatus further comprises:
a fourth receiving module for receiving a focusing operation for an image of the first object;
and responding to the expression generating instruction triggered by the focusing operation, and presenting a first expression corresponding to each expression template in the first expression template set.
In some embodiments, the acquisition module 3552 is further configured to present a first thumbnail of an image of the first object;
correspondingly, the second presenting module 3553 is further configured to present, when the first thumbnail is focused, a first expression corresponding to each expression template in the first expression template set.
In some embodiments, the apparatus further comprises:
A third presenting module, configured to present a second thumbnail of an image of a second object, where the second thumbnail is obtained based on the image obtaining function entry;
switching the first thumbnail to be focused to the second thumbnail to be focused in response to a focusing instruction for the second thumbnail;
responding to an expression generating instruction aiming at a first expression template set, and presenting a second expression corresponding to each expression template in the first expression template set;
the second expression is obtained based on fusion of the image of the second object and the expression templates in the first expression template set.
In some embodiments, the apparatus further comprises:
a fourth presenting module, configured to respond to a selection operation for a second expression template set, and present each expression template included in the second expression template set;
responding to an expression generating instruction aiming at a second expression template set, and presenting a third expression corresponding to each expression template in the second expression template set;
the third expression is obtained based on fusion of the image of the first object and the expression templates in the second expression template set.
The embodiment of the application also provides electronic equipment, which comprises:
A memory for storing executable instructions;
and the processor is used for realizing the expression generating method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application also provides a computer readable storage medium which stores executable instructions, and when the executable instructions are executed by a processor, the expression generating method provided by the embodiment of the application is realized.
In some embodiments, the storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or CD-ROM, or any device including one or any combination of the above memories; the computing device may be any of various devices including smart terminals and servers.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system; they may be stored as part of a file that holds other programs or data, for example in one or more scripts within a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for generating expressions, the method comprising:
responding to the expression editing instruction, presenting an image acquisition function entry, and presenting at least one expression template set, wherein the expression template set comprises at least two expression templates;
Acquiring an image of a first object in response to a triggering operation for the image acquisition function entry, and presenting a first thumbnail of the image of the first object;
when the image acquisition function entry is an image capture entry, an image capture interface for acquiring an image of the first object is presented, and at least one of the following is presented in the image capture interface: shooting prompt information and an expression frame for displaying an expression preview effect; the shooting prompt information is used for prompting the first object to adjust at least one of the following information: shooting posture, shooting angle, and shooting position;
responding to an expression generating instruction aiming at a first expression template set, and presenting a first expression corresponding to each expression template in the first expression template set; the first expression is obtained based on fusion of the image of the first object and an expression template in the first expression template set;
presenting a second thumbnail of an image of a second object;
switching the first thumbnail to be focused to the second thumbnail to be focused in response to a focusing instruction for the second thumbnail;
responding to an expression generating instruction aiming at a first expression template set, and presenting a second expression corresponding to each expression template in the first expression template set; the second expression is obtained based on fusion of the image of the second object and the expression templates in the first expression template set.
2. The method of claim 1, wherein prior to the presenting the image acquisition function portal, the method further comprises:
presenting a conversation function column in a conversation interface, and presenting expression editing function items in the conversation function column;
and responding to the triggering operation of the expression editing function item, and receiving the expression editing instruction.
3. The method of claim 1, wherein prior to the presenting the image acquisition function portal, the method further comprises:
presenting a conversation function column in a conversation interface, and presenting expression adding function items in the conversation function column;
in response to a triggering operation for the expression adding function item, presenting an expression selection area containing at least one expression, and presenting an expression editing entry in the expression selection area;
and responding to the triggering operation of the expression editing entrance, and receiving the expression editing instruction.
4. The method of claim 1, wherein the acquiring and presenting the image of the first object in response to the triggering operation for the image acquisition function portal comprises:
when the image acquisition function entry is an image capture entry, responding to a triggering operation for the image acquisition function entry, presenting an image capture interface for acquiring an image of a first object, and
Presenting shooting keys for image acquisition in the image acquisition interface;
and responding to the triggering operation of the shooting key, and acquiring and presenting the image of the first object.
5. The method of claim 1, wherein the acquiring and presenting the image of the first object in response to the triggering operation for the image acquisition function portal comprises:
when the image acquisition function entry is an image capture entry, responding to a triggering operation for the image acquisition function entry, presenting the image capture interface containing a facial expression frame, and
presenting a shooting key for image acquisition in the image acquisition interface;
and responding to the triggering operation of the shooting key, and acquiring an image of the face of the first object based on the facial expression frame.
6. The method of claim 1, wherein the acquiring and presenting the image of the first object in response to the triggering operation for the image acquisition function portal comprises:
when the image acquisition function portal is an image selection portal, presenting an image selection interface for selecting an image of a first object from an atlas in response to a trigger operation for the image acquisition function portal,
And presenting at least two images in the image selection interface;
and responding to an image selection operation triggered based on the image selection interface, and presenting the image selected by the image selection operation as the image of the first object.
7. The method of claim 1, wherein the presenting the first expression corresponding to each expression template in the first expression template set comprises:
identifying an image of the first object to obtain a facial region of the first object;
fusing the facial region of the first object with the facial region of each expression template in the first expression template set, respectively, to obtain a first expression of each expression template fused with the facial features of the first object;
and presenting each obtained first expression.
8. The method of claim 7, wherein the fusing the facial region of the first object with the facial region of each expression template in the first expression template set, respectively, to obtain a first expression of each expression template fused with the facial features of the first object, comprises:
identifying a five-sense-organ region in the facial region of the first object;
Performing edge smoothing on the face region, and
performing contrast enhancement processing on the five-sense organ region in the face region to obtain a processed face region;
and processing the facial region of each expression template in the first expression template set according to the processed facial region, to obtain, for each expression template, a first expression fused with the facial features of the first object.
9. The method of claim 1, wherein the presenting a first expression corresponding to each expression template in the first set of expression templates in response to an expression generation instruction for the first set of expression templates comprises:
receiving a template set selection operation, and taking the expression template set selected by the template set selection operation as the first expression template set;
and responding to an expression generating instruction triggered by the template set selection operation, and presenting a first expression corresponding to each expression template in the first expression template set.
10. The method of claim 1, wherein after the presenting the at least one expression template set, the method further comprises:
receiving a template set selection operation, taking the expression template set selected by the template set selection operation as the first expression template set, and controlling the state of the presented first expression template set to be a selected state;
The selected state is used for indicating that after the image of the first object is acquired and presented, the image of the first object is fused with each expression template in the first expression template set.
11. The method of claim 1, wherein after the capturing and presenting the image of the first object, the method further comprises:
receiving a focusing operation for an image of the first object;
and responding to the expression generating instruction triggered by the focusing operation, and presenting a first expression corresponding to each expression template in the first expression template set.
12. The method of claim 1, wherein,
the second thumbnail is acquired based on the image acquisition function entry.
13. An expression generating device, characterized in that the device comprises:
the first presentation module is used for responding to the expression editing instruction, presenting an image acquisition function entry and presenting at least one expression template set, wherein the expression template set comprises at least two expression templates;
an acquisition module for acquiring an image of a first object in response to a trigger operation for the image acquisition function entry, and presenting a first thumbnail of the image of the first object; when the image acquisition function entry is an image capture entry, an image capture interface for acquiring the image of the first object is presented, and at least one of the following is presented in the image capture interface: shooting prompt information and an expression frame for displaying an expression preview effect; the shooting prompt information is used for prompting the first object to adjust at least one of the following information: shooting posture, shooting angle, and shooting position;
The second presentation module is used for responding to an expression generation instruction aiming at a first expression template set and presenting the generated first expression corresponding to each expression template in the first expression template set; the first expression is obtained based on fusion of the image of the first object and an expression template in the first expression template set;
a third rendering module for rendering a second thumbnail of the image of the second object; switching the first thumbnail to be focused to the second thumbnail to be focused in response to a focusing instruction for the second thumbnail; responding to an expression generating instruction aiming at a first expression template set, and presenting a generated second expression corresponding to each expression template in the first expression template set; the second expression is obtained based on fusion of the image of the second object and the expression templates in the first expression template set.
14. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor for implementing the expression generation method according to any one of claims 1 to 12 when executing the executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions which, when executed by a processor, implement the expression generation method of any one of claims 1 to 12.
CN202010378163.9A 2020-05-07 2020-05-07 Expression generating method and device, electronic equipment and storage medium Active CN111541950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378163.9A CN111541950B (en) 2020-05-07 2020-05-07 Expression generating method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111541950A (en) 2020-08-14
CN111541950B (en) 2023-11-03

Family

ID=71980582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010378163.9A Active CN111541950B (en) 2020-05-07 2020-05-07 Expression generating method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111541950B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001872B * 2020-08-26 2021-09-14 Beijing Bytedance Network Technology Co Ltd Information display method, device and storage medium
CN112800365A * 2020-09-01 2021-05-14 Tencent Technology Shenzhen Co Ltd Expression package processing method and device and intelligent device
CN112083866A * 2020-09-25 2020-12-15 Netease Hangzhou Network Co Ltd Expression image generation method and device
CN112363661B * 2020-11-11 2022-08-26 Beijing Dajia Internet Information Technology Co Ltd Magic expression data processing method and device and electronic equipment
CN112423022A * 2020-11-20 2021-02-26 Beijing Bytedance Network Technology Co Ltd Video generation and display method, device, equipment and medium
CN112866798B * 2020-12-31 2023-05-05 Beijing Zitiao Network Technology Co Ltd Video generation method, device, equipment and storage medium
CN114816599B * 2021-01-22 2024-02-27 Beijing Zitiao Network Technology Co Ltd Image display method, device, equipment and medium
CN115065835A * 2022-05-20 2022-09-16 Guangzhou Fanggui Information Technology Co Ltd Live-broadcast expression display processing method, server, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102387570B1 * 2016-12-16 2022-04-18 Samsung Electronics Co., Ltd. Method and apparatus of generating facial expression and learning method for generating facial expression

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8406519B1 * 2010-03-10 2013-03-26 Hewlett-Packard Development Company, L.P. Compositing head regions into target images
CN103279936A * 2013-06-21 2013-09-04 Chongqing University Automatic combining and modifying method for human face composite photos based on portraits
CN104637078A * 2013-11-14 2015-05-20 Tencent Technology Shenzhen Co Ltd Image processing method and device
CN104616330A * 2015-02-10 2015-05-13 Guangzhou Shiyuan Electronic Technology Co Ltd Image generation method and device
WO2016177290A1 * 2015-05-06 2016-11-10 Beijing Lanxi Shikong Technology Co Ltd Method and system for generating and using expression for virtual image created through free combination
WO2017152673A1 * 2016-03-10 2017-09-14 Tencent Technology Shenzhen Co Ltd Expression animation generation method and apparatus for human face model
CN106875460A * 2016-12-27 2017-06-20 Shenzhen Gionee Communication Equipment Co Ltd Picture facial expression synthesis method and terminal
WO2019015522A1 * 2017-07-18 2019-01-24 Tencent Technology Shenzhen Co Ltd Emoticon image generation method and device, electronic device, and storage medium
CN107578459A * 2017-08-31 2018-01-12 Beijing Qilin Hesheng Network Technology Co Ltd Method and device for embedding expressions in input method candidates
CN107977928A * 2017-12-21 2018-05-01 Guangdong Oppo Mobile Telecommunications Corp Ltd Expression generation method, apparatus, terminal and storage medium
WO2019142127A1 * 2018-01-17 2019-07-25 Feroz Abbasi Method and system of creating multiple expression emoticons
CN108388557A * 2018-02-06 2018-08-10 Tencent Technology Shenzhen Co Ltd Message processing method, device, computer equipment and storage medium
EP3531377A1 * 2018-02-23 2019-08-28 Samsung Electronics Co., Ltd. Electronic device for generating an image including a 3d avatar reflecting face motion through a 3d avatar corresponding to a face
CN108845741A * 2018-06-19 2018-11-20 Beijing Baidu Netcom Science and Technology Co Ltd Generation method, client, terminal and storage medium for AR expressions
CN109215007A * 2018-09-21 2019-01-15 Vivo Mobile Communication Co Ltd Image generating method and terminal device
CN109120866A * 2018-09-27 2019-01-01 Tencent Technology Shenzhen Co Ltd Dynamic expression generation method, device, computer readable storage medium and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic generation of multi-expression face portraits; Song Hong; Huang Xiaochuan; Wang Shuliang; Acta Electronica Sinica (08); full text *

Also Published As

Publication number Publication date
CN111541950A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN111541950B (en) Expression generating method and device, electronic equipment and storage medium
KR102503413B1 (en) Animation interaction method, device, equipment and storage medium
CN108322832B (en) Comment method and device and electronic equipment
KR20230096043A (en) Side-by-side character animation from real-time 3D body motion capture
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
KR20230107844A (en) Personalized avatar real-time motion capture
CN112396679B (en) Virtual object display method and device, electronic equipment and medium
KR20230107655A (en) Body animation sharing and remixing
CN107111889A (en) Method and system for using images with interactive filters
WO2023070021A1 (en) Mirror-based augmented reality experience
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
CN111986076A (en) Image processing method and device, interactive display device and electronic equipment
KR102546016B1 (en) Systems and methods for providing personalized video
CN111768478B (en) Image synthesis method and device, storage medium and electronic equipment
CN113330453A (en) System and method for providing personalized video for multiple persons
CN111722775A (en) Image processing method, device, equipment and readable storage medium
CN114463470A (en) Virtual space browsing method and device, electronic equipment and readable storage medium
US11430158B2 (en) Intelligent real-time multiple-user augmented reality content management and data analytics system
KR101694303B1 (en) Apparatus and method for generating editable visual object
WO2022247766A1 (en) Image processing method and apparatus, and electronic device
CN113794799A (en) Video processing method and device
KR20230026343A (en) Personalized videos using selfies and stock videos
Jikadra et al. Video calling with augmented reality using WebRTC API
JP2007026088A (en) Model creation apparatus
CN115499672B (en) Image display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40027414
Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant