CN114880062B - Chat expression display method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN114880062B
Authority
CN
China
Prior art keywords
expression
chat
identifier
candidate
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210602226.3A
Other languages
Chinese (zh)
Other versions
CN114880062A
Inventor
薛源
沈姿绮
沈其
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210602226.3A
Publication of CN114880062A
Application granted
Publication of CN114880062B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Abstract

The application provides a chat expression display method, a chat expression display device, an electronic device, and a storage medium. The chat expression display method includes: receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, wherein the operation candidate is used to re-edit the received chat expression identifier; and generating a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate. On the basis of the chat expression identifiers sent by chat objects, the application increases interaction between expression identifiers by generating corresponding composite expression identifiers according to the corresponding operation instructions, thereby improving the expressive effect when users communicate with expression identifiers and improving the user experience of chat communication.

Description

Chat expression display method, device, electronic device and storage medium
Technical Field
The present application relates to the field of computer applications, and in particular, to a chat expression display method, a chat expression display device, an electronic device, and a storage medium.
Background
Instant messaging (IM) tools on intelligent terminals have developed rapidly, bringing users communication modes that are more convenient and richer than short messages and multimedia messages. In IM tools on mobile terminals, expressions such as "magic expressions", "emoji", or "interesting expressions" are important message forms. Meanwhile, the introduction of emoticons on social platforms adds interest and personalized display, and they are increasingly popular with users.
However, when existing users communicate, the expression packages they send are isolated from one another: the expression identifiers sent by each user are mutually independent, and users lack interactive operations based on the content of those expression packages.
Disclosure of Invention
In view of this, the application provides a chat expression display method, a device, an electronic device, and a storage medium, so as to increase the interactivity of expression identifiers between users and improve the user experience.
Based on the above object, the present application provides a chat expression display method, including:
receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, wherein the operation candidate is used to re-edit the received chat expression identifier;
and generating a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate.
In some embodiments, the operation candidates include an expression fitting candidate;
the generating a composite expression identifier based on the chat expression identifier comprises the following steps:
and inserting a preset expression identifier at one side of the chat expression identifier to generate a composite expression identifier comprising at least two expression identifiers.
In some implementations, the graphical user interface includes an expression candidate region containing a plurality of candidate expression identifiers;
the method comprises the following steps:
and responding to a selection operation aiming at a first candidate expression mark, and replacing the preset expression mark with the first candidate expression mark determined through the selection operation.
In some embodiments, after the preset expression identifier is inserted at one side of the chat expression identifier, the method further includes:
and in response to receiving a drag operation on the preset expression identifier, adjusting the position, size, and/or rotation angle of the preset expression identifier according to the drag operation.
In some implementations, the generating a composite expression identifier including at least two expression identifiers further includes:
generating and displaying a hide option corresponding to each expression identifier;
and in response to a selection operation on a hide option, hiding, within the composite expression identifier, the expression identifier corresponding to that hide option.
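The per-identifier hiding behavior described above might be modeled as follows; this is a hypothetical sketch (class name, toggle semantics) rather than the patent's implementation:

```python
class CompositeExpression:
    """Composite expression identifier with a hide option per member identifier."""

    def __init__(self, identifiers):
        self.identifiers = list(identifiers)
        self.hidden = set()

    def toggle_hidden(self, identifier):
        """Selecting the hide option toggles that identifier's visibility."""
        if identifier in self.hidden:
            self.hidden.discard(identifier)
        else:
            self.hidden.add(identifier)

    def visible(self):
        """Identifiers actually rendered in the composite, in original order."""
        return [i for i in self.identifiers if i not in self.hidden]
```

Tracking hidden members separately, instead of deleting them, lets the user reveal an identifier again without rebuilding the composite.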
In some embodiments, the operation candidates include a recording candidate;
the generating a composite expression identifier based on the chat expression identifier comprises the following steps:
determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression feature data from a face image of the user; and generating a first face map according to the first expression feature data, and mapping the first face map onto the model to generate the composite expression identifier.
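The recording pipeline above can be sketched as follows; everything here (feature dictionaries standing in for tracked facial landmarks, the `AvatarModel` and `FaceMap` types) is an illustrative assumption rather than the patent's actual data structures:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FaceMap:
    """Face map generated from expression feature data (e.g. landmark offsets)."""
    features: dict


@dataclass
class AvatarModel:
    """Model determined from the received chat expression identifier."""
    name: str
    face_map: Optional[FaceMap] = None


def generate_face_map(expression_features: dict) -> FaceMap:
    """Build a face map from the captured expression feature data."""
    return FaceMap(features=dict(expression_features))


def compose_expression(model: AvatarModel, expression_features: dict) -> AvatarModel:
    """Map the generated face map onto the chosen model to form the composite expression."""
    model.face_map = generate_face_map(expression_features)
    return model
```

In a real client, `expression_features` would come from a face-tracking SDK; here a plain dictionary stands in for that output.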
In some embodiments, the acquiring the first expression feature data of the face image of the user includes:
and responding to a recording instruction, and continuously acquiring the first expression characteristic data within the duration of the recording instruction or within the time indicated by the recording instruction.
In some embodiments, the method further comprises:
and continuously acquiring sound data of the user so as to load the sound data into the compound expression mark.
In some embodiments, the chat expression identifier is determined by:
in response to a selection operation by the chat object on at least one second candidate expression identifier, determining the selected second candidate expression identifier as a target expression identifier;
in response to an editing operation by the chat object on the target expression identifier, collecting second expression feature data of the chat object;
and generating a second face map according to the second expression feature data, and mapping the second face map onto a model of the target expression identifier to generate the chat expression identifier.
In some implementations, the at least one second candidate expression identifier is determined by:
acquiring text information input by the chat object in a session input box;
and identifying keyword information in the text information, and determining, according to the keyword information, at least one second candidate expression identifier matching the keyword information.
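The keyword matching step might look like the following sketch; the keyword index and its contents are invented for illustration, and a production system would likely use tokenization and ranking rather than a plain dictionary lookup:

```python
KEYWORD_INDEX = {          # hypothetical keyword -> expression-identifier index
    "happy": ["smile", "laugh"],
    "haha": ["laugh", "rofl"],
    "sad": ["cry"],
}


def match_candidates(text: str, index=KEYWORD_INDEX) -> list:
    """Return candidate expression identifiers whose keywords occur in the input
    text, preserving first-seen order and deduplicating."""
    seen, candidates = set(), []
    for word in text.lower().split():
        for ident in index.get(word, []):
            if ident not in seen:
                seen.add(ident)
                candidates.append(ident)
    return candidates
```

The matched identifiers would then be displayed in the expression candidate region for the chat object to pick from.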
In some embodiments, the method further comprises:
and acquiring text information input by a user, and adding the text information into the composite expression identifier.
Based on the same conception, the application also provides a chat expression display device, which includes:
a determining module, configured to receive a chat expression identifier sent by a chat object and to display, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, wherein the operation candidate is used to re-edit the received chat expression identifier;
and a generating module, configured to generate a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate.
Based on the same conception, the application also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any one of the above when executing the program.
Based on the same conception, the present application also provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to implement the method as described in any one of the above.
From the above, the chat expression display method, device, electronic device, and storage medium provided by the application include: receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, wherein the operation candidate is used to re-edit the received chat expression identifier; and generating a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate. On the basis of the chat expression identifiers sent by chat objects, the application increases interaction between expression identifiers by generating corresponding composite expression identifiers according to the corresponding operation instructions, thereby improving the expressive effect when users communicate with expression identifiers and the user experience of chat communication.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required by the embodiments or the related descriptions are briefly introduced below. The drawings described below are merely embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a chat expression display method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a scenario in which a chat object according to an embodiment of the present application sends a chat expression identifier;
fig. 3 is a schematic view of a scene of inserting a preset expression identifier according to an embodiment of the present application;
fig. 4 is a schematic view of a scenario in which a sending end and a receiving end according to an embodiment of the present application receive a composite expression identifier;
Fig. 5 is a schematic view of a scene in which corresponding candidate expression identifiers are matched after text information is input, according to an embodiment of the present application;
fig. 6 is a schematic view of a recorded scene of a composite expression identifier according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a chat expression display apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the present specification will be further described in detail below with reference to the accompanying drawings.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present application have the ordinary meaning understood by those of ordinary skill in the art to which the present application belongs. The terms "first", "second", and the like used in the embodiments of the present application do not denote any order, quantity, or importance, but are merely used to distinguish one element from another. The word "comprising", "including", or the like means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected", "coupled", and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and so on are used merely to indicate relative positional relationships, which may change when the absolute position of the described object changes.
As described in the background section, in the chat interaction function of current social platforms and applications, when users are ready to communicate with expressions, a user clicks an icon of the expression library to pop up an expression selection frame, selects an expression in the frame, and then clicks to send it. As online social interaction becomes more widespread, the expressions users employ have diversified, and users are increasingly accustomed to replacing simple words with expressions; for example, a user directly expresses "happy" or "haha" with a smiling expression. However, as expressions are used more frequently, the lack of interactivity and correlation between the expressions users send becomes more and more prominent. For example, when a chat object sends several expressions and the user replies with a single expression, it is uncertain which of the several expressions the reply addresses. This hinders the continuity of thought expressed through expression interaction, leaves the expressions without interactivity, and reduces the user experience.
In view of the above practical situation, an embodiment of the present application provides a chat expression display scheme. On the basis of the chat expression identifiers sent by chat objects, the scheme increases interaction between expression identifiers by generating corresponding composite expression identifiers according to the corresponding operation instructions, thereby improving the expressive effect when users communicate with expression identifiers and the user experience of chat communication.
Fig. 1 shows a flow chart of a chat expression display method according to the present application. The method specifically includes:
step 101, receiving a chat expression identifier sent by a chat object, and displaying the chat expression identifier and an operation candidate for the chat expression identifier in a graphical user interface, wherein the operation candidate is used for editing the received chat expression identifier again.
In this step, the chat object is the peer user that interacts with the current user in the current graphical user interface; it may be a single user or multiple users. The chat expression identifier is a static or dynamic picture expression; in practice, a dynamic picture may be a sequence of still pictures displayed in a certain order. The graphical user interface provides a window interface for inputting and displaying text, images, and the like, such as a chat window in which the user communicates with other chat objects over a network, and this window interface can display chat expression identifiers.
When determining the operation candidates for the chat expression identifier, the specific chat expression identifier may first be determined. This determination may be a recognition process: during a chat, one or more chat expression identifiers sent by the chat object can be recognized, and the specific form of each identifier determined, for example whether it is a static picture, a dynamic picture (GIF), or a custom expression identifier generated by real-time recording. In a specific embodiment, a custom expression identifier generated by real-time recording, such as a custom "pseudo-me" expression identifier, provides several 3D avatar models for the user to choose from; after the choice, an image capturing device such as a camera captures the user's real facial expression changes, and a texture map generated from these changes is mapped onto the 3D avatar model, so that the model makes facial expression changes similar or identical to those just recorded, finally producing a custom dynamic expression identifier. In some embodiments, these expression identifiers may then be stored in a pre-established expression identifier library. Finally, according to the determined chat expression identifier, corresponding operation candidates are displayed alongside it. The operation candidates are options for operations that can be performed on the chat expression identifier, such as adding further expression identifiers to the current chat expression identifier, or making a new expression identifier based on the model or texture of the current expression.
In specific embodiments, the operation candidates may be insert operations, record operations, modify operations, and so on. The operation candidates corresponding to each chat expression identifier, or to each type of chat expression identifier, may be preset; for example, a static picture expression identifier may correspond to insert and modify operations, while a dynamic expression identifier may correspond to insert and record operations. In a specific embodiment, as shown in Fig. 2, after the chat expression identifier sent by the chat object is recognized, the display position of the operation candidates can be set according to the specific application scenario, for example below the chat expression identifier, below the chat object's chat frame, at one side, and so on.
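The preset correspondence between identifier types and operation candidates could be held in a simple lookup table; the type names and candidate sets below are illustrative assumptions, not values from the patent:

```python
OPERATION_CANDIDATES = {   # hypothetical mapping: identifier type -> candidates
    "static": ["insert", "modify"],
    "animated": ["insert", "record"],
    "recorded": ["insert", "record", "modify"],
}


def candidates_for(expression_type: str) -> list:
    """Look up the operation candidates to display for a received expression
    identifier, falling back to the insert operation for unknown types."""
    return OPERATION_CANDIDATES.get(expression_type, ["insert"])
```

The fallback keeps the interface usable even when a newly introduced identifier type has no preset candidate list yet.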
And 102, generating a composite expression identifier based on the chat expression identifier in response to receiving the operation instruction for the operation candidate.
In this step, since the operation candidates for the chat expression identifier have already been displayed in the previous step, the user may select one of them, generating a corresponding operation instruction and thereby a composite expression identifier based on the chat expression identifier. In some optional embodiments, the composite expression identifier may be formed by stitching together at least two expression identifiers, one of which is the chat expression identifier sent by the chat object. Alternatively, it may be a new dynamic expression obtained by determining a model from the chat expression identifier sent by the chat object (for example, a dynamic image expression) and then fusing that model with the current expression features of the user, so that the composite combines the model of the received chat expression identifier with the expression features of the current user.
In a specific embodiment, taking stitching two expression identifiers into a composite expression identifier as an example, as shown in Fig. 3, the composite expression identifier is generated after the user performs an add operation (represented by a sticker in Fig. 3) on a received chat expression identifier. The generated composite expression identifier may arrange two or more expression identifiers side by side, or alternate them in the order in which they were sent, for example in a left-to-right layout. In some embodiments, to further highlight the sequence, an earlier expression in the composite may sit slightly above a later one. In some embodiments, the expression added to the composite expression identifier is adjustable: the user may select, from their own expression library, the expression identifier to add, and the composite expression identifier is generated accordingly. The added expression identifier may be of the same type as the chat expression identifier or of a different type; for example, if the chat expression identifier is a static picture, the newly added identifier may be a dynamic picture, producing a composite formed of a static picture and a dynamic picture.
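The side-by-side and staggered arrangements described in this embodiment can be sketched as follows; the vertical-offset values and style names are hypothetical layout choices:

```python
def arrange_composite(identifiers_in_send_order, style: str = "side_by_side"):
    """Lay out the composite's identifiers as (identifier, vertical_offset) pairs.

    "side_by_side": left-to-right in send order, all on one baseline.
    "staggered":    each later expression sits slightly lower, so earlier
                    expressions appear slightly above later ones."""
    if style == "side_by_side":
        return [(ident, 0) for ident in identifiers_in_send_order]
    if style == "staggered":
        return [(ident, i * 10) for i, ident in enumerate(identifiers_in_send_order)]
    raise ValueError(f"unknown arrangement style: {style}")
```

The offsets here are abstract units; a client would scale them to the rendered size of each expression identifier.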
In some embodiments, after a basic composite expression identifier is generated, the user may further adjust it; for example, the position, size, orientation, and other attributes of each expression identifier in the composite may be adjusted to personalize the combined result. In a specific embodiment, both the composite expression identifier as a whole and each expression identifier within it can be adjusted by dragging, by operation buttons, and so on. Because the composite expression identifier contains the chat expression identifier previously sent by the chat object, or applies the same dynamic model as that identifier, it is more targeted: after receiving it, the chat object can see at a glance which expression the composite is interacting with. The meaning of the expression identifiers a user sends is therefore better understood, improving the user experience.
Finally, the composite expression identifier is output, and both the user and the chat object can continue to create further composite expression identifiers on its basis. Fig. 4a shows the composite expression identifier displayed at the sending end (i.e., the end used by the user); the user can continue to operate on the composite expression identifier. Fig. 4b shows the composite expression identifier displayed at the receiving end (i.e., the end used by the chat object); the chat object can likewise treat the composite expression identifier as a whole chat expression identifier and continue to perform similar operations.
The composite expression identifier is then output, and the adjusted composite expression identifier can be stored, displayed, used, or further processed. The specific output mode can be chosen flexibly according to the application scenario and implementation requirements.
For example, for an application scenario in which the method of the present embodiment is executed on a single device, the composite expression identifier may be directly output in a display manner on a display component (display, projector, etc.) of the current device, so that an operator of the current device can directly see the content of the composite expression identifier from the display component.
As another example, for an application scenario in which the method of this embodiment is executed on a system formed of multiple devices, the composite expression identifier may be sent, through any data communication means (such as a wired connection, NFC, Bluetooth, WiFi, or a cellular mobile network), to another preset device in the system acting as the receiver, i.e., a synchronization terminal, which then performs subsequent processing. Optionally, the synchronization terminal may be a preset server, generally deployed in the cloud as a data processing and storage center capable of storing and distributing the composite expression identifier. The receivers of the distribution are terminal devices, whose owners or operators may be the current user, the chat object, the administrator of the expression library of a chat module in a social platform or application, supervisory personnel of that chat module, and so on.
As a further example, for an application scenario in which the method of this embodiment is executed on a system formed of multiple devices, the composite expression identifier may be sent directly, through any data communication means, to a preset terminal device, which may be one or more of those listed in the preceding paragraph.
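The three output modes described above (local display, synchronization via a server, and direct send to a terminal) can be sketched as a dispatcher; the mode names and the callable-based transport are assumptions made for illustration:

```python
def output_composite(composite, mode: str, send=None):
    """Dispatch a composite expression identifier according to the output mode.

    "display": show on the current device's display component.
    "sync":    forward to a synchronization terminal (server) for distribution.
    "direct":  send straight to a preset terminal device."""
    if mode == "display":
        return ("display", composite)
    if mode in ("sync", "direct"):
        if send is None:
            raise ValueError(f"{mode} mode requires a send callable")
        send(composite)  # transport (Bluetooth, WiFi, cellular, ...) hidden behind `send`
        return (mode, composite)
    raise ValueError(f"unknown output mode: {mode}")
```

Hiding the transport behind a callable keeps the dispatch logic independent of whether the link is wired, NFC, Bluetooth, WiFi, or cellular.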
From the above, it can be seen that the chat expression display method of the embodiment of the present application includes: receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, wherein the operation candidate is used to re-edit the received chat expression identifier; and generating a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate. On the basis of the chat expression identifiers sent by chat objects, the application increases interaction between expression identifiers by generating corresponding composite expression identifiers according to the corresponding operation instructions, thereby improving the expressive effect when users communicate with expression identifiers and the user experience of chat communication.
It should be noted that, the method of the embodiment of the present application may be performed by a single device, for example, a terminal or a server. The method of the embodiment of the application can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the method of an embodiment of the present application, the devices interacting with each other to accomplish the method. The terminals may include various types of user terminals such as notebook computers, tablet computers, desktop computers, set-top boxes, mobile devices (e.g., mobile phones, portable music players, personal digital assistants, dedicated messaging devices, portable gaming devices), or combinations of any two or more of these data processing devices.
It should be noted that the foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In an alternative exemplary embodiment, the operation candidate includes an expression fit candidate; the generating a composite expression identifier based on the chat expression identifier comprises the following steps: and inserting a preset expression mark at one side of the chat expression mark to generate a composite expression mark comprising at least two expression marks.
In this embodiment, when the operation candidate is an expression fitting candidate, the user may generate the composite expression identifier by inserting a preset expression identifier on the basis of the chat expression identifier. As shown in fig. 3, the preset expression identifier is inserted at one side of the chat expression identifier; the insertion position may be the left side or the right side of the chat expression identifier, etc. In a specific embodiment, in a common chat window, text, expressions and other information sent by the chat object are arranged on the left side of the graphical user interface (i.e., the chat window), while the content sent by the user is generally arranged on the right side. Accordingly, the chat expression identifier in the composite expression identifier may be arranged on the left, and the newly added expression identifier on the right. Of course, the specific position may be set according to the specific application scenario.
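The side-by-side layout described above, with the received identifier kept on the left and the newly attached identifier on the right, can be sketched as follows; the function name and string placeholders are illustrative assumptions only.

```python
def insert_beside(chat_expression, preset, side="right"):
    """Place the preset identifier on one side of the received identifier.

    By chat-window convention the received identifier stays on the
    left (sender side) and the newly added one goes on the right.
    """
    if side == "right":
        return [chat_expression, preset]
    return [preset, chat_expression]

layout = insert_beside("monkey", "thumbs_up")                 # received left, new right
layout_left = insert_beside("monkey", "thumbs_up", side="left")  # alternative position
```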
In an alternative exemplary embodiment, the graphical user interface includes an expression candidate region including a plurality of candidate expression identifiers therein; the method further comprises: in response to a selection operation of the user for a first candidate expression identifier, replacing the preset expression identifier with the first candidate expression identifier determined to be selected by the user through the selection operation.
In this embodiment, since the composite expression identifier is generally generated by adding an expression identifier, the added expression identifier may be the expression identifier of the template itself or a default expression identifier (i.e., the preset expression identifier), and the user may also make a custom selection. As shown in fig. 3, after the user clicks the option of attaching an expression (labeled sticker in fig. 3), a setting frame may pop up directly, which includes the chat expression identifier and a preset expression identifier; the preset expression identifier may be a fixed expression identifier, an expression identifier randomly selected from the expression library of the current user, or simply represented by a frame body. Then, the expression library of the user may pop up directly to form an expression candidate region, in which all the expression identifiers of the user's expression library may be arranged; these expression identifiers are the candidate expression identifiers. When the user selects one of them, the selected expression identifier, as the first candidate expression identifier, directly replaces the original preset expression identifier, thereby generating the composite expression identifier.
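The replacement of the default preset identifier by the user's chosen first candidate can be sketched as follows, assuming for illustration only that the composite is modeled as an ordered list with the preset at a known index.

```python
def replace_preset(composite, selected, preset_index=1):
    """Swap the default (preset) identifier for the user's selected candidate.

    `preset_index` assumes the layout [received identifier, preset identifier].
    """
    updated = list(composite)        # leave the original composite untouched
    updated[preset_index] = selected
    return updated

composite = ["received_smile", "default_preset"]
composite = replace_preset(composite, "user_choice")
```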
In an alternative exemplary embodiment, after inserting a preset expression identifier on one side of the chat expression identifier, the method further includes: in response to receiving a drag operation on the preset expression identifier, adjusting the position, size and/or rotation angle of the preset expression identifier according to the drag operation. The generated composite expression identifier is therefore more personalized, which improves the user experience.
In this embodiment, the composite expression identifier may be adjusted, either as a whole or per expression identifier within it. In a specific embodiment, in order to make the interactivity more obvious, the chat expression identifier sent by the chat object is generally left unadjusted, so that after receiving the composite expression identifier the chat object can quickly recognize which expression identifier the composite responds to. Generally, only the newly added preset expression identifier, or the expression identifier selected by the user for addition, is adjusted. The preset expression identifier may be adjusted through a drag operation from an external device such as a touch control device or a mouse, and properties such as its position, size and/or rotation angle may be adjusted.
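The drag-based adjustment of only the newly inserted identifier might be modeled as a transform update along these lines; the `Transform` type, field names and clamping value are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Transform:
    """Position, size and rotation of one identifier inside the composite."""
    x: float = 0.0
    y: float = 0.0
    scale: float = 1.0
    rotation: float = 0.0  # degrees

def apply_drag(transform, dx=0.0, dy=0.0, scale_delta=0.0, rotation_delta=0.0):
    """Adjust only the newly inserted identifier; the received one stays fixed
    so the chat object can still recognise which expression is replied to."""
    return Transform(
        x=transform.x + dx,
        y=transform.y + dy,
        scale=max(0.1, transform.scale + scale_delta),   # clamp so it stays visible
        rotation=(transform.rotation + rotation_delta) % 360,
    )

t = apply_drag(Transform(), dx=12, dy=-4, scale_delta=0.5, rotation_delta=370)
```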
In an alternative exemplary embodiment, the generating a composite expression signature including at least two expression signatures further includes: generating and displaying hidden options corresponding to each expression identifier; and responding to the selection operation of the hidden options, and carrying out hiding processing on the expression identifiers corresponding to the hidden options in the composite expression identifiers.
In this embodiment, if the chat expression identifier sent by the chat object is itself a composite expression identifier, it may already include multiple expression identifiers. The chat expression identifier can still be attached to again; however, if it already contains too many expression identifiers, the newly generated composite expression identifier becomes too cluttered, which is inconvenient for expressing the user's meaning. Therefore, to keep the expression concise, part of the expression identifiers in the chat expression identifier can be hidden by setting a hidden option: each expression identifier in the chat expression identifier may correspond to one hidden option, and each hidden option controls whether the corresponding expression identifier is hidden.
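The per-identifier hidden option could be modeled as a visibility table, roughly as follows; the data layout is an assumption, not the disclosed design.

```python
def toggle_hidden(visibility, index):
    """Flip the hidden option of one identifier inside the composite."""
    updated = dict(visibility)
    updated[index] = not updated.get(index, True)
    return updated

def visible_parts(parts, visibility):
    """Render only the identifiers whose hidden option is off."""
    return [p for i, p in enumerate(parts) if visibility.get(i, True)]

parts = ["smile", "thumbs_up", "heart"]
vis = {0: True, 1: True, 2: True}
vis = toggle_hidden(vis, 1)         # hide the second identifier
shown = visible_parts(parts, vis)
```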
In an alternative exemplary embodiment, the operation candidates include recording candidates; the generating a composite expression identifier based on the chat expression identifier comprises the following steps: determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression characteristic data of a face image of a user; and generating a first facial makeup map according to the first expression characteristic data, and mapping the first facial makeup map to the model to generate the composite expression mark.
In this embodiment, the chat expression identifier may also be a dynamic recording expression identifier, for example virtual expressions such as the "pseudo-me expression" and AR expression of the specific embodiment. These expression identifiers first provide some base models, for example "monkey head" or "rabbit head" models; the camera function of the terminal is then used to obtain the user's real-time facial expression features; the recorded facial expression features are used to produce a map corresponding to the base model; and finally the map is mapped onto the base model, so that the base model performs facial actions consistent with or similar to the facial expression just recorded by the user, thereby generating the corresponding expression identifier. Furthermore, as shown in fig. 6, in a specific embodiment, if the chat object sends a dynamic recording expression identifier of "monkey head" as shown in the figure, the system can directly identify the model. When the user selects the recording candidate function, the system can directly call the model and record a face image to obtain feature data of the face image, namely the first expression feature data, which directly reflects the expression changes of the current user. A facial makeup map corresponding to the "monkey head" base model, namely the first facial makeup map, can then be generated from the first expression feature data. Finally, the first facial makeup map is mapped onto the base model to generate the final real-time dynamic expression, completing the production of the composite expression identifier.
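The record, map and compose pipeline described above might be sketched as follows; the model names, feature fields and function signatures are all illustrative assumptions rather than the disclosed implementation.

```python
def determine_model(chat_expression):
    """Resolve the base model behind a dynamic recording identifier."""
    models = {"monkey_expression": "monkey_head", "rabbit_expression": "rabbit_head"}
    return models[chat_expression]

def capture_expression_features(frames):
    """Stand-in for camera capture: one feature dict per recorded frame."""
    return [{"frame": i, "mouth_open": f} for i, f in enumerate(frames)]

def generate_facemap(features, model):
    """Build the per-frame facial makeup map for the base model."""
    return [{"model": model, **f} for f in features]

def compose(model, facemap):
    """Map the facial makeup map onto the model to get the composite identifier."""
    return {"model": model, "animation": facemap}

model = determine_model("monkey_expression")
features = capture_expression_features([0.1, 0.8])   # two captured frames
composite = compose(model, generate_facemap(features, model))
```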
Of course, in a specific embodiment, if the user wants to send a dynamic recording expression identifier directly, a base model can be selected directly in a similar manner, the user's facial features obtained, and a map generated and mapped onto the base model, finally producing a real-time dynamic expression identifier.
In an alternative exemplary embodiment, the acquiring the first expression feature data of the face image of the user includes: and responding to a recording instruction, and continuously acquiring the first expression characteristic data within the duration of the recording instruction or within the time indicated by the recording instruction.
In this embodiment, since recording the first expression feature data is a continuous process, recording may proceed in either of two ways once started. A recording time may be set, for example 10 seconds, after which face capture runs for those 10 seconds; or recording may be driven by holding the recording button, starting when the user presses the button and ending when the user releases it, with the user keeping the button pressed for the entire recording. Naturally, the pressing may be performed by touch control or by an external device such as a mouse.
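Both recording modes (a fixed duration, or recording while the button stays pressed) can be captured by one loop with two stop conditions, roughly as follows; the tick-based timing is a simplifying assumption for illustration.

```python
def record(frame_source, fixed_duration=None, button_pressed=None):
    """Collect expression feature frames either for a fixed duration
    or for as long as the record button is held down."""
    frames = []
    for t, frame in enumerate(frame_source):
        if fixed_duration is not None and t >= fixed_duration:
            break  # timed mode: stop after the configured duration
        if button_pressed is not None and not button_pressed(t):
            break  # hold mode: stop when the user releases the button
        frames.append(frame)
    return frames

source = ["f0", "f1", "f2", "f3", "f4"]
timed = record(source, fixed_duration=3)               # e.g. a 3-tick recording
held = record(source, button_pressed=lambda t: t < 2)  # button released after 2 ticks
```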
In an alternative exemplary embodiment, the method further comprises: and continuously acquiring sound data of the user so as to load the sound data into the compound expression mark. Therefore, the composite expression mark can send out real-time voice of the user, and user experience of expression use is improved.
In this embodiment, the user's voice may be collected synchronously during recording and added to the composite expression identifier, so that the composite expression identifier conveys not only the user's real-time expression but also the user's real-time voice, improving the user experience and interactivity of expression use.
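Attaching the synchronously captured voice track to the composite expression identifier might be as simple as the following sketch; the dictionary layout is an assumption for illustration.

```python
def load_sound(composite, sound_samples):
    """Attach the synchronously captured voice track to the composite identifier."""
    return {**composite, "audio": list(sound_samples)}  # keep the original intact

composite = {"model": "monkey_head", "animation": ["frame0", "frame1"]}
with_voice = load_sound(composite, [0.1, -0.2, 0.05])
```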
In an alternative exemplary embodiment, the chat expression identification is determined by: responding to the selection operation of the chat object on at least one second candidate expression mark, and determining the selected second candidate expression mark as a target expression mark; responding to the editing operation of the chat object on the target expression mark, and collecting second expression characteristic data of the chat object; and generating a second facial makeup map according to the second expression characteristic data, and mapping the second facial makeup map to a model of a target expression mark to generate the chat expression mark.
In this embodiment, in chat communication involving expression identifiers, the chat object is necessarily provided with an expression identifier library and may select a corresponding expression identifier from it to send. The receiving system identifies the expression identifier, and this identification can be based on the system's own expression identifier library; the system can thus regard the chat object as selecting a second candidate expression identifier, and the selected second candidate expression identifier serves as the target expression identifier. Then, if the target expression identifier is a static or dynamic picture expression, the receiving system can directly generate the operation candidate of the fitting candidate. If the target expression identifier is a real-time dynamic expression identifier, the chat object must perform editing operations such as further expression recording on the basis of the target expression identifier, i.e., the second expression feature data of the chat object is collected in response to the editing operation of the chat object on the target expression identifier. The facial makeup map of the base model corresponding to the target expression identifier, namely the second facial makeup map, is then generated from the second expression feature data. Finally, the second facial makeup map is mapped onto the base model to generate the final real-time dynamic expression, completing the production of the chat expression identifier.
In an alternative exemplary embodiment, the at least one second candidate expression identity is determined by: acquiring text information input by the chat object in a session input box; and identifying keyword information in the text information, and determining at least one second candidate expression identifier matched with the keyword information according to the keyword information.
In this embodiment, since the expression identifiers in the user's expression library are varied, and the number of expression identifiers in use grows over time, how the user is to choose if all expression identifiers are displayed is itself a problem to be solved. Expression identifiers can be matched and screened by means of word association. In a specific embodiment, a corresponding keyword can be added to each expression identifier, and each expression identifier can be associated with several keywords; when the user makes a selection, the expression identifiers associated with a keyword can be displayed by inputting that keyword (i.e., inputting text information), so that the expression identifiers are screened and filtered, making selection convenient for the user. As shown in fig. 5, after the user inputs "good happiness" in the session input box, the expression identifiers in the expression library are searched, and only the expression identifiers associated with "good happiness" are displayed for the user to select. These expression identifiers may be still-picture expression identifiers, moving-picture expression identifiers, custom expression identifiers, and so on. Of course, in a specific embodiment, if the user wants to send an expression identifier directly, text information (containing a keyword) can likewise be input in the session input box of the graphical user interface, and the expression identifiers related to that keyword pop up directly.
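The keyword-association screening could be sketched as a reverse index from keywords to expression identifiers, roughly as follows; the index contents and names are invented examples, not data from the disclosure.

```python
def match_candidates(text, keyword_index):
    """Return the expression identifiers whose keywords appear in the input text."""
    return sorted({expr
                   for keyword, exprs in keyword_index.items()
                   if keyword in text          # keyword recognised in the text
                   for expr in exprs})

# One expression identifier may be associated with several keywords and vice versa.
index = {
    "happy": ["grin_sticker", "dancing_cat"],
    "sad": ["crying_face"],
}
candidates = match_candidates("so happy today", index)
```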
In an alternative exemplary embodiment, the method further comprises: acquiring text information input by a user, and adding the text information to the composite expression identifier. Text is thus added to the composite expression identifier to generate a personalized expression.
In this embodiment, after the composite expression identifier is generated, a text description may also be added to it. After the text information input by the user is obtained, it can be added to the composite expression identifier in the form of a picture, and its size, position, rotation angle and the like can be adjusted during addition, so that the composite expression identifier meets the user's personalized requirements.
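Adding the caption as an adjustable picture-like overlay, with the position, size and rotation described above, might look as follows; the overlay fields are illustrative assumptions.

```python
def add_text(composite, text, x=0, y=0, size=12, rotation=0):
    """Add the user's caption as an adjustable overlay on the composite."""
    overlay = {"text": text, "x": x, "y": y, "size": size, "rotation": rotation}
    return {**composite, "overlays": composite.get("overlays", []) + [overlay]}

composite = {"parts": ["smile", "thumbs_up"]}
captioned = add_text(composite, "nice!", x=5, y=20, size=16)
```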
Based on the same conception, the application also provides chat expression display equipment corresponding to the method of any embodiment.
Referring to fig. 7, the chat expression presentation apparatus includes:
a determining module 210, configured to receive a chat expression identifier sent by a chat object, and display the chat expression identifier and an operation candidate for the chat expression identifier in a graphical user interface, where the operation candidate is used to edit the received chat expression identifier again;
And the generating module 220 is configured to generate a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing the embodiments of the present application.
The device of the foregoing embodiment is configured to implement the corresponding chat expression display method in the foregoing embodiment, and has the beneficial effects of the corresponding chat expression display method embodiment, which is not described herein again.
In an alternative exemplary embodiment, the operation candidate includes an expression fit candidate;
the generating module 220 is further configured to:
and inserting a preset expression mark at one side of the chat expression mark to generate a composite expression mark comprising at least two expression marks.
In an alternative exemplary embodiment, the graphical user interface includes an expression candidate region including a plurality of candidate expression identities therein;
the generating module 220 is further configured to:
and responding to a selection operation aiming at a first candidate expression mark, and replacing the preset expression mark with the first candidate expression mark determined through the selection operation.
In an alternative exemplary embodiment, the generating module 220 is further configured to:
and in response to receiving a drag operation on the preset expression mark, adjusting the position, the size and/or the rotation angle of the preset expression mark according to the drag operation.
In an alternative exemplary embodiment, the generating module 220 is further configured to:
generating and displaying hidden options corresponding to each expression identifier;
and responding to the selection operation of the hidden options, and carrying out hiding processing on the expression identifiers corresponding to the hidden options in the composite expression identifiers.
In an alternative exemplary embodiment, the operation candidates include recording candidates;
the generating module 220 is further configured to:
determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression characteristic data of a face image of a user; and generating a first facial makeup map according to the first expression characteristic data, and mapping the first facial makeup map to the model to generate the composite expression mark.
In an alternative exemplary embodiment, the generating module 220 is further configured to:
and responding to a recording instruction, and continuously acquiring the first expression characteristic data within the duration of the recording instruction or within the time indicated by the recording instruction.
In an alternative exemplary embodiment, the generating module 220 is further configured to:
and continuously acquiring sound data of the user so as to load the sound data into the compound expression mark.
In an alternative exemplary embodiment, the chat expression identification is determined by:
responding to the selection operation of the chat object on at least one second candidate expression mark, and determining the selected second candidate expression mark as a target expression mark;
responding to the editing operation of the chat object on the target expression mark, and collecting second expression characteristic data of the chat object;
and generating a second facial makeup map according to the second expression characteristic data, and mapping the second facial makeup map to a model of a target expression mark to generate the chat expression mark.
In an alternative exemplary embodiment, the at least one second candidate expression identity is determined by:
acquiring text information input by the chat object in a session input box;
and identifying keyword information in the text information, and determining at least one second candidate expression identifier matched with the keyword information according to the keyword information.
In an alternative exemplary embodiment, the generating module 220 is further configured to:
and acquiring text information input by a user, and adding the text information into the composite expression identifier.
Based on the same conception, the application also provides an electronic device corresponding to the method of any embodiment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the chat expression display method of any embodiment is realized when the processor executes the program.
Fig. 8 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1020 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in the memory 1020 and executed by the processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the chat expression display method corresponding to any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same conception, the application also provides a non-transitory computer readable storage medium corresponding to the method of any embodiment, wherein the non-transitory computer readable storage medium stores computer instructions for causing the computer to execute the chat expression display method according to any embodiment.
The computer-readable media of the present embodiments include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to execute the chat expression presentation method described in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the application, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the application as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent substitutions, improvements, and the like, which are within the spirit and principles of the embodiments of the application, are intended to be included within the scope of the application.

Claims (13)

1. A chat expression presentation method, comprising:
receiving a chat expression identifier sent by a chat object, and displaying the chat expression identifier and an operation candidate aiming at the chat expression identifier in a graphical user interface, wherein the operation candidate is used for editing the received chat expression identifier again, and the operation candidate at least comprises an expression fitting candidate;
generating a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate;
When the operation candidate is the expression fitting candidate, generating a composite expression identifier based on the chat expression identifier, including:
and inserting a preset expression mark at one side of the chat expression mark to generate a composite expression mark comprising at least two expression marks.
2. The method of claim 1, wherein the graphical user interface comprises an expression candidate region comprising a plurality of candidate expression identities therein;
the method comprises the following steps:
and responding to a selection operation aiming at a first candidate expression mark, and replacing the preset expression mark with the first candidate expression mark determined through the selection operation.
3. The method of claim 1, wherein after inserting a preset expression signature on one side of the chat expression signature, further comprising:
and in response to receiving a drag operation on the preset expression mark, adjusting the position, the size and/or the rotation angle of the preset expression mark according to the drag operation.
4. The method of claim 1, wherein the generating a composite expression signature comprising at least two expression signatures, further comprises:
Generating and displaying hidden options corresponding to each expression identifier;
and responding to the selection operation of the hidden options, and carrying out hiding processing on the expression identifiers corresponding to the hidden options in the composite expression identifiers.
5. The method of claim 1, wherein the operational candidates comprise recording candidates;
the generating a composite expression identifier based on the chat expression identifier comprises the following steps:
determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression characteristic data of a face image of a user; and generating a first facial makeup map according to the first expression characteristic data, and mapping the first facial makeup map to the model to generate the composite expression mark.
6. The method of claim 5, wherein the acquiring the first expression feature data of the face image of the user comprises:
and responding to a recording instruction, and continuously acquiring the first expression characteristic data within the duration of the recording instruction or within the time indicated by the recording instruction.
7. The method of claim 6, wherein the method further comprises:
And continuously acquiring sound data of the user so as to load the sound data into the compound expression mark.
8. The method of claim 1, wherein the chat expression identifier is determined by:
in response to a selection operation by the chat object on at least one second candidate expression identifier, determining the selected second candidate expression identifier as a target expression identifier;
in response to an editing operation by the chat object on the target expression identifier, collecting second expression feature data of the chat object;
and generating a second facial makeup map according to the second expression feature data, and mapping the second facial makeup map onto a model of the target expression identifier to generate the chat expression identifier.
9. The method of claim 8, wherein the at least one second candidate expression identifier is determined by:
acquiring text information input by the chat object in a session input box;
and identifying keyword information in the text information, and determining, according to the keyword information, at least one second candidate expression identifier matching the keyword information.
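A minimal Python sketch of the keyword-matching step in claim 9 (purely illustrative; the keyword table and identifier names are invented, and a production system would likely use tokenization rather than substring search):

```python
# Hypothetical mapping from keywords to expression identifier IDs.
EMOTE_KEYWORDS = {
    "haha": ["emote_laugh", "emote_grin"],
    "cry": ["emote_sob"],
    "angry": ["emote_rage"],
}


def match_candidates(text: str) -> list[str]:
    """Return candidate expression identifiers whose keywords appear in the input text."""
    lowered = text.lower()
    matched: list[str] = []
    for keyword, emotes in EMOTE_KEYWORDS.items():
        if keyword in lowered:
            for emote in emotes:
                if emote not in matched:  # preserve order, avoid duplicates
                    matched.append(emote)
    return matched
```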
10. The method according to claim 1, wherein the method further comprises:
acquiring text information input by the user, and adding the text information to the composite expression identifier.
11. A chat expression presentation apparatus, comprising:
a determination module, configured to determine a chat expression identifier sent by a chat object, and to display the chat expression identifier and an operation candidate for the chat expression identifier in a graphical user interface, wherein the operation candidate is used for re-editing the received chat expression identifier and comprises at least an expression fitting candidate;
a generation module, configured to generate a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate;
wherein, when the operation candidate is the expression fitting candidate, generating the composite expression identifier based on the chat expression identifier comprises:
inserting a preset expression identifier at one side of the chat expression identifier to generate a composite expression identifier comprising at least two expression identifiers.
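An illustrative Python sketch (function and parameter names are hypothetical, not from the patent) of the expression-fitting step: inserting a preset expression identifier at one side of a received chat expression identifier to form a composite of at least two identifiers:

```python
def fit_expression(chat_identifier: str, preset_identifier: str, side: str = "right") -> list[str]:
    """Insert a preset expression identifier beside the received chat expression
    identifier, yielding a composite containing at least two identifiers."""
    if side == "right":
        return [chat_identifier, preset_identifier]
    if side == "left":
        return [preset_identifier, chat_identifier]
    raise ValueError("side must be 'left' or 'right'")
```

The ordered list represents the left-to-right display order of the composite identifier in the chat interface.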
12. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 10.
13. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to implement the method of any one of claims 1 to 10.
CN202210602226.3A 2022-05-30 2022-05-30 Chat expression display method, device, electronic device and storage medium Active CN114880062B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210602226.3A CN114880062B (en) 2022-05-30 2022-05-30 Chat expression display method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210602226.3A CN114880062B (en) 2022-05-30 2022-05-30 Chat expression display method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN114880062A CN114880062A (en) 2022-08-09
CN114880062B true CN114880062B (en) 2023-11-14

Family

ID=82679050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210602226.3A Active CN114880062B (en) 2022-05-30 2022-05-30 Chat expression display method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114880062B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115269886A (en) * 2022-08-15 2022-11-01 北京字跳网络技术有限公司 Media content processing method, device, equipment and storage medium
CN116996467A (en) * 2022-08-16 2023-11-03 腾讯科技(深圳)有限公司 Interactive expression sending method and device, computer medium and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777891A (en) * 2014-02-26 2014-05-07 全蕊 Method for sending message by inserting an emoticon in message ending
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 A kind of picture countenance synthesis method and terminal
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN109472849A (en) * 2017-09-07 2019-03-15 腾讯科技(深圳)有限公司 Method, apparatus, terminal device and the storage medium of image in processing application
CN110336733A (en) * 2019-04-30 2019-10-15 上海连尚网络科技有限公司 A kind of method and apparatus that expression packet is presented
CN110780955A (en) * 2019-09-05 2020-02-11 连尚(新昌)网络科技有限公司 Method and equipment for processing emoticon message
CN111476154A (en) * 2020-04-03 2020-07-31 深圳传音控股股份有限公司 Expression package generation method, device, equipment and computer readable storage medium
CN112270733A (en) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 AR expression package generation method and device, electronic equipment and storage medium
CN112463003A (en) * 2020-11-24 2021-03-09 维沃移动通信有限公司 Picture display method and device, electronic equipment and storage medium
CN112866475A (en) * 2020-12-31 2021-05-28 维沃移动通信有限公司 Image sending method and device and electronic equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777891A (en) * 2014-02-26 2014-05-07 全蕊 Method for sending message by inserting an emoticon in message ending
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 A kind of picture countenance synthesis method and terminal
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN109472849A (en) * 2017-09-07 2019-03-15 腾讯科技(深圳)有限公司 Method, apparatus, terminal device and the storage medium of image in processing application
CN110336733A (en) * 2019-04-30 2019-10-15 上海连尚网络科技有限公司 A kind of method and apparatus that expression packet is presented
WO2020221104A1 (en) * 2019-04-30 2020-11-05 上海连尚网络科技有限公司 Emoji packet presentation method and equipment
CN110780955A (en) * 2019-09-05 2020-02-11 连尚(新昌)网络科技有限公司 Method and equipment for processing emoticon message
CN111476154A (en) * 2020-04-03 2020-07-31 深圳传音控股股份有限公司 Expression package generation method, device, equipment and computer readable storage medium
CN112270733A (en) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 AR expression package generation method and device, electronic equipment and storage medium
CN112463003A (en) * 2020-11-24 2021-03-09 维沃移动通信有限公司 Picture display method and device, electronic equipment and storage medium
CN112866475A (en) * 2020-12-31 2021-05-28 维沃移动通信有限公司 Image sending method and device and electronic equipment

Also Published As

Publication number Publication date
CN114880062A (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN111243632B (en) Multimedia resource generation method, device, equipment and storage medium
CN109819313B (en) Video processing method, device and storage medium
CN109120866B (en) Dynamic expression generation method and device, computer readable storage medium and computer equipment
CN107172497B (en) Live broadcasting method, apparatus and system
US11653069B2 (en) Subtitle splitter
US11876770B2 (en) UI and devices for ranking user generated content
CN114880062B (en) Chat expression display method, device, electronic device and storage medium
EP3195601B1 (en) Method of providing visual sound image and electronic device implementing the same
CN111246300B (en) Method, device and equipment for generating clip template and storage medium
CN110061900B (en) Message display method, device, terminal and computer readable storage medium
CN108270794B (en) Content distribution method, device and readable medium
CN104133956A (en) Method and device for processing pictures
CN111711838B (en) Video switching method, device, terminal, server and storage medium
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
CN112445395A (en) Music fragment selection method, device, equipment and storage medium
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN113747199A (en) Video editing method, video editing apparatus, electronic device, storage medium, and program product
CN111628925B (en) Song interaction method, device, terminal and storage medium
CN112417180A (en) Method, apparatus, device and medium for generating album video
WO2024022473A1 (en) Method for sending comment in live-streaming room, method for receiving comment in live-streaming room, and related device
CN116016817A (en) Video editing method, device, electronic equipment and storage medium
CN113377976B (en) Resource searching method and device, computer equipment and storage medium
KR102192027B1 (en) Method for providing contents based on inference engine and electronic device using the same
CN107135087B (en) Information interaction method and terminal and computer storage medium
CN114489559A (en) Audio playing method, audio playing processing method and audio playing processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant