CN116246310A - Method and device for generating target conversation expression - Google Patents

Info

Publication number
CN116246310A
CN116246310A CN202111491880.3A
Authority
CN
China
Prior art keywords
expression
conversation
original
conversational
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111491880.3A
Other languages
Chinese (zh)
Inventor
余自强
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111491880.3A priority Critical patent/CN116246310A/en
Priority to PCT/CN2022/124784 priority patent/WO2023103577A1/en
Publication of CN116246310A publication Critical patent/CN116246310A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method for generating a target conversational expression. The method comprises the following steps: presenting at least two original conversational expressions in an original conversational expression area, wherein the at least two original conversational expressions comprise a first original conversational expression and a second original conversational expression; receiving first original conversational expression selection information, wherein the first original conversational expression selection information indicates that the first original conversational expression is selected for obtaining the target conversational expression; receiving second original conversational expression selection information, wherein the second original conversational expression selection information indicates that the second original conversational expression is selected for obtaining the target conversational expression; and presenting the target conversational expression, wherein the target conversational expression comprises some of the conversational expression elements of the first original conversational expression and some of the conversational expression elements of the second original conversational expression.

Description

Method and device for generating target conversation expression
Technical Field
The present application relates to the field of image processing, and in particular, to a method and apparatus for generating a target session expression, and a corresponding computing device, storage medium, and computer program product.
Background
In interpersonal communications, especially in software-based non-face-to-face communications, images can more fully express emotion than monotonous text because they are more vivid. The image for expressing emotion may be referred to as a conversational expression. Conversational expressions are increasingly being applied in software-based communication scenarios to express various emotions of users.
Currently, the conversational expressions available to users are typically those built into a software system. Sometimes a third party may also produce conversational expressions for use in a particular software system. However, the number of such conversational expressions is relatively small, so a user's options for expressing emotion through them may be limited. For this reason, a user has to collect a large variety of conversational expressions in advance, yet the actual utilization of these collected expressions is often low. Moreover, even if a user has collected a large number of conversational expressions in advance, there is no guarantee that any of them will match what the user has in mind at the moment of need. And when no suitable conversational expression exists, creating a new one is difficult for the user: it may involve importing relevant material, adjusting the conversational expression content, exporting the conversational expression, adding it to specific software, and so on. The operations are difficult, the steps tedious, and the process slow, which reduces the user's willingness to use conversational expressions.
Disclosure of Invention
In view of the above, the present disclosure provides methods and apparatus, computing devices, and computer storage media for generating target conversational expressions, with the aim of overcoming some or all of the above-mentioned drawbacks, as well as other possible drawbacks.
According to one aspect of the present application, a method of generating a target conversational expression is provided. The method comprises the following steps: presenting at least two original conversational expressions in an original conversational expression area, wherein the at least two original conversational expressions comprise a first original conversational expression and a second original conversational expression; receiving first original conversational expression selection information, wherein the first original conversational expression selection information indicates that the first original conversational expression is selected for obtaining the target conversational expression; receiving second original conversational expression selection information, wherein the second original conversational expression selection information indicates that the second original conversational expression is selected for obtaining the target conversational expression; and presenting the target conversational expression, wherein the target conversational expression comprises some of the conversational expression elements of the first original conversational expression and some of the conversational expression elements of the second original conversational expression.
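As a rough, non-normative illustration of this combination, the target can be modeled as taking elements of some categories from one original and the remaining categories from the other. The dictionary representation and the `take_from_first` policy below are assumptions made for the sketch, not claim language:

```python
def combine_expressions(first, second, take_from_first):
    """Build a target expression containing some conversational expression
    elements of `first` and some of `second`.

    `take_from_first` is the set of element categories kept from the first
    original; elements of all other categories come from the second.
    """
    elements = [e for e in first["elements"]
                if e["category"] in take_from_first]
    elements += [e for e in second["elements"]
                 if e["category"] not in take_from_first]
    return {"elements": elements}
```

For example, taking the eyes from the first original and everything else from the second yields one candidate target expression.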
In some embodiments, the method further comprises: after receiving the first original conversation expression selection information and the second original conversation expression selection information, presenting alternative conversation expressions, wherein each alternative conversation expression comprises a different combination of conversation expression elements of the first original conversation expression and conversation expression elements of the second original conversation expression; alternative conversation expression selection information is received, wherein the alternative conversation expression selection information indicates that one of the alternative conversation expressions is selected as the target conversation expression.
In some embodiments, the conversational element of the first original conversational expression includes a first conversational element, the conversational element of the second original conversational expression includes a second conversational element and a third conversational element, the first conversational element and the second conversational element belong to a same conversational element category, and the conversational element of the target conversational expression includes the first conversational element and the third conversational element; wherein the method further comprises: after receiving the second original conversation expression selection information, determining a relative arrangement relation of the second conversation expression element and the third conversation expression element in the second original conversation expression; after receiving the first original conversation expression selection information, determining arrangement parameters of the first conversation expression element in the first original conversation expression; and determining the arrangement parameters of the third session expression element in the target session expression based on the relative arrangement relation of the second session expression element and the third session expression element in the second original session expression and the arrangement parameters of the first session expression element so as to obtain the target session expression.
In some embodiments, the relative arrangement of the second conversational expression element and the third conversational expression element within the second original conversational expression includes: the size ratio of the third conversational expression element to the second conversational expression element, the degree of tilt of the third conversational expression element relative to the second conversational expression element, and the position of the geometric center of the third conversational expression element relative to the geometric center of the second conversational expression element. The arrangement parameters of the first conversational expression element within the first original conversational expression include: the size of the first conversational expression element, the direction of the first conversational expression element, and the position of the geometric center of the first conversational expression element within the first original conversational expression. Determining the arrangement parameters of the third conversational expression element within the target conversational expression then includes: determining the size of the third conversational expression element in the target conversational expression based on the size ratio of the third conversational expression element to the second conversational expression element and the size of the first conversational expression element; determining the direction of the third conversational expression element in the target conversational expression based on the degree of tilt of the third conversational expression element relative to the second conversational expression element and the direction of the first conversational expression element; and determining the position of the geometric center of the third conversational expression element in the target conversational expression based on the position of the geometric center of the third conversational expression element relative to the geometric center of the second conversational expression element and the position of the geometric center of the first conversational expression element in the first original conversational expression.
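For concreteness, the arithmetic implied by the embodiment above can be sketched as follows. The `Placement` container and the simple affine relations (size ratio, tilt offset, center offset) are assumptions made for illustration; a fuller implementation might, for instance, also scale the relative offset when the two source expressions differ in overall size, which the text does not specify:

```python
from dataclasses import dataclass

@dataclass
class Placement:
    size: float      # e.g. bounding-box height in pixels
    angle: float     # orientation in degrees
    center: tuple    # (x, y) of the geometric center

def place_third_element(second: Placement, third: Placement,
                        first: Placement) -> Placement:
    """Derive the placement of the third element in the target expression
    from (a) its arrangement relative to the second element in the second
    original expression and (b) the placement of the first element in the
    first original expression."""
    size_ratio = third.size / second.size          # relative size
    tilt = third.angle - second.angle              # relative tilt
    offset = (third.center[0] - second.center[0],  # relative position
              third.center[1] - second.center[1])
    return Placement(
        size=size_ratio * first.size,
        angle=first.angle + tilt,
        center=(first.center[0] + offset[0],
                first.center[1] + offset[1]),
    )
```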
In some embodiments, the first original conversational expression and the target conversational expression are dynamic conversational expressions, each comprising a plurality of expression frames, and determining the arrangement parameters of the third conversational expression element within the target conversational expression comprises: determining an arrangement parameter of the third conversational expression element in each of the plurality of expression frames of the target conversational expression.
In some embodiments, the first original conversational expression, the second original conversational expression, and the target conversational expression are conversational expressions for expressing emotions, whose conversational expression elements include five sense elements. The method further comprises: after receiving the first original conversational expression selection information, determining whether a decorative element exists in the first original conversational expression, wherein the decorative element and the five sense elements belong to different conversational expression element categories; and in response to the decorative element existing in the first original conversational expression, adding the decorative element to the second original conversational expression so as to obtain the target conversational expression.
In some embodiments, the first original conversational expression, the second original conversational expression, and the target conversational expression are conversational expressions for expressing emotions, and their conversational expression elements include five sense elements. The method further comprises: after receiving the first original conversational expression selection information, determining whether a decorative element exists in the first original conversational expression, wherein the decorative element and the five sense elements belong to different conversational expression element categories; and in response to the decorative element existing in the first original conversational expression, applying the decorative element to the remaining original conversational expressions other than the first original conversational expression in the original conversational expression area to obtain decoration conversational expressions.
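The decoration-transfer behavior described above can be sketched roughly as copying any detected decorative elements onto each remaining original. The list-of-dicts element representation is an assumption for illustration:

```python
def transfer_decoration(first_expr, other_exprs):
    """If the first original expression carries decorative elements
    (hands, headwear, tears, ...), copy them onto each of the other
    original expressions to produce decoration conversational expressions.
    Returns an empty list when no decorative element is present."""
    decorations = [e for e in first_expr["elements"]
                   if e["category"] == "decoration"]
    if not decorations:
        return []
    decorated = []
    for expr in other_exprs:
        combined = dict(expr)  # shallow copy; originals stay unchanged
        combined["elements"] = expr["elements"] + decorations
        decorated.append(combined)
    return decorated
```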
In some embodiments, receiving the second original conversation expression selection information includes: receiving decoration conversation expression selection information, wherein the decoration conversation expression selection information indicates that one of the decoration conversation expressions is selected as the target conversation expression.
In some embodiments, the remaining original conversational expressions include a third original conversational expression, which is a conversational expression for expressing emotions and includes five sense elements and a decorative element. The method further comprises: changing the presentation mode of the third original conversational expression in response to the decorative element of the first original conversational expression partially overlapping the decorative element of the third original conversational expression.
In some embodiments, the method further comprises: identifying a first conversational expression element in the first original conversational expression and identifying a second conversational expression element in the second original conversational expression, wherein the first conversational expression element and the second conversational expression element belong to the same conversational expression element category; and replacing the first conversational expression element with the second conversational expression element in the first original conversational expression, or replacing the second conversational expression element with the first conversational expression element in the second original conversational expression, so as to obtain the target conversational expression.
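A minimal sketch of this same-category replacement, again using an assumed list-of-dicts representation, might look like:

```python
def swap_element(base_expr, donor_expr, category):
    """Replace the element of the given category (e.g. "mouth") in
    `base_expr` with the same-category element from `donor_expr`.
    If the donor has no element of that category, `base_expr` is
    returned unchanged."""
    donor = next((e for e in donor_expr["elements"]
                  if e["category"] == category), None)
    if donor is None:
        return base_expr
    elements = [donor if e["category"] == category else e
                for e in base_expr["elements"]]
    return {**base_expr, "elements": elements}
```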
In some embodiments, the method further comprises: for every pair of original conversational expressions in an original conversational expression library, determining a prefabricated conversational expression corresponding to that pair, wherein the prefabricated conversational expression comprises some of the conversational expression elements of each of the two original conversational expressions; determining an identification number for the prefabricated conversational expression corresponding to each pair based on the identification numbers of the two original conversational expressions; after receiving the first original conversational expression selection information and the second original conversational expression selection information, determining the identification number of the prefabricated conversational expression corresponding to the first original conversational expression and the second original conversational expression based on the identification number of the first original conversational expression and the identification number of the second original conversational expression; and, based on that identification number, determining the prefabricated conversational expression corresponding to the first original conversational expression and the second original conversational expression as the target conversational expression.
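The text does not disclose how the pair identification number is derived; one plausible scheme, shown purely as an assumption, is a deterministic, order-independent hash of the two original identification numbers, used as a key into a precomputed index:

```python
import hashlib

def pair_id(id_a: str, id_b: str) -> str:
    """Deterministic, order-independent identifier for the prefabricated
    expression built from the originals with ids id_a and id_b."""
    lo, hi = sorted((id_a, id_b))
    return hashlib.sha1(f"{lo}|{hi}".encode()).hexdigest()[:12]

def lookup_prefab(prefab_index: dict, id_a: str, id_b: str):
    """Fetch the prefabricated expression for a selected pair of
    originals, or None if the pair was not precomputed."""
    return prefab_index.get(pair_id(id_a, id_b))
```

Because the key is order-independent, selecting the two originals in either order resolves to the same prefabricated expression; a scheme that distinguishes which roles the originals play would need an order-sensitive key instead.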
In some embodiments, prior to receiving the first original conversational expression selection information, the method further includes: generating the first original conversational expression selection information in response to detecting that the first original conversational expression is long-pressed, or detecting that the first original conversational expression is moved from the original conversational expression area to a target conversational expression area for presenting the target conversational expression. And/or, prior to receiving the second original conversational expression selection information, the method further includes: generating the second original conversational expression selection information in response to detecting that the second original conversational expression is long-pressed, or detecting that the second original conversational expression is moved from the original conversational expression area to the target conversational expression area for presenting the target conversational expression.
According to another aspect of the present application, there is provided an apparatus for generating a target conversational expression, the apparatus comprising: an original conversational expression presentation module configured to present at least two original conversational expressions in an original conversational expression area, wherein the at least two original conversational expressions comprise a first original conversational expression and a second original conversational expression; a first original conversational expression selection information receiving module configured to receive first original conversational expression selection information, wherein the first original conversational expression selection information indicates that the first original conversational expression is selected for obtaining the target conversational expression; a second original conversational expression selection information receiving module configured to receive second original conversational expression selection information, wherein the second original conversational expression selection information indicates that the second original conversational expression is selected for obtaining the target conversational expression; and a target conversational expression presentation module configured to present the target conversational expression, wherein the target conversational expression comprises some of the conversational expression elements of the first original conversational expression and some of the conversational expression elements of the second original conversational expression.
According to yet another aspect of the present application, there is provided a computing device comprising a memory configured to store computer-executable instructions; a processor configured to perform a method according to any of the embodiments of the present application when the computer executable instructions are executed by the processor.
According to yet another aspect of the present application, a computer readable storage medium is provided, storing computer executable instructions that, when executed, perform a method according to any of the embodiments of the present application.
According to yet another aspect of the present application, there is provided a computer program product comprising computer executable instructions which when executed by a processor perform a method according to any of the embodiments of the present application.
According to the method for generating the target conversational expression provided herein, the target conversational expression can be obtained from various combinations of existing original conversational expressions, which greatly increases diversity and enriches the conversational expressions available to users. In addition, in the process of obtaining the target conversational expression, the user can freely combine existing original conversational expressions, which makes the process engaging, enhances the user experience, and increases user stickiness with the related software. Furthermore, in this method the target conversational expression is obtained in real time according to the user's selection of original conversational expressions; compared with drawing a new conversational expression, the conversational expression the user wants is generated efficiently and quickly, greatly shortening the generation time.
Drawings
Embodiments of the present disclosure will now be described in more detail and with reference to the accompanying drawings, in which:
FIG. 1 illustrates an exemplary application scenario in which a technical solution according to an embodiment of the present disclosure may be implemented;
FIG. 2 schematically illustrates a flow chart of a method of generating a target session expression according to an embodiment of the present application;
FIG. 3 schematically illustrates a schematic application scenario when implementing a method of generating a target conversational expression according to an embodiment of the application;
FIG. 4 schematically illustrates a flow chart of a method of generating a target session expression according to another embodiment of the present application;
FIG. 5 schematically illustrates an implementation scenario for presenting alternative conversational expressions based on an original conversational expression;
FIG. 6 schematically illustrates a flow chart of a method of generating a target session expression according to yet another embodiment of the present application;
FIG. 7 schematically illustrates an implementation scenario of determining placement parameters of a conversational expression element of a target conversational expression based on the relative placement relationship of conversational expression elements of an original conversational expression;
FIG. 8 schematically illustrates a flow chart of a method of generating a target session expression according to yet another embodiment of the present application;
FIG. 9 schematically illustrates an implementation scenario in which a decorative element is applied to the remaining original conversational expressions other than the first original conversational expression in the original conversational expression area;
FIG. 10 schematically illustrates a scenario in which an original conversational expression is not suitable for generating a target conversational expression;
FIG. 11 schematically illustrates an implementation scenario in which original conversational expressions of different origins are combined to arrive at a target conversational expression;
FIG. 12 schematically illustrates the effect of manually drawing a combined conversational expression from original conversational expressions whose five sense elements are difficult to align;
FIG. 13 schematically illustrates an exemplary block diagram of an apparatus for generating a target session expression according to an embodiment of the present disclosure;
FIG. 14 schematically illustrates an example system that includes an example computing device that represents one or more systems and/or devices that may implement the various techniques described herein.
Detailed Description
The technical solutions in the present application will be clearly and completely described below with reference to the drawings in the present application. The described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
In order to facilitate an understanding of embodiments of the present invention, several concepts will be briefly described as follows:
Conversational expression: a conversation refers to behavior in which a speaker expresses an opinion in network communication, including but not limited to expressing opinions to unspecified persons in the form of posts, comments, disseminated information, and the like, and expressing opinions to specific persons in the form of speech, dialogue, and the like. A conversational expression is an expression of the speaker's emotion during a conversation; its concrete form can be an image, including static images and dynamic images. The composition of a conversational expression is exaggerated, so that people find it amusing to use. A specific example of a conversational expression is an expression package image.
Session expression element: the elements constituting the conversational expression may in this application be in particular image elements in the conversational expression in the form of an image.
Five sense elements: one category of conversational expression element, which may include the eyes, nose, mouth, ears, eyebrows, and so on. When humans express emotion, their facial features are likely to change. By varying the drawn shape, size, position, and direction of the five sense elements, these elements can reflect the various emotions of the speaker. The five sense elements are features typically contained in conversational expressions.
Decorative element: another category of conversational expression element, which may include elements such as hands, headwear, tears, and so on. Such elements carry symbolic meaning when humans express emotion and may therefore also be included in conversational expressions.
The inventors found that in software-based communication, the conversational expressions used for expressing emotion share distinct commonalities. Most of these conversational expressions have five sense elements, and some additionally have decorative elements; even those without decorative elements leave room to accommodate them. Both the five sense elements and the decorative elements can be used to express emotion. The five sense elements may include, for example, the eyes, nose, eyebrows, and mouth. Conversational expressions for expressing emotion generally use comparatively exaggerated drawing; thus, in many conversational expressions, the eyes and mouth are enlarged and the nose is omitted. The kinds of decorative elements are very numerous; examples include glasses, sweat, tears, gestures, hats, and the like.
These commonalities make it easy to combine the conversational expression elements contained in different conversational expressions into a new conversational expression. For example, the decorative elements in one conversational expression may be applied to another conversational expression, or conversational expression elements of the same kind (e.g., the same kind of five sense element) in different conversational expressions may be substituted for each other, thereby generating a new conversational expression. In addition, since almost all conversational expressions have five sense elements, when conversational expression elements are replaced, the non-replaced five sense elements can be used to position the replacement elements, so that the resulting new conversational expression is better composed.
Based on the above, the present application provides a method for generating a target conversational expression. The method can quickly generate new conversational expressions from existing ones, increasing the diversity of conversational expressions available to the user while keeping the required operations relatively simple.
Fig. 1 schematically illustrates an exemplary network scenario 100 in which a solution according to an embodiment of the present application may be implemented. As shown in fig. 1, the network scenario 100 may include servers and a terminal device 110. The number of terminal devices and servers is not limited in this application. For example, the network scenario 100 may include one terminal device, such as terminal device 110, and a plurality of servers, such as servers 105a, 105b, 105c, and so on. Different data may be stored on, or different operations performed by, the several servers for use by the terminal device. As shown in fig. 1, each server 105a, 105b, 105c may be connected to the terminal device 110, e.g. via a network, so that each server may exchange data with the terminal device 110.
The server in the present application may be, for example, an independent physical server, a server cluster or a distributed system composed of a plurality of physical servers 105a, 105b, 105c as shown in fig. 1, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content distribution network, and basic cloud computing services such as big data and an artificial intelligence platform. The terminal device may be, for example, an intelligent terminal such as a smart phone, a tablet computer, a notebook computer, a desktop computer, or an intelligent television.
Embodiments of the present application are described below taking communication between the terminal device 110 and the server 105a as an example. The method for generating the target conversational expression according to the embodiments of the present application may be completed in the terminal device 110; or completed in the server 105a, which then transmits the obtained target conversational expression to the terminal device 110; or completed by the terminal device 110 in cooperation with the server 105a, with the target conversational expression presented in the terminal device 110. The original conversational expressions used for generating the target conversational expression may be stored in the server 105a, for example, but may also be stored in the terminal device 110. Operations such as presenting conversational expressions and receiving the user's various conversational expression selection information are generally completed in the terminal device 110. Other operations, such as determining the relative arrangement relation of conversational expression elements and identifying the category of a conversational expression element, may be completed entirely in the terminal device 110, entirely in the server 105a, or partially in each. This is not limiting.
Fig. 2 schematically illustrates a flowchart of a method 200 of generating a target session expression according to an embodiment of the present application. As shown in fig. 2, the method 200 includes the following steps.
At step S205, at least two original conversational expressions are presented in the original conversational expression area, wherein the at least two original conversational expressions include a first original conversational expression and a second original conversational expression.
In step S210, first original conversation expression selection information is received, wherein the first original conversation expression selection information indicates that the first original conversation expression is selected for obtaining the target conversation expression.
In step S215, second original conversation expression selection information is received, wherein the second original conversation expression selection information indicates that the second original conversation expression is selected for obtaining the target conversation expression.
In step S220, the target conversational expression is presented, wherein the target conversational expression comprises a portion of a conversational expression element of the first original conversational expression and a portion of a conversational expression element of the second original conversational expression.
These steps will be described in detail below.
First, a step of presenting at least two original conversational expressions in the original conversational expression area, that is, step S205, will be described in detail.
When the target conversation expression is generated by the method of the embodiment of the application, the terminal device 110 may display a human-computer interaction interface. Since the target conversation expression is generated from original conversation expressions, the human-computer interaction interface correspondingly comprises an original conversation expression area and a target conversation expression area. The original conversation expression area may present a plurality of original conversation expressions for viewing and selection by the user, so that a target conversation expression can be generated based on the selected original conversation expressions. The original conversation expressions presented in the original conversation expression area may be conversation expressions stored in the terminal device 110 in advance, or may be conversation expressions downloaded from the server 105a to the terminal device 110 and displayed after a specific instruction is received. In some embodiments, the original conversation expression area may be an input area of the human-computer interaction interface, and the target conversation expression area may be a dialog display area.
In some embodiments, the displayed original conversation expressions meet certain requirements. For example, in some embodiments, the displayed original conversation expressions are all facial conversation expressions. In a more specific embodiment, each facial conversation expression may include an eye element and a mouth element. In this case, all the displayed original conversation expressions will have similar conversation expression structures, so that the conversation expression elements within these original conversation expressions are more easily combined into a new conversation expression serving as the target conversation expression. The foregoing "more easily combined into a new conversation expression" is embodied in two aspects. In one aspect, since each original conversation expression has a similar structure, when a conversation expression element in one original conversation expression is replaced with a conversation expression element in another original conversation expression to generate a new conversation expression (for example, when a conversation expression element in a first original conversation expression is replaced with a conversation expression element from a second original conversation expression), the arrangement parameters (e.g., size, direction, position) of the incoming element in the new conversation expression may be determined with reference to the arrangement parameters of the same-category conversation expression element it replaces.
On the other hand, if each original conversation expression has two or more identical types of conversation expression elements, then when one type of conversation expression element is replaced, another type of conversation expression element may be used as a reference to determine the placement of the replacement conversation expression element. Therefore, by making the displayed original conversation expressions meet specific requirements, it can be largely ensured that the displayed original conversation expressions are suitable for generating a new conversation expression; that is, the situation is avoided in which a displayed original conversation expression is unsuitable for generating a new conversation expression, and the user is thereby prevented from selecting an original conversation expression unsuitable for generating a new conversation expression, which would reduce the efficiency of generating the target conversation expression.
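The reference-based placement described above can be sketched in code. This is a minimal illustration rather than part of the patent embodiments; the `ExpressionElement` structure, its fields, and the function name are assumptions made for the example. When an element of one expression is replaced by a same-category element from another, the incoming element inherits the arrangement parameters (position, size, direction) of the element it replaces, so that it fits the structure of the base expression.

```python
from dataclasses import dataclass

@dataclass
class ExpressionElement:
    category: str          # e.g. "eye", "mouth", "hat"
    image_id: str          # identifier of the element's artwork
    x: float = 0.0         # position within the expression canvas
    y: float = 0.0
    scale: float = 1.0     # relative size
    angle: float = 0.0     # tilt direction, in degrees

def replace_element(base_elements, replacement):
    """Replace the same-category element of `base_elements` with
    `replacement`, keeping the arrangement parameters (position,
    size, direction) of the element being replaced."""
    result = []
    for elem in base_elements:
        if elem.category == replacement.category:
            # Keep the old arrangement, swap in the new artwork.
            result.append(ExpressionElement(
                category=elem.category,
                image_id=replacement.image_id,
                x=elem.x, y=elem.y,
                scale=elem.scale, angle=elem.angle))
        else:
            result.append(elem)
    return result
```
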
In some embodiments, the displayed conversation expressions may be arranged in the original conversation expression area according to a specific rule. For example, the original conversation expression area may further include a facial-feature-only conversation expression area and a decorated conversation expression area. The facial-feature-only conversation expression area is used for presenting original conversation expressions having only facial feature elements, and the decorated conversation expression area is used for presenting original conversation expressions having decorative elements in addition to facial feature elements. Facial feature elements and decorative elements are the conversation expression elements typically contained in a conversation expression. The above embodiment classifies the original conversation expressions based on the presence or absence of decorative elements, reflects the essence of conversation expressions as a means of expressing emotions, and helps the user find original conversation expressions meeting his or her requirements.
Next, the step of receiving the first original conversation expression selection information, i.e., step S210, and the step of receiving the second original conversation expression selection information, i.e., step S215, are described in detail.
After the original conversation expression area presents at least two original conversation expressions, the user may operate through the human-computer interaction interface of the terminal device 110 to select, among the displayed original conversation expressions, original conversation expressions for generating the target conversation expression, such as a first original conversation expression and a second original conversation expression. After the user selects the first original conversation expression, an input device of the terminal device 110, e.g., a touch sensor in its touch panel, may generate first original conversation expression selection information in response to the user's selection, the information indicating that the first original conversation expression is selected for obtaining the target conversation expression. The first original conversation expression selection information is sent to, and received by, the processor of the terminal device 110 or the server 105a. Similarly, after the user selects the second original conversation expression, second original conversation expression selection information, indicating that the second original conversation expression is selected for obtaining the target conversation expression, is also received by the processor of the terminal device 110 or the server 105a.
In some embodiments, the user may perform a specific operation to indicate that an original conversation expression is selected for obtaining the target conversation expression. Such operations should be distinguished from operations indicating that the original conversation expression is selected for addition to a message input box or dialog box. For example, in some embodiments, an original conversation expression is typically entered into a message input box or dialog box by clicking or touching it. In this case, the operation indicating that the original conversation expression is selected for obtaining the target conversation expression may be set to be different from clicking and touching. For example, in some embodiments, this operation may be set to long-selecting (e.g., long-pressing) the original conversation expression. In other embodiments, it may be set to moving (e.g., dragging) the original conversation expression to a particular region. For example, as mentioned above, the human-computer interaction interface may include an original conversation expression area and a target conversation expression area. Accordingly, the above-described operation may be to move the original conversation expression from the original conversation expression area to the target conversation expression area.
Thus, the first original conversation expression selection information may be generated in response to detecting that the first original conversation expression is long-selected, or in response to detecting that the first original conversation expression is moved from the original conversation expression area to the target conversation expression area for presenting the target conversation expression; and/or the second original conversation expression selection information may be generated in response to detecting that the second original conversation expression is long-selected, or in response to detecting that the second original conversation expression is moved from the original conversation expression area to the target conversation expression area.
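As one possible concrete form of this distinction, the intent behind a touch gesture could be classified as below. This is only an illustrative sketch under assumed names and thresholds; the 0.5-second long-press threshold and the region labels are not specified in the application.

```python
LONG_PRESS_SECONDS = 0.5  # assumed long-press threshold

def classify_gesture(press_duration, start_region, end_region):
    """Classify a touch gesture performed on an original conversation
    expression.

    A drag from the original expression area into the target expression
    area, or a long press, signals selection for obtaining the target
    expression; a plain tap keeps the usual behavior of entering the
    expression into the message input box.
    """
    if start_region == "original_area" and end_region == "target_area":
        return "select_for_target"      # dragged into the target area
    if press_duration >= LONG_PRESS_SECONDS:
        return "select_for_target"      # long press
    return "insert_into_input_box"      # plain tap or click
```
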
In some embodiments, after the first original conversation expression selection information is received, the display manner of the first original conversation expression may be changed. For example, the first original conversation expression may be moved from the original conversation expression area to the target conversation expression area, enlarged, made to shake, or the like. In summary, by changing its display manner, the first original conversation expression prompts the user that it has been selected for obtaining the target conversation expression. Similarly, after the second original conversation expression selection information is received, the display manner of the second original conversation expression may also be changed.
The above method of generating a target conversation expression according to the present application is described taking the generation of a target conversation expression based on two original conversation expressions as an example. In other embodiments, the target conversation expression may also be generated based on three or more original conversation expressions; the principles are similar. In some embodiments, after an original conversation expression is selected, the number of original conversation expressions that have been added to the selection may be displayed in the human-computer interaction interface, for example, in the target conversation expression area.
Next, a step of presenting the target session expression, that is, step S220 is described in detail.
In the method for generating the target conversation expression according to the embodiment of the application, the target conversation expression is obtained based on the first original conversation expression and the second original conversation expression selected by the user. Thus, the presented target conversation expression comprises a portion of the conversation expression elements of the first original conversation expression and a portion of the conversation expression elements of the second original conversation expression. The specific manner of generating the target conversation expression based on the first and second original conversation expressions will be described later. After the target conversation expression is generated, it is presented. In some embodiments, the target conversation expression is displayed in a target conversation expression area that is different from the original conversation expression area. After being displayed, the target conversation expression is available for further use by the user. For example, if the target conversation expression meets the user's requirements, the user may transfer it into a message input box or dialog box for the next dialog. If the target conversation expression does not meet the user's requirements, the user may discard it and then perform other operations, such as attempting to generate another target conversation expression or directly using an original conversation expression.
According to the method for generating the target conversation expression, target conversation expressions can be obtained based on various combinations of existing original conversation expressions, which greatly increases information diversity and enriches the conversation expressions available to users. In addition, in the process of obtaining the target conversation expression, the user can freely combine the existing original conversation expressions, so the method is highly engaging, enhances the user experience, and, in a certain sense, also increases user stickiness with respect to the related software. Moreover, in the method, the target conversation expression is obtained in real time according to the user's selection of original conversation expressions, which greatly reduces the conversation expression generation time compared with drawing a new conversation expression.
Fig. 3 schematically illustrates an application scenario of a method of generating a target conversation expression according to an embodiment of the present application. As shown in the left portion of fig. 3, the human-computer interaction interface 300 includes an original conversation expression area 305 and a target conversation expression area 310. The original conversation expression area 305 may be a user input area, and the target conversation expression area 310 may be a dialog display area. Through the aforementioned step S205, at least two original conversation expressions are presented in the original conversation expression area 305. The user then selects, according to his or her preferences, a first original conversation expression 315 and a second original conversation expression 320 from the at least two original conversation expressions for obtaining the desired target conversation expression. In response to the user's selection of the original conversation expressions, first original conversation expression selection information and second original conversation expression selection information are received.
The user may move the first and second original conversation expressions 315 and 320 from the original conversation expression region 305 to the target conversation expression region 310, for example, by way of a drag, to indicate that the first and second original conversation expressions 315 and 320 are selected for obtaining the target conversation expression. For example, the arrow in the left part of fig. 3 shows the trajectory of the first original conversation expression 315 being dragged from the original conversation expression region 305 to the target conversation expression region 310. By this operation, the first original conversation expression selection information can be generated. This information is then received by terminal device 110.
In some embodiments, the manner in which the original conversational expression is displayed may change after being selected. As shown in fig. 3, for example, after the first original conversational expression 315 is selected, it is enlarged in the target conversational expression area 310 to prompt the user that the original conversational expression has been selected.
The arrow on the right part of fig. 3 shows the trajectory of the second original conversation expression 320 being dragged from the original conversation expression region 305 to the target conversation expression region 310. By this operation, second original conversation expression selection information can be generated, which is then also received by the terminal device 110.
Upon receiving the first and second original conversation expression selection information, a target conversation expression 325 may be presented, which is made up of a portion of the conversation expression elements of the first original conversation expression 315 and a portion of the conversation expression elements of the second original conversation expression 320. For example, the decorative element (helmet) of the second original conversation expression 320 is combined with the facial feature elements (eyes, mouth) of the first original conversation expression 315 to generate the target conversation expression 325, which is presented in the target conversation expression area 310. The target conversation expression 325 thus includes the facial feature elements, such as the eyes and mouth, of the first original conversation expression 315, as well as the decorative element of the second original conversation expression 320.
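Representing each expression as a simple mapping from element category to artwork, a combination like the one in fig. 3 (keeping the facial features of the first expression and taking the helmet of the second) could be sketched as follows. The dictionary representation and the function name are assumptions made for this illustration only.

```python
def combine_expressions(first, second, take_from_second=("hat",)):
    """Build a target expression that keeps `first`'s elements and
    replaces or adds the listed categories from `second`.

    Expressions are dicts mapping an element category to an artwork id.
    """
    target = dict(first)  # start from the base (first) expression
    for category in take_from_second:
        if category in second:
            target[category] = second[category]
    return target
```

For the fig. 3 scenario, calling the function with the first expression's eyes and mouth and a second expression containing a helmet under the assumed "hat" category yields a target expression with the first expression's facial features plus the helmet.
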
In some embodiments, the user may perform other operations to indicate that the displayed target conversation expression 325 meets or fails to meet the user's needs. For example, the user may click on the target conversation expression 325 to indicate that it meets the user's needs. In response to this operation, the target conversation expression 325 may be presented in the message input box 330 or dialog box 335. As another example, the user may swipe a finger across the target conversation expression 325 to indicate that it does not meet the user's needs. In response to this operation, the target conversation expression 325 may be removed from the human-computer interaction interface 300.
In some embodiments, the order of selection of the first original conversational expression and the selection of the second original conversational expression has a specific definition. For example, it may be defined that the original conversational expression selected by the user first is a first original conversational expression and the original conversational expression selected later is a second original conversational expression. That is, in these embodiments, the first received conversational expression selection information is the first original conversational expression selection information, and the second received conversational expression selection information is the second original conversational expression selection information. This sequence definition is advantageous in case the first and second original conversational expressions have different roles. The "role of the original conversational expression" should be understood as follows. In generating a target conversational expression based on a first original conversational expression and a second original conversational expression, the resulting target conversational expression includes a portion of a conversational expression element of the first original conversational expression and a portion of a conversational expression element of the second original conversational expression. In actual operation, the target conversational expression may be obtained by adding the conversational expression element of the first original conversational expression to the second original conversational expression, or by replacing the conversational expression element of the second original conversational expression with the conversational expression element of the first original conversational expression. 
In this case, the second original conversational expression may be regarded as a basic conversational expression, and the first original conversational expression may be regarded as an additional conversational expression. In this sense, the first original conversational expression and the second original conversational expression are different in function. Similarly, the first original conversational expression may be set as a base conversational expression and the second original conversational expression may be set as an additional conversational expression.
By defining the order of selection of the first original conversational expression and selection of the second original conversational expression, setting the first selected or the later selected original conversational expression as the base conversational expression or the additional conversational expression may be achieved. In this way, the construction or composition of the contents of the target conversation expression can be defined by the order in which the conversation expression selection information is received, so as to more accurately obtain the target conversation expression that the user wants to obtain. For example, by defining the first selected conversational expression as the first original conversational expression and defining the first original conversational expression as the base conversational expression (i.e., defining the later selected conversational expression as the second original conversational expression and defining the second original conversational expression as the additional conversational expression), it may be determined that the target conversational expression that the user wants to obtain is generated by replacing the conversational expression element of the first original conversational expression with the conversational expression element of the second original conversational expression based on the first original conversational expression, rather than by replacing the conversational expression element of the second original conversational expression with the conversational expression element of the first original conversational expression based on the second original conversational expression.
In a specific exemplary application, both original conversation expressions selected by the user have facial feature elements and decorative elements; the facial feature elements are preset as the main elements of an original conversation expression (i.e., elements that form the base of the original conversation expression and are not generally replaced), and the decorative elements are preset as additional elements (i.e., elements that supplement the original conversation expression and may be replaced as needed). In this case, by defining the first selected conversation expression as the first original conversation expression, and defining the first original conversation expression as the base conversation expression, it may be determined that the target conversation expression the user wants to obtain is constituted by the facial feature elements of the first original conversation expression and the decorative elements of the second original conversation expression, rather than the facial feature elements of the second original conversation expression and the decorative elements of the first original conversation expression. This ensures that the final target conversation expression meets the user's requirements.
In other embodiments, the selection of the first original conversation expression and the selection of the second original conversation expression are not ordered. That is, it is not required that the original conversation expression selected first by the user is the first original conversation expression and the one selected later is the second original conversation expression. Thus, the first original conversation expression selection information may be received before or after the second original conversation expression selection information. In these embodiments, no difference in role is preset between the first original conversation expression and the second original conversation expression.
Since no difference in role is preset between the first original conversation expression and the second original conversation expression, either may serve as the base conversation expression or the additional conversation expression. In this case, based on the first original conversation expression and the second original conversation expression, a plurality of conversation expressions may be obtained as candidates for the target conversation expression, referred to simply as alternative conversation expressions. Fig. 4 schematically illustrates a flowchart of a method of generating a target conversation expression according to another embodiment of the present application. As shown in fig. 4, when no difference in role is preset between the first and second original conversation expressions, the method of generating a target conversation expression according to the embodiment of the present application includes the following steps in addition to the steps in fig. 2.
In step S405, after the first original conversation expression selection information and the second original conversation expression selection information are received, alternative conversation expressions are presented, wherein each alternative conversation expression includes a different combination of conversation expression elements of the first original conversation expression and conversation expression elements of the second original conversation expression.
In step S410, alternative conversational expression selection information is received, wherein the alternative conversational expression selection information indicates that one of the alternative conversational expressions is selected as the target conversational expression.
These two steps are described in detail below.
First, after receiving the first original conversation expression selection information and the second original conversation expression selection information, the processor of the terminal device or the server determines various combinations of the conversation expression elements included in the first original conversation expression and the second original conversation expression to obtain various alternative conversation expressions. In some embodiments, each type of conversation expression element is required to appear only once in each alternative conversation expression. For example, an original conversation expression may include three types of conversation expression elements, such as an eye, a mouth, and a hat, and each resulting alternative conversation expression should then include only one eye element, one mouth element, and one hat element, as this is a common requirement for conversation expressions used in network communication. On the basis of this requirement, if the first original conversation expression includes three conversation expression elements, namely a first eye, a first mouth, and a first hat, and the second original conversation expression includes a second eye, a second mouth, and a second hat, the following six alternative conversation expressions are obtained:
Alternative conversation expression one: the first eye, the first mouth, and the second hat;
Alternative conversation expression two: the first eye, the second mouth, and the first hat;
Alternative conversation expression three: the first eye, the second mouth, and the second hat;
Alternative conversation expression four: the second eye, the first mouth, and the first hat;
Alternative conversation expression five: the second eye, the first mouth, and the second hat;
Alternative conversation expression six: the second eye, the second mouth, and the first hat.
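This enumeration (each element category taken from exactly one of the two expressions, each category appearing exactly once, and the two combinations identical to the originals excluded) can be sketched as below. The dict-based representation and function name are assumptions of the example; with three shared categories the sketch yields exactly six alternatives.

```python
from itertools import product

def enumerate_alternatives(first, second):
    """Enumerate combinations taking each element category from one of
    the two expressions, excluding the two combinations that reproduce
    the originals; with n categories this yields 2**n - 2 alternatives
    (six for the eye, mouth, and hat categories)."""
    categories = sorted(first)  # both expressions share the same categories
    alternatives = []
    for sources in product((first, second), repeat=len(categories)):
        combo = {cat: src[cat] for cat, src in zip(categories, sources)}
        if combo != first and combo != second:
            alternatives.append(combo)
    return alternatives
```
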
After the processor or the server determines the alternative conversation expressions, they may be displayed in the target conversation expression area. In some embodiments, all six alternative conversation expressions may be presented in the target conversation expression area for selection by the user. In other embodiments, only a portion of the alternative conversation expressions may be presented. For example, a rule may be preset requiring that facial feature elements maintain the matching relationship they have in the original conversation expressions. For example, if the first eye and the first mouth match in the first original conversation expression, and the second eye and the second mouth match in the second original conversation expression, it may be required that in a presented alternative conversation expression the first eye and the first mouth be either both present or both absent, and likewise for the second eye and the second mouth. In this case, no presented alternative conversation expression will contain the first eye together with the second mouth, or the second eye together with the first mouth. That is, the above alternative conversation expressions two, three, four, and five will not be presented. This embodiment takes the design of the original conversation expressions into account, so that the obtained alternatives are more harmonious. In practice, the patterns of the eyes and mouth may be designed to coordinate with each other when an original conversation expression is drawn, their sizes, positions, and tilt directions having a relatively strong correlation.
If the first eye and the second mouth, or the second eye and the first mouth, appear together in one conversation expression, the two may not coordinate well. An alternative conversation expression obtained in this way is also relatively unlikely to be selected by the user as the target conversation expression, and thus such alternative conversation expressions may not be presented.
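The matching rule above (the eye and mouth of a presented alternative must come from the same original expression) can be expressed as a filter over the enumerated alternatives. This is again a hedged sketch using an assumed dict representation and assumed function names.

```python
def keeps_facial_match(combo, first, second):
    """True if the combination's eye and mouth both come from the same
    original expression, keeping the facial features coordinated as
    they were designed."""
    if combo["eye"] == first["eye"]:
        return combo["mouth"] == first["mouth"]
    return combo["mouth"] == second["mouth"]

def filter_alternatives(alternatives, first, second):
    # Drop alternatives that mix the eyes of one expression with the
    # mouth of the other (alternatives two to five in the text above).
    return [a for a in alternatives if keeps_facial_match(a, first, second)]
```

Applied to the six alternatives listed earlier, this filter keeps only alternatives one and six.
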
Then, after determining the alternative conversation expressions to be presented, the alternative conversation expressions can be presented in the target conversation expression area for the user to select as the target conversation expression.
Fig. 5 schematically illustrates an implementation scenario for presenting alternative conversational expressions based on an original conversational expression.
As shown in fig. 5, the first original conversational expression 315 includes a first conversational expression element 505 and a second conversational expression element 510, and the second original conversational expression 320 includes a third conversational expression element 515 and a fourth conversational expression element 520. The first conversational expression element 505 and the third conversational expression element 515 both depict a mouth and may be regarded as belonging to the same category. The second conversational expression element 510 and the fourth conversational expression element 520 both depict eyes and may likewise be regarded as belonging to the same category. Based on the first original conversational expression 315 and the second original conversational expression 320, a first alternative conversational expression 525 and a second alternative conversational expression 530 may be obtained. Under the requirement that each category of conversational expression element appear only once, the first alternative conversational expression 525 includes the first conversational expression element 505 and the fourth conversational expression element 520, and the second alternative conversational expression 530 includes the second conversational expression element 510 and the third conversational expression element 515. The user may then select one of the first alternative conversational expression 525 and the second alternative conversational expression 530 as the target conversational expression. After the user makes the selection, the processor or server receives alternative conversational expression selection information indicating that one of the alternative conversational expressions is selected as the target conversational expression.
As mentioned above, when an original conversational expression is drawn, the patterns of the eyes and the mouth may be designed to match each other, their sizes, positions, and tilt directions having a relatively strong correlation. This is just one example; in fact, all the conversational expression elements in an original conversational expression may be coordinated with one another. Therefore, the specific style of a given conversational expression element in the target conversational expression can be determined by referring both to the relative arrangement relationship of two conversational expression elements within the same original conversational expression and to the relative arrangement relationship, with respect to a common reference frame, of same-category conversational expression elements across different original conversational expressions.
Fig. 6 schematically illustrates a flow chart of a method of generating a target session expression according to another embodiment of the present application. The method of this example has the following preconditions. For example, the conversational expression element of the first original conversational expression includes a first conversational expression element, the conversational expression element of the second original conversational expression includes a second conversational expression element and a third conversational expression element, the first conversational expression element and the second conversational expression element belong to a same conversational expression element category, and the conversational expression element of the target conversational expression includes the first conversational expression element and the third conversational expression element. On the basis of these preconditions, as shown in fig. 6, the method includes the following steps in addition to steps S205 to S220 shown in fig. 2.
In step S605, after receiving the second original conversational expression selection information, a relative arrangement relationship of the second conversational expression element and the third conversational expression element within the second original conversational expression is determined.
In step S610, after receiving the first original conversational expression selection information, arrangement parameters of the first conversational expression element in the first original conversational expression are determined.
In step S615, based on the relative arrangement relation of the second session expression element and the third session expression element in the second original session expression and the arrangement parameter of the first session expression element, the arrangement parameter of the third session expression element in the target session expression is determined so as to obtain the target session expression.
These steps are described in detail below.
The conversational expression elements of the target conversational expression include the first conversational expression element of the first original conversational expression and the third conversational expression element of the second original conversational expression, and the first conversational expression element belongs to the same category as the second conversational expression element of the second original conversational expression. Therefore, the relative arrangement relationship of the first and third conversational expression elements within the target conversational expression can be determined based on the relative arrangement relationship of the second and third conversational expression elements within the second original conversational expression, and the arrangement parameters of the third conversational expression element within the target conversational expression can then be determined based on the arrangement parameters of the first conversational expression element within the target conversational expression.
Specifically, the relative arrangement relationship of the second and third conversational expression elements within the second original conversational expression includes: the size ratio of the third conversational expression element to the second, the degree of tilt of the third conversational expression element relative to the second, and the location of the geometric center of the third conversational expression element relative to that of the second. The size ratio of conversational expression elements may be determined from one-dimensional data (e.g., the length or width of the element) or two-dimensional data (e.g., the area of the element). The relative degree of tilt of two conversational expression elements refers to the angle formed between their preset reference axes. In some embodiments, the preset reference axis may be the line of symmetry of the conversational expression element: the element has equal areas on either side of this line, and the graphics on the two sides are substantially symmetric. In other embodiments, the preset reference axis may be the longest line segment connecting any two points of the conversational expression element; for example, for an element approximating an ellipse, the preset reference axis may be its major axis. In still other embodiments, different categories of conversational expression elements may each have their own specific preset reference axis. For example, for a mouth element, the preset reference axis may be the line connecting the two corners of the mouth.
As another example, for an eye conversational expression element, the preset reference axis may be the line connecting the geometric centers of the two eyes. The geometric center is a common concept in graphics and is not specifically defined here.
The arrangement parameters of the first conversation expression element in the first original conversation expression include: in the first original conversation expression, a size of the first conversation expression element, a direction of the first conversation expression element, and a position of a geometric center of the first conversation expression element. The size of the first session expression element may be represented by the aforementioned one-dimensional data or two-dimensional data. The direction of the first session emoticon may be the direction of a preset reference axis of the first session emoticon.
With the above parameters determined, determining the relative arrangement relationship of the first and third conversational expression elements within the target conversational expression includes: determining the size of the third conversational expression element in the target conversational expression based on the size ratio of the third conversational expression element to the second and the size of the first conversational expression element; determining the direction of the third conversational expression element in the target conversational expression based on the degree of tilt of the third conversational expression element relative to the second and the direction of the first conversational expression element; and determining the position of the geometric center of the third conversational expression element in the target conversational expression based on the position of the geometric center of the third conversational expression element relative to that of the second and the position of the geometric center of the first conversational expression element in the first original conversational expression.
These three sub-steps are further described below. First, based on the size ratio of the third session expression element and the second session expression element, the size ratio of the first session expression element and the third session expression element in the target session expression may be determined. When the third session expression element is added to the first original session expression to constitute the target session expression together with the first session expression element, the size of the third session expression element in the target session expression may be determined according to the size ratio of the first session expression element and the third session expression element and the size of the first session expression element.
Similarly, based on the degree of tilt of the third conversational expression element relative to the second, the degree of tilt of the third conversational expression element relative to the first in the target conversational expression may be determined. Then, based on the direction of the first conversational expression element in the first original conversational expression, the direction of the third conversational expression element in the target conversational expression may be determined.
Based on the position of the geometric center of the third conversation expression element relative to the geometric center of the second conversation expression element, a position of the geometric center of the third conversation expression element relative to the geometric center of the first conversation expression element may be determined when the third conversation expression element is added to the first original conversation expression to constitute the target conversation expression together with the first conversation expression element. Then, based on the location of the geometric center of the first conversational expression element, a location of the geometric center of the third conversational expression element in the target conversational expression may be determined.
In some embodiments, the original conversational expression and the target conversational expression may be dynamic conversational expressions. Dynamic conversational expressions refer to conversational expressions comprising a plurality of expression frames. In this case, in determining the arrangement parameter of the third session expression element within the target session expression, a determination may be made for the relative arrangement relationship of the first session expression element and the third session expression element in each expression frame of the target session expression so as to determine the position parameter of each expression frame of the third session expression element within the target session expression. In this way, the added third session expression element may also have a dynamic effect.
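For dynamic expressions, the per-frame placement might be sketched as follows (a minimal Python illustration with hypothetical field names and values, not the application's actual data model): the relative arrangement (size ratio, tilt, center offset) is computed once from the source expression and then applied to the reference element's parameters in every frame.

```python
def place_in_frame(rel, ref):
    """Apply relative arrangement rel = (ratio, tilt, offset) to the
    reference element's per-frame parameters (size, direction, center)."""
    ratio, tilt, (dx, dy) = rel
    return {"size": ref["size"] * ratio,
            "dir": ref["dir"] + tilt,
            "center": (ref["center"][0] + dx, ref["center"][1] + dy)}

# Hypothetical glasses-vs-eyes relation and two frames of an animated eye.
rel = (1.25, 3.0, (0.0, 0.05))
frames = [{"size": 0.36, "dir": 1.0, "center": (0.48, 0.50)},
          {"size": 0.36, "dir": 2.0, "center": (0.50, 0.50)}]
placed = [place_in_frame(rel, f) for f in frames]  # glasses track the eyes
```

Because the same relation is reapplied per frame, the migrated element follows the reference element's motion and so inherits the dynamic effect.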
Fig. 7 schematically illustrates an implementation scenario of determining a relative arrangement relation of conversational expression elements of a target conversational expression based on the relative arrangement relation of conversational expression elements of an original conversational expression. In this embodiment, a third conversational expression element 712 of the second original conversational expression 710 is added to the first original conversational expression 705 to arrive at a target conversational expression 715. Fig. 7 specifically depicts an implementation scenario in which placement parameters of the third conversational expression element 712 in the target conversational expression 715 are determined.
It should first be noted that the original conversational expressions and the target conversational expression of the present application may be multi-layered; for example, each conversational expression element may occupy its own layer. As shown in fig. 7, the second original conversational expression 710 includes a plurality of layers relating, respectively, to skin color, the five sense organs, and decorative elements. Superimposing the layers in order (skin color, then five sense organs, then decorative elements) yields the final second original conversational expression 710. In this case, although the second original conversational expression 710 as presented does not show the eye elements among the five sense organs, the data for the eye elements are present in the second original conversational expression 710 and can be used when determining the relative arrangement relationship of conversational expression elements.
In some embodiments, since the five sense organs in conversational expressions are generally drawn in an exaggerated style, elements such as the nose, ears, or eyelashes may be omitted from these conversational expressions. Conventional facial-feature localization methods, which rely on clearly drawn features, may therefore not be applicable. Instead, the conversational expression producer may be asked to manually label the size, position, and direction of each five-sense-organ element when creating the original conversational expression (for example, a facial-feature labeling function may be provided in the conversational expression upload tool), and the corresponding values may be normalized to the [0, 1] interval.
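The normalization step might look like the following minimal sketch (the pixel values and function name are hypothetical): positions and sizes labeled in pixels are divided by the image dimensions so that expressions of different resolutions share the [0, 1] coordinate space.

```python
def normalize_label(cx_px, cy_px, size_px, img_w, img_h):
    # Map a manually labeled center and size from pixels to [0, 1].
    return (cx_px / img_w, cy_px / img_h, size_px / img_w)

# A 512x512 expression whose eye pair is centered at (256, 230), 128 px wide:
eye_label = normalize_label(256, 230, 128, 512, 512)
```

After this step, the ratio, tilt, and offset computations described below can be carried out directly on normalized values regardless of the source image size.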
After the size, position, and direction of each conversational expression element are determined, the relative arrangement relationship of the conversational expression elements can be determined. Taking the scenario schematically illustrated in fig. 7 as an example, the following describes how to determine the relative arrangement relationship of the first and third conversational expression elements within the target conversational expression. Fig. 7 shows a first original conversational expression 705, a second original conversational expression 710, and a target conversational expression 715 generated based on them. The first original conversational expression 705 includes a first conversational expression element 706, which may be the eye element of the first original conversational expression 705. The second original conversational expression 710 includes a second conversational expression element 711 and a third conversational expression element 712. The third conversational expression element 712 may be the decorative element (i.e., a pair of sunglasses) of the second original conversational expression 710, and the second conversational expression element 711 may be the eye element of the second original conversational expression 710 that is occluded by the decorative element. The first conversational expression element 706 and the second conversational expression element 711 are both eye elements and may be regarded as belonging to the same category. The generated target conversational expression 715 includes the first conversational expression element 706 and the third conversational expression element 712.
The process of determining the relative arrangement relationship of the conversation expression elements within the second original conversation expression 710 is described in detail below.
To determine the relative arrangement relationship of the second and third conversational expression elements within the second original conversational expression, the distance d1 between the geometric centers of the left and right lenses in the third conversational expression element 712 may first be calculated, followed by the distance d2 between the geometric centers of the left and right eyes in the second conversational expression element 711. The size ratio of the third conversational expression element to the second is then r1 = d1/d2. In the target conversational expression 715, the size ratio of the third conversational expression element 712 to the first conversational expression element 706 should likewise be r1.
Next, an included angle α1=a1-a2 between a direction a1 of a line connecting the geometric center of the left lens and the geometric center of the right lens in the third conversational expression element 712 (a numerical value of the direction a1 may represent an angle between the direction and the reference direction, for example) and a direction a2 of a line connecting the geometric center of the left eye and the geometric center of the right eye in the second conversational expression element 711 may be calculated, and α1 is an inclination degree of the third conversational expression element 712 with respect to the second conversational expression element 711. Then, in the target conversation expression 715, the inclination degree of the third conversation expression element 712 with respect to the first conversation expression element 706 should also be α1.
Likewise, the coordinates m1(x1, y1) of the geometric center of the glasses as a whole in the third conversational expression element 712, and the coordinates m2(x2, y2) of the geometric center of the two eyes as a whole in the second conversational expression element 711, may be calculated. By computing m1 - m2, the position of the geometric center of the third conversational expression element relative to that of the second is obtained; this relative position can be represented by the vector s1(x1-x2, y1-y2). Then, in the target conversational expression 715, the position of the geometric center of the third conversational expression element relative to that of the first should also be the vector s1.
Based on r1, α1, and s1, a relative arrangement relationship (r 1, α1, s 1) of the second and third conversational expression elements within the second original conversational expression may be determined.
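The computation of (r1, α1, s1) can be sketched in Python as follows, with hypothetical numeric values standing in for measurements taken from the second original conversational expression:

```python
d1 = 0.40          # distance between the left/right lens centers (glasses)
d2 = 0.32          # distance between the left/right eye centers
a1, a2 = 5.0, 2.0  # axis directions, in degrees from a common reference
m1 = (0.50, 0.55)  # geometric center of the glasses as a whole
m2 = (0.50, 0.52)  # geometric center of the two eyes as a whole

r1 = d1 / d2                          # size ratio
alpha1 = a1 - a2                      # relative tilt
s1 = (m1[0] - m2[0], m1[1] - m2[1])   # center-offset vector
```

Here the glasses come out 1.25 times as wide as the eye pair, tilted 3 degrees relative to it, and offset slightly upward from it.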
Then, in order to determine the relative arrangement relation of the first and second session expression elements with respect to the same reference frame, it is also necessary to determine the size of the first session expression element 706, the direction of the first session expression element 706, and the position of the geometric center of the first session expression element 706 in the first original session expression 705.
The size of the first conversational expression element 706 may be represented by the distance d3 between the geometric centers of its left and right eyes. Thus, in the target conversational expression, the distance d between the geometric centers of the left and right lenses of the third conversational expression element 712 should satisfy d = d3 × r1 = d3 × d1/d2. The direction of the first conversational expression element 706 may be represented by the direction a3 of the line connecting the geometric centers of its left and right eyes. Therefore, in the target conversational expression, the direction a of the line connecting the geometric centers of the left and right lenses of the third conversational expression element 712 should satisfy a = a3 + α1 = a3 + a1 - a2.
The location of the geometric center of the first conversational expression element 706 may be represented by coordinates m3(x3, y3). Therefore, in the target conversational expression, the coordinates m(x, y) of the geometric center of the third conversational expression element 712 should be m3(x3, y3) + s1, i.e., abscissa x = x3 + x1 - x2 and ordinate y = y3 + y1 - y2.
Through the calculation and the processing, the size, the direction and the geometric center position of the third session expression element in the target session expression can be determined, namely, the arrangement parameters of the third session expression element in the target session expression are determined.
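Continuing the sketch with the same hypothetical values, transferring (r1, α1, s1) onto the first element's parameters (d3, a3, m3) yields the third element's arrangement parameters in the target conversational expression:

```python
# Relative arrangement measured in the second original expression.
d1, d2 = 0.40, 0.32
a1, a2 = 5.0, 2.0
m1, m2 = (0.50, 0.55), (0.50, 0.52)

# Arrangement parameters of the first element in the first original expression.
d3, a3, m3 = 0.36, 1.0, (0.48, 0.50)

d = d3 * (d1 / d2)                    # lens spacing in the target: d3 * r1
a = a3 + (a1 - a2)                    # axis direction: a3 + alpha1
m = (m3[0] + m1[0] - m2[0],           # geometric center: m3 + s1
     m3[1] + m1[1] - m2[1])
```

These three values (d, a, m) are exactly the size, direction, and geometric-center position of the third conversational expression element described above.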
In some embodiments, the conversational expressions discussed herein are emotional-expression conversational expressions used to express emotions. As mentioned before, a commonality of emotional-expression conversational expressions is that they generally have five-sense-organ elements and, whether or not a decorative element is present, generally have space to accommodate one. In this case, after the first original conversational expression selection information is received, it may be determined whether a decorative element exists in the first original conversational expression; in response to a decorative element being present, the decorative element is added to the second original conversational expression to obtain the target conversational expression. This scheme relies on the commonality of emotional-expression conversational expressions and assumes that the decorative element is the conversational expression element to be migrated (i.e., moved from the first original conversational expression to the second). After a decorative element is recognized in the first original conversational expression, only that element need be operated on, leaving the other elements unchanged. This reduces the amount of computation and the time required to generate the target conversational expression.
Fig. 8 schematically illustrates a flow chart of a method of generating a target session expression according to a further embodiment of the present application. As shown in fig. 8, in this embodiment, the method includes the following steps in addition to the steps in fig. 2.
In step S805, after the first original conversational expression selection information is received, it is determined whether a decoration element exists in the first original conversational expression, where the decoration element and the five-sense-organ elements belong to different conversational expression element categories.
In step S810, in response to the presence of the decoration element in the first original conversational expression, the decoration element is applied to the rest of the original conversational expressions except the first original conversational expression in the original conversational expression area, so as to obtain a decoration conversational expression.
These steps are described in detail below.
First, after the user selects the first original conversational expression, whether a decoration element exists in that original conversational expression can be determined. This can be implemented with computer-vision-based conversational expression processing. For example, element content detection may be performed on the original conversational expression to determine the location and size of each conversational expression element within it. Feature vector extraction may then be performed on each element to obtain a feature vector for each conversational expression element. Each feature vector may then be fed into a conversational expression element recognition model to determine whether the corresponding element is a decoration element.
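As a deliberately simplified stand-in for this pipeline (the real detector and recognition model are not specified in the text, so every name and value here is hypothetical), a detected region can be reduced to a toy feature and classified with a threshold playing the role of the recognition model:

```python
def extract_feature(region):
    # Toy feature: density of dark pixels in a detected 0/1 pixel grid.
    flat = [p for row in region for p in row]
    return sum(flat) / len(flat)

def is_decoration(feature, threshold=0.5):
    # Stand-in for the recognition model: dense, dark regions
    # (e.g., sunglasses lenses) score high.
    return feature > threshold

glasses_region = [[1, 1, 1], [1, 1, 1]]   # hypothetical detected regions
eye_region = [[0, 0, 0], [0, 1, 0]]
```

In practice the feature extractor and classifier would be learned models; the sketch only shows the shape of the detect-extract-classify flow.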
Then, after a decoration element is detected in the first original conversational expression, the decoration element may be applied to the remaining original conversational expressions in the original conversational expression area other than the first. When the decoration element is applied, its size, position, and direction in each remaining original conversational expression can be adjusted with reference to the embodiments described above. Through this operation, the user can quickly preview the result of combining the decoration element with each of the remaining original conversational expressions. If a combined decoration conversational expression meets the user's needs, the user can select it as the target conversational expression. In this case, the step of receiving the second original conversational expression selection information (S215) described above may specifically include: receiving decoration conversational expression selection information indicating that one of the decoration conversational expressions is selected as the target conversational expression. If none of the combined results meets the user's needs, the user may abort the operation.
Fig. 9 schematically illustrates an implementation scenario in which a decoration element is applied to the remaining original conversational expressions other than the first original conversational expression in the original conversational expression area. As shown in fig. 9, the first original conversational expression 315 selected by the user carries a decorative element, glasses. After recognizing that the first original conversation expression 315 carries a decoration element, the decoration element may be applied to the remaining original conversation expressions in the original conversation expression region 305 except for the first original conversation expression 315. Through this operation, the user can quickly preview the effect of each of the remaining original conversation expressions combined with the glasses. The user may select among the resulting conversational expressions.
The inventors have also found that not every pair of original conversational expressions is suitable for combination into a target conversational expression. For example, if an applied decorative element partially occludes an existing decorative element, the pair may be unsuitable. Referring to the situation schematically depicted in fig. 10, which shows a scenario where the original conversational expressions are unsuitable for generating a target conversational expression, the first conversational expression contains a helmet and the second contains glasses. If the glasses are applied to the first conversational expression, part of the helmet is occluded, and the resulting conversational expression defies common sense and fails to meet the user's needs. If the helmet is applied to the second conversational expression, the helmet completely covers the eyes; while this may be undesirable aesthetically, the combination does not defy common sense. Thus, where the applied decorative element completely occludes the existing one, the resulting conversational expression may still be usable.
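One possible way to distinguish partial from complete occlusion (a sketch using hypothetical axis-aligned bounding boxes, not the application's actual geometry) is to intersect the two decorative elements' boxes and check whether the intersection equals either box:

```python
def overlap_kind(a, b):
    """Classify overlap of boxes (x0, y0, x1, y1): none/partial/complete."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    if ix0 >= ix1 or iy0 >= iy1:
        return "none"
    inter = (ix0, iy0, ix1, iy1)
    if inter == a or inter == b:      # one box lies fully inside the other
        return "complete"
    return "partial"

helmet = (0.1, 0.0, 0.9, 0.5)   # hypothetical normalized coordinates
glasses = (0.3, 0.4, 0.7, 0.6)  # dips into the helmet region
```

Under this scheme, the helmet/glasses pair above yields a partial overlap (combination rejected), while a fully contained box yields a complete overlap (combination may still be usable).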
To address partial occlusion, the method of generating the target conversational expression further includes: in the case where the remaining original conversational expressions include a third original conversational expression that is an emotional-expression conversational expression containing both five-sense-organ elements and a decoration element, changing the presentation manner of the third original conversational expression in response to the decoration element of the first original conversational expression partially overlapping the decoration element of the third. The presentation manner includes at least one of the brightness, gray scale, and position of the conversational expression. The embodiments above are described mainly on the understanding that the five-sense-organ elements are the subject elements of an original conversational expression and the decoration elements are additional elements, but the present application is not limited thereto. In some embodiments, conversational expression elements may simply be classified by category, without further identifying whether an element is a subject element or an additional element.
In this case, the method of generating the target conversational expression further includes: identifying a first conversational expression element in the first original conversational expression and a second conversational expression element in the second original conversational expression, where the two belong to the same conversational expression element category; and replacing the first conversational expression element with the second in the first original conversational expression, or replacing the second with the first in the second original conversational expression, to obtain the target conversational expression. By this operation, conversational expression elements can be replaced within the same category, which is particularly advantageous for enriching conversational expressions that have no decoration element.
The present application is equally applicable to combining conversational expressions from different sources. Fig. 11 schematically illustrates an implementation scenario in which original conversational expressions from different sources are combined to obtain the target conversational expression. As shown in fig. 11, two conversational expressions from different producers, with different styles, are combined. In this case, because the overall outlines and constructions of the conversational expressions differ greatly, producers may be required to mark the positions of the conversational expression elements when uploading a conversational expression, so that two conversational expressions of different styles can be aligned. For alignable expressions, new conversational expressions can then be generated by combination. In the example of fig. 11, although the two conversational expressions have different styles, their element positions are annotated, so conversational expression elements can be replaced while the element positions are kept the same.
Because of the richness of conversation expressions, some conversation expressions cannot be aligned by their five sense organ elements, or the corresponding five sense organ elements do not exist at all. Thus, in some embodiments, the combined effect of such conversation expressions may be manually drawn in advance as a prefabricated conversation expression. When the user requests the corresponding expression combination, the corresponding prefabricated conversation expression is retrieved directly. For example, in some embodiments, the method further comprises: for every two original conversation expressions in the original conversation expression library, determining a prefabricated conversation expression corresponding to the two original conversation expressions, wherein the prefabricated conversation expression comprises a part of the conversation expression elements of each of the two original conversation expressions. Fig. 12 schematically shows the effect of a manually drawn combination of original conversation expressions whose five sense organ elements are difficult to align.
After the combined conversation expressions have been drawn, the corresponding combined conversation expression can be retrieved as follows. First, after a prefabricated conversation expression has been manually drawn from two original conversation expressions, its identification number can be determined based on the respective identification numbers of the two original conversation expressions. Specifically, each original conversation expression may be assigned a unique identification number, such as an MD5 message digest. The MD5 message digest algorithm is a cryptographic hash function that produces a 128-bit (16-byte) hash value. After a prefabricated conversation expression has been drawn, an identification number can be assigned to it: for example, the sum or the average of the MD5 values of the two original conversation expressions may be used as the identification number of the prefabricated conversation expression. After the first original conversation expression selection information and the second original conversation expression selection information are received, the identification number of the prefabricated conversation expression corresponding to the first and second original conversation expressions can be determined from their identification numbers by the same algorithm, such as averaging.
Then, based on that identification number, the prefabricated conversation expression corresponding to the first original conversation expression and the second original conversation expression is determined as the target conversation expression. In this embodiment, after the user selects the first original conversation expression and the second original conversation expression, the corresponding prefabricated conversation expression may be looked up by, for example, its MD5-derived identification number. This embodiment solves the problem that original conversation expressions whose five sense organ elements cannot be aligned cannot be automatically combined into a new conversation expression, and it offers a fast response time.
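The identification-number scheme above can be sketched as follows: each expression's MD5 digest is treated as an integer, and the pair is keyed by an order-independent combination such as the sum. The file contents and the lookup table are illustrative assumptions, not part of the original disclosure.

```python
# Hypothetical sketch of the MD5-based lookup of prefabricated expressions.
import hashlib

def expression_id(image_bytes: bytes) -> int:
    """128-bit MD5 digest of an expression's image data, as an integer."""
    return int(hashlib.md5(image_bytes).hexdigest(), 16)

def prefab_key(id_a: int, id_b: int) -> int:
    """Order-independent key: the same pair yields the same key regardless
    of which expression the user selects first."""
    return id_a + id_b

# Illustrative library: prefab_key -> pre-drawn combined expression asset.
prefab_library = {}

a = b"expression-a-pixels"
b = b"expression-b-pixels"
key1 = prefab_key(expression_id(a), expression_id(b))
key2 = prefab_key(expression_id(b), expression_id(a))
# key1 == key2: addition is commutative, so selection order does not matter.
```

Because the key is commutative, both selection orders map to the same prefabricated conversation expression, and generation reduces to a single dictionary lookup, which is what makes the response time fast.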
Fig. 13 schematically depicts an exemplary block diagram of an apparatus 1300 for generating a target conversation expression according to an embodiment of the present application. As shown in fig. 13, the apparatus for generating a target conversation expression includes: an original conversation expression presentation module 1310, a first original conversation expression selection information receiving module 1320, a second original conversation expression selection information receiving module 1330, and a target conversation expression presentation module 1340. The original conversation expression presentation module 1310 is configured to present at least two original conversation expressions in an original conversation expression region, wherein the at least two original conversation expressions include a first original conversation expression and a second original conversation expression. The first original conversation expression selection information receiving module 1320 is configured to receive first original conversation expression selection information, wherein the first original conversation expression selection information indicates that the first original conversation expression is selected for obtaining the target conversation expression. The second original conversation expression selection information receiving module 1330 is configured to receive second original conversation expression selection information, wherein the second original conversation expression selection information indicates that the second original conversation expression is selected for obtaining the target conversation expression.
The target conversational expression rendering module 1340 is configured to render the target conversational expression, wherein the target conversational expression includes a portion of conversational expression elements of the first original conversational expression and a portion of conversational expression elements of the second original conversational expression.
Fig. 14 illustrates an example system 1400 that includes an example computing device 1410 representing one or more systems and/or devices that may implement the various techniques described herein. Computing device 1410 may be, for example, a server of a service provider, a device associated with a server, a system-on-chip, and/or any other suitable computing device or computing system. The apparatus 1300 for generating a target conversation expression described above with reference to fig. 13 may take the form of computing device 1410. Alternatively, the apparatus 1300 may be implemented as a computer program in the form of application 1416.
The example computing device 1410 as illustrated includes a processing system 1411, one or more computer-readable media 1412, and one or more I/O interfaces 1413 communicatively coupled to each other. Although not shown, computing device 1410 may also include a system bus or other data and command transfer system that couples the various components to one another. A system bus may include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. Various other examples are also contemplated, such as control and data lines.
The processing system 1411 is representative of functionality to perform one or more operations using hardware. Thus, the processing system 1411 is illustrated as including hardware elements 1414 that may be configured as processors, functional blocks, and so forth. This may include implementation in hardware as an application-specific integrated circuit or other logic device formed using one or more semiconductors. The hardware elements 1414 are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors may be composed of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions may be electronically executable instructions.
Computer-readable media 1412 is illustrated as including memory/storage 1415. Memory/storage 1415 represents memory/storage capacity associated with one or more computer-readable media. The memory/storage 1415 may include volatile media (such as Random Access Memory (RAM)) and/or nonvolatile media (such as Read Only Memory (ROM), flash memory, optical disks, magnetic disks, and so forth). The memory/storage 1415 may include fixed media (e.g., RAM, ROM, a fixed hard drive, etc.) and removable media (e.g., flash memory, a removable hard drive, an optical disk, and so forth). Computer-readable medium 1412 may be configured in a variety of other ways as described further below.
One or more I/O interfaces 1413 represent functionality that allows a user to input commands and information to computing device 1410 using various input devices, and optionally also allows information to be presented to the user and/or other components or devices using various output devices. Examples of input devices include keyboards, cursor control devices (e.g., mice), microphones (e.g., for voice input), scanners, touch functions (e.g., capacitive or other sensors configured to detect physical touches), cameras (e.g., motion that does not involve touches may be detected as gestures using visible or invisible wavelengths such as infrared frequencies), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, a haptic response device, and so forth. Accordingly, computing device 1410 may be configured in a variety of ways to support user interaction as described further below.
Computing device 1410 also includes applications 1416. The application 1416 may be, for example, a software instance of the apparatus 1300 that generates the target session expression, and implements the techniques described herein in combination with other elements in the computing device 1410.
Various techniques may be described herein in the general context of software, hardware elements, or program modules. Generally, these modules include routines, programs, objects, elements, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and "component" as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.
An implementation of the described modules and techniques may be stored on or transmitted across some form of computer-readable media. Computer-readable media can include a variety of media that are accessible by computing device 1410. By way of example, and not limitation, computer-readable media may comprise "computer-readable storage media" and "computer-readable signal media".
"computer-readable storage medium" refers to a medium and/or device that can permanently store information and/or a tangible storage device, as opposed to a mere signal transmission, carrier wave, or signal itself. Thus, computer-readable storage media refers to non-signal bearing media. Computer-readable storage media include hardware such as volatile and nonvolatile, removable and non-removable media and/or storage devices implemented in methods or techniques suitable for storage of information such as computer-readable instructions, data structures, program modules, logic elements/circuits or other data. Examples of a computer-readable storage medium may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, hard disk, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture adapted to store the desired information and which may be accessed by a computer.
"computer-readable signal medium" refers to a signal bearing medium configured to hardware, such as to send instructions to computing device 1010 via a network. Signal media may typically be embodied in computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, data signal, or other transport mechanism. Signal media also include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
As previously described, the hardware elements 1414 and computer-readable media 1412 represent instructions, modules, programmable device logic, and/or fixed device logic implemented in hardware that, in some embodiments, may be used to implement at least some aspects of the techniques described herein. The hardware elements may include integrated circuits or components of a system on a chip, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and other implementations in silicon or other hardware devices. In this context, the hardware elements may be implemented as processing devices that perform program tasks defined by instructions, modules, and/or logic embodied by the hardware elements, as well as hardware devices that store instructions for execution, such as the previously described computer-readable storage media.
Combinations of the foregoing may also be used to implement the various techniques and modules described herein. Thus, software, hardware, or program modules and other program modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage medium and/or by one or more hardware elements 1414. Computing device 1410 may be configured to implement particular instructions and/or functions corresponding to software and/or hardware modules. Thus, a module may be implemented at least partially in hardware, as a module executable by computing device 1410 as software, through use of a computer-readable storage medium and/or hardware elements 1414 of the processing system. The instructions and/or functions may be executable/operable by one or more articles of manufacture (e.g., one or more computing devices 1410 and/or processing systems 1411) to implement the techniques, modules, and examples described herein.
In various implementations, the computing device 1410 may take on a variety of different configurations. For example, computing device 1410 may be implemented as a computer-like device including a personal computer, desktop computer, multi-screen computer, laptop computer, netbook, and the like. Computing device 1410 may also be implemented as a mobile appliance-like device that includes mobile devices such as mobile phones, portable music players, portable gaming devices, tablet computers, multi-screen computers, and the like. Computing device 1410 may also be implemented as a television-like device that includes devices having or connected to generally larger screens in casual viewing environments. Such devices include televisions, set-top boxes, gaming machines, and the like.
The techniques described herein may be supported by these various configurations of computing device 1410 and are not limited to the specific examples of techniques described herein. The functionality may also be implemented in whole or in part on the "cloud" 1420 using a distributed system, such as through platform 1422 as described below.
Cloud 1420 includes and/or is representative of a platform 1422 for resources 1424. Platform 1422 abstracts underlying functionality of hardware (e.g., servers) and software resources of cloud 1420. The resources 1424 may include applications and/or data that may be used when executing computer processing on servers remote from the computing device 1410. Resources 1424 may also include services provided over the internet and/or over subscriber networks such as cellular or Wi-Fi networks.
Platform 1422 may abstract resources and functions to connect computing device 1410 with other computing devices. Platform 1422 may also serve to abstract the scaling of resources to provide a corresponding level of scale to the demand encountered for the resources 1424 implemented via platform 1422. Accordingly, in an interconnected-device embodiment, implementation of the functionality described herein may be distributed throughout system 1400. For example, the functionality may be implemented in part on computing device 1410 and in part by platform 1422, which abstracts the functionality of cloud 1420.
The present application provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computing device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computing device to perform the method of generating a target session expression provided in the various alternative implementations described above.
It should be understood that for clarity, embodiments of the present disclosure have been described with reference to different functional units. However, it will be apparent that the functionality of each functional unit may be implemented in a single unit, in a plurality of units or as part of other functional units without departing from the present disclosure. For example, functionality illustrated to be performed by a single unit may be performed by multiple different units. Thus, references to specific functional units are only to be seen as references to suitable units for providing the described functionality rather than indicative of a strict logical or physical structure or organization. Thus, the present disclosure may be implemented in a single unit or may be physically and functionally distributed between different units and circuits.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various devices, elements, components or sections, these devices, elements, components or sections should not be limited by these terms. These terms are only used to distinguish one device, element, component, or section from another device, element, component, or section.
Although the present disclosure has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present disclosure is limited only by the appended claims. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. The order of features in the claims does not imply any specific order in which the features must be worked. Furthermore, in the claims, the word "comprising" does not exclude other elements, and the term "a" or "an" does not exclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.

Claims (16)

1. A method of generating a target session expression, the method comprising:
presenting at least two original conversational expressions in an original conversational expression area, wherein the at least two original conversational expressions comprise a first original conversational expression and a second original conversational expression;
receiving first original conversation expression selection information, wherein the first original conversation expression selection information indicates that the first original conversation expression is selected for obtaining the target conversation expression;
receiving second original conversation expression selection information, wherein the second original conversation expression selection information indicates that the second original conversation expression is selected for obtaining the target conversation expression;
and presenting the target conversation expression, wherein the target conversation expression comprises a part of the conversation expression elements of the first original conversation expression and a part of the conversation expression elements of the second original conversation expression.
2. The method of claim 1, wherein the method further comprises:
after receiving the first original conversation expression selection information and the second original conversation expression selection information, presenting alternative conversation expressions, wherein each alternative conversation expression comprises a different combination of conversation expression elements of the first original conversation expression and conversation expression elements of the second original conversation expression;
receiving alternative conversation expression selection information, wherein the alternative conversation expression selection information indicates that one of the alternative conversation expressions is selected as the target conversation expression.
3. The method of claim 1, wherein the conversation expression elements of the first original conversation expression comprise a first conversation expression element, the conversation expression elements of the second original conversation expression comprise a second conversation expression element and a third conversation expression element, the first conversation expression element and the second conversation expression element belong to a same conversation expression element category, and the conversation expression elements of the target conversation expression comprise the first conversation expression element and the third conversation expression element; wherein the method further comprises:
after receiving the second original conversation expression selection information, determining a relative arrangement relation of the second conversation expression element and the third conversation expression element in the second original conversation expression;
after receiving the first original conversation expression selection information, determining arrangement parameters of the first conversation expression element in the first original conversation expression;
and determining the arrangement parameters of the third session expression element in the target session expression based on the relative arrangement relation of the second session expression element and the third session expression element in the second original session expression and the arrangement parameters of the first session expression element in the first original session expression so as to obtain the target session expression.
4. The method of claim 3, wherein the relative arrangement relation of the second conversation expression element and the third conversation expression element in the second original conversation expression comprises: the size ratio of the third conversation expression element to the second conversation expression element, the degree of tilt of the third conversation expression element relative to the second conversation expression element, and the location of the geometric center of the third conversation expression element relative to the geometric center of the second conversation expression element;
the arrangement parameters of the first conversation expression element in the first original conversation expression include: in the first original conversation expression, the size of the first conversation expression element, the direction of the first conversation expression element, and the position of the geometric center of the first conversation expression element; and
wherein determining the arrangement parameters of the third conversation expression element within the target conversation expression comprises:
determining a size of the third session expression element in the target session expression based on a size ratio of the third session expression element and the second session expression element and a size of the first session expression element;
determining a direction of the third conversation expression element in the target conversation expression based on a degree of inclination of the third conversation expression element relative to the second conversation expression element and a direction of the first conversation expression element; and
determining a position of a geometric center of the third conversational expression element in the target conversational expression based on a position of the geometric center of the third conversational expression element relative to the geometric center of the second conversational expression element and a position of the geometric center of the first conversational expression element in the first original conversational expression.
5. The method of claim 3, wherein the first original conversational expression and the target conversational expression are dynamic conversational expressions comprising a plurality of expression frames, and determining placement parameters of the third conversational expression element within the target conversational expression comprises:
determining an arrangement parameter of the third conversation expression element in each expression frame in the plurality of expression frames of the target conversation expression.
6. The method of claim 1, wherein the conversation expression elements of each of the first original conversation expression, the second original conversation expression, and the target conversation expression comprise five sense organ elements, and wherein the method further comprises:
after receiving the first original conversation expression selection information, determining whether a decoration element exists in the first original conversation expression, wherein the decoration element and the five sense organ elements belong to different conversation expression element categories;
and in response to the existence of the decoration element in the first original conversation expression, adding the decoration element into the second original conversation expression so as to obtain the target conversation expression.
7. The method of claim 1, wherein the conversation expression elements of each of the first original conversation expression, the second original conversation expression, and the target conversation expression comprise five sense organ elements, and wherein the method further comprises:
after receiving the first original conversation expression selection information, determining whether a decoration element exists in the first original conversation expression, wherein the decoration element and the five sense organ elements belong to different conversation expression element categories;
and in response to the existence of the decoration element in the first original conversation expression, applying the decoration element to the remaining original conversation expressions other than the first original conversation expression in the original conversation expression region to obtain decoration conversation expressions.
8. The method of claim 7, wherein receiving second original conversational expression selection information comprises:
receiving decoration conversation expression selection information, wherein the decoration conversation expression selection information indicates that one of the decoration conversation expressions is selected as the target conversation expression.
10. The method of claim 7, wherein the remaining original conversation expressions in the original conversation expression region other than the first original conversation expression include a third original conversation expression, and the third original conversation expression includes a five sense organ element and a decoration element, wherein the method further comprises:
and changing the presentation mode of the third original conversation expression in response to the decoration element of the first original conversation expression being partially overlapped with the decoration element of the third original conversation expression.
10. The method of claim 1, wherein the method further comprises:
identifying a first conversation expression element in the first original conversation expression and identifying a second conversation expression element in the second original conversation expression, wherein the first conversation expression element and the second conversation expression element belong to the same conversation expression element category; and
replacing the first conversation expression element with the second conversation expression element in the first original conversation expression, or replacing the second conversation expression element with the first conversation expression element in the second original conversation expression, so as to obtain the target conversation expression.
11. The method of claim 1, wherein the method further comprises:
for every two original conversation expressions in an original conversation expression library, determining a prefabricated conversation expression corresponding to the two original conversation expressions, wherein the prefabricated conversation expression comprises a part of the conversation expression elements of each of the two original conversation expressions;
determining an identification number for the prefabricated conversation expression corresponding to the two original conversation expressions based on the respective identification numbers of the two original conversation expressions;
after receiving the first original conversation expression selection information and the second original conversation expression selection information, determining the identification numbers of the prefabricated conversation expressions corresponding to the first original conversation expression and the second original conversation expression based on the identification numbers of the first original conversation expression and the identification numbers of the second original conversation expression;
and determining the prefabricated conversation expression corresponding to the first original conversation expression and the second original conversation expression as the target conversation expression based on the identification number of the prefabricated conversation expression corresponding to the first original conversation expression and the second original conversation expression.
12. The method of claim 1, wherein prior to receiving the first original conversational expression selection information, the method further comprises:
generating the first original conversation expression selection information in response to detecting that the first original conversation expression is long-pressed or detecting that the first original conversation expression is moved from the original conversation expression region to a target conversation expression region for presenting the target conversation expression;
and/or,
before receiving the second original conversation expression selection information, the method further comprises:
generating the second original conversation expression selection information in response to detecting that the second original conversation expression is long-pressed or detecting that the second original conversation expression is moved from the original conversation expression region to the target conversation expression region for presenting the target conversation expression.
13. An apparatus for generating a target conversation expression, the apparatus comprising:
an original conversation expression presentation module configured to: present at least two original conversation expressions in an original conversation expression area, wherein the at least two original conversation expressions comprise a first original conversation expression and a second original conversation expression;
a first original conversation expression selection information receiving module configured to: receive first original conversation expression selection information, wherein the first original conversation expression selection information indicates that the first original conversation expression is selected for obtaining the target conversation expression;
a second original conversation expression selection information receiving module configured to: receive second original conversation expression selection information, wherein the second original conversation expression selection information indicates that the second original conversation expression is selected for obtaining the target conversation expression;
a target conversation expression presentation module configured to: present the target conversation expression, wherein the target conversation expression comprises a part of a conversation expression element of the first original conversation expression and a part of a conversation expression element of the second original conversation expression.
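The element-combination step of claim 13, in which the target expression takes part of the expression elements of each original, might be sketched as follows. The element names and the split rule (`take_from_first`) are illustrative assumptions; the patent does not fix how the elements are partitioned.

```python
from typing import Dict, Set

def compose_target(first_elements: Dict[str, str],
                   second_elements: Dict[str, str],
                   take_from_first: Set[str]) -> Dict[str, str]:
    """Combine parts of two original conversation expressions into a target.

    Elements whose names appear in `take_from_first` are taken from the first
    original expression; all remaining elements come from the second, so the
    target contains a part of each original's conversation expression elements.
    """
    target = {}
    for name in first_elements.keys() | second_elements.keys():
        if name in take_from_first and name in first_elements:
            target[name] = first_elements[name]
        elif name in second_elements:
            target[name] = second_elements[name]
        else:
            target[name] = first_elements[name]
    return target
```

For example, taking the "face" element from the first expression and every other element from the second yields a target that mixes both originals, as the claim requires.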
14. A computing device, comprising:
a memory configured to store computer-executable instructions;
a processor configured to perform the method according to any one of claims 1-12 when the computer-executable instructions are executed by the processor.
15. A computer-readable storage medium storing computer-executable instructions which, when executed, perform the method of any one of claims 1-12.
16. A computer program product comprising computer-executable instructions which, when executed by a processor, perform the method according to any one of claims 1-12.
CN202111491880.3A 2021-12-08 2021-12-08 Method and device for generating target conversation expression Pending CN116246310A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111491880.3A CN116246310A (en) 2021-12-08 2021-12-08 Method and device for generating target conversation expression
PCT/CN2022/124784 WO2023103577A1 (en) 2021-12-08 2022-10-12 Method and apparatus for generating target conversation emoji, computing device, computer readable storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111491880.3A CN116246310A (en) 2021-12-08 2021-12-08 Method and device for generating target conversation expression

Publications (1)

Publication Number Publication Date
CN116246310A true CN116246310A (en) 2023-06-09

Family

ID=86630106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111491880.3A Pending CN116246310A (en) 2021-12-08 2021-12-08 Method and device for generating target conversation expression

Country Status (2)

Country Link
CN (1) CN116246310A (en)
WO (1) WO2023103577A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150067538A1 (en) * 2013-09-03 2015-03-05 Electronics And Telecommunications Research Institute Apparatus and method for creating editable visual object
CN106033337B (en) * 2015-03-13 2019-07-16 腾讯科技(深圳)有限公司 A kind of instant messaging emoticon generation method and device
CN106447747B (en) * 2016-09-26 2021-11-02 北京小米移动软件有限公司 Image processing method and device
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 A kind of picture countenance synthesis method and terminal
CN111162993B (en) * 2019-12-26 2022-04-26 上海连尚网络科技有限公司 Information fusion method and device

Also Published As

Publication number Publication date
WO2023103577A1 (en) 2023-06-15

Similar Documents

Publication Publication Date Title
US10924676B2 (en) Real-time visual effects for a live camera view
WO2019223421A1 (en) Method and device for generating cartoon face image, and computer storage medium
US20210405831A1 (en) Updating avatar clothing for a user of a messaging system
US9679416B2 (en) Content creation tool
US11818286B2 (en) Avatar recommendation and reply
KR20220108162A (en) Context sensitive avatar captions
US11758080B2 (en) DIY effects image modification
US20240029373A1 (en) 3d captions with face tracking
US20220254143A1 (en) Method and apparatus for determining item name, computer device, and storage medium
CN110782515A (en) Virtual image generation method and device, electronic equipment and storage medium
US11636657B2 (en) 3D captions with semantic graphical elements
US10755487B1 (en) Techniques for using perception profiles with augmented reality systems
US10705720B2 (en) Data entry system with drawing recognition
US20150067538A1 (en) Apparatus and method for creating editable visual object
US20230091214A1 (en) Augmented reality items based on scan
US11430158B2 (en) Intelligent real-time multiple-user augmented reality content management and data analytics system
US11876634B2 (en) Group contact lists generation
US20230120037A1 (en) True size eyewear in real time
WO2023077965A1 (en) Appearance editing method and apparatus for virtual pet, and terminal and storage medium
US20220319082A1 (en) Generating modified user content that includes additional text content
CN116246310A (en) Method and device for generating target conversation expression
WO2022212669A1 (en) Determining classification recommendations for user content
US10126821B2 (en) Information processing method and information processing device
CN107168978B (en) Message display method and device
US20240135649A1 (en) System and method for auto-generating and sharing customized virtual environments

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40086926

Country of ref document: HK

SE01 Entry into force of request for substantive examination