CN118055091A - Expression processing method, device, equipment, storage medium and product based on session - Google Patents


Info

Publication number
CN118055091A
CN118055091A (application number CN202211442660.6A)
Authority
CN
China
Prior art keywords
expression
session
target
text
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211442660.6A
Other languages
Chinese (zh)
Inventor
陈春勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of CN118055091A publication Critical patent/CN118055091A/en
Pending legal-status Critical Current


Abstract

The application provides a session-based expression processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: displaying, in a session interface, a session message containing a first expression; and in response to a conversion operation for target content in the first expression, converting the first expression in the session interface into a second expression, the second expression being used to explain the target content in the first expression. With the method and apparatus, the first expression can be quickly converted into the second expression, the meaning of the first expression can be quickly and accurately understood through the second expression, and session efficiency is improved.

Description

Expression processing method, device, equipment, storage medium and product based on session
Technical Field
The present application relates to the field of computer technologies, and in particular, to a session-based expression processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Social applications typically provide users with Internet-based instant messaging services, allowing two or more people to exchange text messages, files, voice, and video over a network in real time. As social applications have evolved, they have permeated people's lives, and more and more people communicate through them.
In the process of conducting a session through a social application, after a user sends expressions involving internet slang or metaphors, other users often cannot accurately understand the meaning of those expressions, resulting in low session efficiency.
Disclosure of Invention
The embodiments of the present application provide a session-based expression processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can not only quickly convert a first expression into a second expression, but also allow the meaning of the first expression to be quickly and accurately understood through the second expression, thereby improving session efficiency.
The technical scheme of the embodiment of the application is realized as follows:
The embodiment of the application provides an expression processing method based on a session, which comprises the following steps:
displaying a session message containing a first expression in a session interface;
converting the first expression in the conversation interface into a second expression in response to a conversion operation for target content in the first expression;
The second expression is used for explaining the target content in the first expression.
The embodiment of the application provides a session-based expression processing apparatus, which comprises:
The display module is used for displaying the conversation message containing the first expression in the conversation interface;
the conversion module is used for responding to the conversion operation of the target content in the first expression and converting the first expression in the conversation interface into a second expression; the second expression is used for explaining the target content in the first expression.
In the above scheme, the conversion module is further configured to receive a smearing operation for the first expression;
In response to the smearing operation, covering the smeared content indicated by the smearing operation with a floating layer;
And determining the content covered by the floating layer as the target content, and determining the smearing operation as a conversion operation aiming at the target content.
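The patent does not specify how the smeared region is mapped to the target content. A minimal hypothetical sketch, assuming per-character bounding boxes for the expression text and reducing the smear (and its floating layer) to a horizontal interval, might look like this; all names and the layout data are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class CharBox:
    """One character of the first expression's text with its horizontal extent (hypothetical layout data)."""
    char: str
    x0: float
    x1: float

def target_from_smear(boxes, smear_x0, smear_x1):
    """Return the text covered by the floating layer: every character whose box overlaps the smeared interval."""
    return "".join(b.char for b in boxes if b.x1 > smear_x0 and b.x0 < smear_x1)

# Example: the letters of "YYDS" laid out left to right, 10 px each.
boxes = [CharBox("Y", 0, 10), CharBox("Y", 10, 20),
         CharBox("D", 20, 30), CharBox("S", 30, 40)]
```

Under these assumptions, a smear spanning pixels 5 to 25 selects "YYD" as the target content, while a full-width smear selects the whole text.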
In the above solution, the target content includes a first expression text, and the conversion module is further configured to convert, in response to a conversion operation for the target content in the first expression, the first expression in the session interface into a second expression, where the conversion module includes:
In response to a conversion operation for target content in the first expression, converting the first expression into a second expression composed of an image and a second expression text in the session interface;
The second expression text is used for explaining the first expression text, and the image corresponds to the second expression text.
In the above solution, the conversion module is further configured to display at least one style of candidate expression in response to a style transformation instruction for the second expression;
in response to a selection operation for a target candidate expression, a third expression having a style of the target candidate expression is generated and displayed in combination with the second expression and the target candidate expression.
In the above scheme, the conversion module is further configured to respond to an identifier adding operation for the third expression, and display an identifier adding interface corresponding to the third expression;
Based on the identification adding interface, responding to an adding instruction of a target object identification aiming at the current session object, and adding the target object identification in an association area of the third expression;
And responding to the determining instruction for the added target object identification, and generating the third expression carrying the object identification.
In the above scheme, the conversion module is further configured to display at least one object identifier of the current session object corresponding to a target identifier type in at least one identifier type in response to a trigger operation for the target identifier type;
And receiving an adding instruction aiming at the target object identifier in response to a selection operation aiming at the target object identifier in the at least one object identifier.
In the above scheme, the conversion module is further configured to display an expression generating control for the second expression, where the expression generating control is configured to generate, based on the second expression, a third expression having a style different from that of the second expression;
and responding to the triggering operation of the expression generating control, and receiving a style transformation instruction for the second expression.
In the above scheme, the conversion module is further configured to display an expression generating control for the first expression, where the expression generating control is configured to generate, based on the first expression, a fourth expression having a style different from that of the first expression;
responding to the triggering operation of the expression generating control, and displaying at least one style of candidate expression;
in response to a selection operation for the target candidate expression, generating and displaying a fourth expression having a style of the target candidate expression in combination with the target candidate expression and the first expression.
In the above scheme, the conversion module is further configured to display a collection control for the second expression, where the collection control is configured to add the second expression to an expression package of a current session object;
And responding to the triggering operation for the collection control, and adding the second expression to an expression package of the current conversation object.
In the above scheme, the target content includes a first expression text, and the target content is part of the content of the first expression, and the conversion module is further configured to display first interpretation information corresponding to the first expression text, where the first interpretation information is used to interpret the meaning of the first expression text.
In the above solution, the first interpretation information includes a web source of the first expression text, and the conversion module is further configured to display a view entry corresponding to the web source of the first expression text;
And responding to the triggering operation for the view portal, and displaying the network source of the first expression text.
In the above scheme, the conversion module is further configured to display second interpretation information corresponding to the first expression in response to an interpretation instruction for the first expression, where the second interpretation information is used to interpret the first expression.
In the above solution, the conversion module is further configured to display an object selection interface in response to a forwarding operation for the second expression, and display an expression pair including the second expression and the first expression in the object selection interface;
And transmitting the expression pair to the target object in response to the target object selected based on the object selection interface.
In the above scheme, the conversion module is further configured to send expression conversion prompt information to a sender of the session message;
the expression conversion prompt information comprises the second expression and is used for indicating a current conversation object to convert the first expression into the second expression.
In the above solution, there are a plurality of first expressions, the target content includes at least one of the plurality of first expressions, and the conversion module is further configured to, in response to a conversion operation for the target content in the first expressions, respectively convert the at least one first expression included in the target content into a corresponding second expression.
In the above solution, a plurality of consecutive session messages each containing a fifth expression are displayed in the session interface, and the conversion module is further configured to, in response to a conversion operation for the plurality of consecutive fifth expressions based on the plurality of session messages, sequentially convert each fifth expression into a sixth expression according to the sending time sequence of the fifth expressions;
wherein the sixth expression is used for explaining the corresponding fifth expression.
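As an illustration of this batch case, one way to apply the conversion in sending-time order is sketched below; the message layout and function names are illustrative, not prescribed by the patent:

```python
def convert_in_send_order(messages, convert):
    """messages: list of (send_time, expression) pairs; convert: expression -> expression.
    Converts each fifth expression into its sixth expression in sending-time order."""
    ordered = sorted(messages, key=lambda m: m[0])      # sending time sequence
    return [(t, convert(exp)) for t, exp in ordered]

# Toy run: str.upper stands in for the real expression conversion.
msgs = [(3, "yyds"), (1, "awsl"), (2, "yyds")]
result = convert_in_send_order(msgs, str.upper)
```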
In the above solution, the conversion module is further configured to obtain an expression interpretation model, and obtain an input of the expression interpretation model, where the input includes one of the following: the first expression, the target content and an analysis result obtained by performing content analysis on the first expression;
Performing paraphrasing prediction on the first expression based on the input through the expression interpretation model to obtain the paraphrasing of the first expression;
Based on the paraphrasing, the second expression is generated.
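The expression interpretation model itself is left open above. The following stand-in sketch uses a lookup table in place of the trained paraphrase predictor to show the shape of the pipeline (input, paraphrase prediction, second-expression generation); all names and glosses are illustrative assumptions:

```python
# Stand-in for the expression interpretation model: a lookup table plays the
# role of the trained paraphrase predictor (the patent leaves the model open).
SLANG_PARAPHRASES = {                       # hypothetical glosses
    "YYDS": "forever god (the greatest of all time)",
    "awsl": "ah, I'm dead (overwhelmed by cuteness)",
}

def interpret_expression(target_text):
    """Paraphrase prediction: map first-expression text to a plain paraphrase."""
    return SLANG_PARAPHRASES.get(target_text, target_text)

def generate_second_expression(first_image, target_text):
    """Build the second expression: an image paired with the explanatory text."""
    return {"image": first_image, "text": interpret_expression(target_text)}
```

In a real system the lookup table would be replaced by model inference on the first expression, the target content, or a content-analysis result, as enumerated above.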
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
And the processor is used for realizing the expression processing method based on the session, which is provided by the embodiment of the application, when executing the executable instructions stored in the memory.
Embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, cause the processor to perform the session-based expression processing method provided by the embodiments of the present application.
Embodiments of the present application provide a computer program product comprising a computer program or computer-executable instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer executable instructions from the computer readable storage medium, and the processor executes the computer executable instructions, so that the electronic device executes the expression processing method based on the session provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
By applying the embodiments of the present application, a session message containing a first expression is displayed in the session interface, and, by triggering a conversion operation on the target content in the first expression, a second expression capable of explaining that target content is displayed directly in the session interface. Therefore, when the meaning of the first expression is uncertain, expression conversion can be performed conveniently, quickly, and intuitively to obtain the second expression, and the meaning of the first expression can be quickly and accurately understood through the second expression, thereby improving session efficiency.
Drawings
Fig. 1 is a schematic architecture diagram of a session-based expression processing system 100 according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device 500 implementing a session-based expression processing method according to an embodiment of the present application;
fig. 3 is a flow chart of a session-based expression processing method according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a session interface provided by an embodiment of the present application;
fig. 5 is a schematic diagram of an expression conversion operation according to an embodiment of the present application;
fig. 6 is a schematic diagram of a display manner of a second expression according to an embodiment of the present application;
fig. 7 is a schematic diagram of a smearing operation according to an embodiment of the present application;
FIG. 8 is a schematic diagram of style conversion of a second expression according to an embodiment of the present application;
Fig. 9 is a schematic diagram of an expression generating control provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of the operation of an object identification control provided by an embodiment of the present application;
FIG. 11A is an object identification of different identification types provided by an embodiment of the present application;
FIG. 11B is a schematic diagram of a label adding result provided by an embodiment of the present application;
fig. 12 is a schematic diagram of expression forwarding provided by an embodiment of the present application;
FIG. 13 is a schematic view of an expression favorites provided by an embodiment of the present application;
fig. 14 is a schematic view showing explanatory information provided by the embodiment of the present application;
fig. 15 is a schematic diagram of expression forwarding provided by an embodiment of the present application;
FIGS. 16A-16B are schematic diagrams illustrating partial expression transitions provided by embodiments of the present application;
FIG. 17 is a schematic diagram of a plurality of continuous transition expressions provided by an embodiment of the present application;
fig. 18 is a flowchart of a second expression generating method according to an embodiment of the present application;
fig. 19 is a schematic diagram of a training method of an expression interpretation model according to an embodiment of the present application;
FIGS. 20A-20B illustrate processing for expressions according to embodiments of the present application;
FIG. 21 is a schematic view of a first expression collection provided by an embodiment of the present application;
fig. 22 is an explanatory information schematic diagram of a first expression provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third", and the like are merely used to distinguish similar objects and do not denote a specific ordering of the objects. It is to be understood that "first", "second", and "third" may be interchanged in a specific order or sequence, where permitted, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
It should be noted that the embodiments of the present application involve data related to user attributes. When the embodiments of the present application are applied to specific products or technologies, user permission or consent must be obtained, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Before the embodiments of the present application are described in further detail, the terms involved in the embodiments of the present application are explained; the following interpretations apply to these terms.
1) Expression: after social applications became popular, a popular culture formed for conveying specific emotions, such as the emotions shown on a user's face or through gestures. In practical applications, expressions can be divided into symbol expressions, text expressions, still-image expressions, animated-image expressions, video expressions, and the like; for example, an expression may use a human face expressing one of the user's various emotions as its material, or use popular stars, cartoons, video screenshots, and the like as materials, matched with a series of accompanying captions.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the operation or operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no limitation on the execution order of multiple operations performed.
3) Latent Semantic Analysis (LSA): LSA is a method used in natural language processing that extracts the "concepts" in documents and words through a "vector semantic space" and analyzes the relationships between documents and words. The basic assumption of LSA is that two words are semantically similar if they appear many times in the same documents. LSA uses a large amount of text to construct a word-document matrix: each row represents a word, each column represents a document, and each matrix element records the number of times the word appears in the document, or the word's tf-idf (term frequency-inverse document frequency) weight; the word-document matrix is therefore a sparse matrix. After the word-document matrix is constructed, singular value decomposition (SVD) is applied to it to find a low-rank approximation: column (document) information is preserved while the number of dimensions of the word vectors is reduced. The similarity of any two words can then be indicated by the cosine of their row vectors (or by their dot product after normalization): the closer the value is to 1, the more similar the two words are, and the closer the value is to 0, the more dissimilar they are. The result of dimension reduction is that different words are merged according to their semantic relatedness, for example: {(car), (tree), (flower)} -> {(1.3452*car + 0.2828*tree)}. Dimension reduction can solve part of the synonym problem and also part of the polysemy problem.
Specifically, after the original word-document matrix is reduced in dimension, the component of a polysemous word's vector corresponding to the sense it shares with semantically similar words is reinforced, while its remaining ambiguous components are attenuated.
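The LSA procedure described above (word-document matrix, truncated SVD, cosine similarity of row vectors) can be sketched on a toy corpus; the words and matrix values below are made up purely for illustration:

```python
import numpy as np

# Toy word-document matrix: rows = words, columns = documents,
# entries = occurrence counts (tf-idf weighting would also work, as noted above).
words = ["car", "automobile", "flower"]
A = np.array([
    [2.0, 3.0, 0.0, 0.0],   # "car"
    [1.0, 2.0, 0.0, 0.0],   # "automobile": co-occurs with "car"
    [0.0, 0.0, 3.0, 2.0],   # "flower": appears in different documents
])

# Truncated SVD: keep the k largest singular values for a low-rank approximation.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]              # reduced word (row) vectors

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_car_auto = cos(word_vecs[0], word_vecs[1])    # close to 1: similar words
sim_car_flower = cos(word_vecs[0], word_vecs[2])  # close to 0: dissimilar words
```

Because "car" and "automobile" occur in the same documents, their reduced row vectors are nearly parallel (cosine near 1), while "flower" occupies a different direction (cosine near 0), matching the similarity interpretation given above.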
Based on the above explanation of terms in the embodiments of the present application, the following describes the session-based expression processing system provided by the embodiments of the present application. Referring to fig. 1, fig. 1 is a schematic architecture diagram of a session-based expression processing system 100 according to an embodiment of the present application. To support an exemplary application, terminals (terminal 400-1 and terminal 400-2 are shown as examples) are connected to a server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two, and data transmission is implemented over wireless or wired links.
In some embodiments, the terminals (such as the terminal 400-1 and the terminal 400-2) are deployed with a social client (such as an instant messaging application client) that displays a session interface, and, in the session interface, displays a session message containing a first expression whose semantics are difficult to understand. The terminal receives, based on the session interface, a conversion operation for the target content in the first expression, sends an expression conversion request to the server, receives the second expression corresponding to the first expression returned by the server, and displays the second expression at the position of the first expression in the session interface, where the second expression is used to explain the target content in the first expression.
In some embodiments, the server 200 is configured to receive the expression conversion request sent by the terminal, input the first expression into an expression interpretation model, and obtain first interpretation information corresponding to the first expression; the server then converts the first expression, whose semantics are difficult to understand, into a second expression with plain, easily understood semantics, and sends the second expression and the first interpretation information of the corresponding first expression to the terminal.
In practical applications, the server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), big data, and artificial intelligence platforms. The terminals (e.g., terminal 400-1 and terminal 400-2) may be, but are not limited to, smart phones, tablet computers, notebook computers, desktop computers, smart speakers, smart televisions, smart watches, and the like. The terminals (e.g., terminal 400-1 and terminal 400-2) and the server 200 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited in this regard.
An electronic device implementing the session-based expression processing method provided by the embodiment of the application is described next. Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 implementing a session-based expression processing method according to an embodiment of the present application. The electronic device 500 may be the server 200 shown in fig. 1, and the electronic device 500 may also be a terminal capable of implementing the session-based expression processing method provided by the present application, and taking the electronic device 500 as the server shown in fig. 1 as an example, the electronic device implementing the session-based expression processing method in the embodiment of the present application is described, where the electronic device 500 provided in the embodiment of the present application includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in electronic device 500 are coupled together by bus system 540. It is appreciated that the bus system 540 is used to enable connected communications between these components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to the data bus. The various buses are labeled as bus system 540 in fig. 2 for clarity of illustration.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 550 may optionally include one or more storage devices physically located remote from processor 510.
Memory 550 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The non-volatile memory may be read only memory (ROM, Read Only Memory), and the volatile memory may be random access memory (RAM, Random Access Memory). The memory 550 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks; network communication module 552 is used to reach other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 include: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.; a presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530; the input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the session-based expression processing apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 shows a session-based expression processing apparatus 555 stored in a memory 550, which may be software in the form of a program and a plug-in, and includes the following software modules: a display module 5551 and a conversion module 5552, which are logical, and thus may be arbitrarily combined or further split according to the implemented functions, the functions of each module will be described below.
In other embodiments, the session-based expression processing apparatus provided in the embodiments of the present application may be implemented by combining software and hardware. By way of example, the apparatus may be a processor in the form of a hardware decoding processor that is programmed to perform the session-based expression processing method provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more application specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
In some embodiments, the terminal or the server may implement the session-based expression processing method provided by the embodiment of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; the Application program can be a local (Native) Application program (APP), namely a program which can be installed in an operating system to run, such as an instant messaging APP and a web browser APP; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also an applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
Based on the above description of the session-based expression processing system and the electronic device provided by the embodiments of the present application, the session-based expression processing method provided by the embodiments of the present application is described below. In actual implementation, the method may be performed by the terminal or the server alone, or by the terminal and the server cooperatively; in the following, the method is illustrated as performed by the terminal in fig. 1 alone. Referring to fig. 3, fig. 3 is a flowchart of a session-based expression processing method according to an embodiment of the present application, which is described with reference to the steps shown in fig. 3.
In step 101, the terminal displays a session message containing a first expression in a session interface.
In actual implementation, a social client, such as an instant messaging client, runs on the terminal and presents a session interface through which a user can communicate with other users; after a session message is sent, it is presented in the session interface. During communication through the session interface, a session message including a first expression may be sent. The first expression may be composed of an image and a text, of an animation and a text, of a text in a special style, or of a single image. The first expression may be an independent expression, or one expression in an expression package of a certain style, where an expression package refers to a plurality of expressions (i.e., an expression set) of one style. The first expression includes target content whose meaning is hard to understand or semantically obscure, such as homophonic puns, network hot words, picture actions, and letter abbreviations, for example the letter abbreviations "YYDS" ("eternal god") and "awsl" ("ah, I'm dying"), or homophonic puns such as "academic wall" (for "academic firm") and "dog bag" (for "egg roll"). It should be noted that the session message including the first expression may be sent by either the sending end or the receiving end.
Illustratively, referring to fig. 4, fig. 4 is a schematic diagram of a session interface provided by an embodiment of the present application, in which session messages including first expressions are presented: reference numeral 1 shows a first expression composed of a single image; reference numeral 2 shows a first expression containing text and an image, presented in a session message that combines expression and text. Here the number of expressions is two; one session message may contain one or more first expressions.
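As an illustrative aside (not part of the claimed method itself), the composition of a session message carrying first expressions, as described above, can be sketched as a minimal data model; all class and field names here are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Expression:
    # A first expression may combine an image (or animation) with text,
    # use specially styled text alone, or consist of a single image.
    image: Optional[str] = None      # image/animation resource id
    text: Optional[str] = None       # expression text, e.g. "YYDS"
    package: Optional[str] = None    # expression package (set) of one style, if any

@dataclass
class SessionMessage:
    sender: str
    body_text: str = ""
    expressions: List[Expression] = field(default_factory=list)

# One session message may carry one or more first expressions.
msg = SessionMessage(sender="object 1", body_text="hello",
                     expressions=[Expression(text="YYDS"),
                                  Expression(image="img_001")])
```

The two expressions in `msg` mirror the two cases shown by reference numerals 1 and 2 in fig. 4: one carried as a single image, the other as text.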
In step 102, the terminal converts the first expression in the session interface into a second expression in response to the conversion operation for the target content in the first expression, where the second expression is used for explaining the target content in the first expression.
In actual implementation, the first expression includes target content, which is often content that is difficult for the session object to understand; the target content may be text in the first expression or an image in the first expression. In order to make the semantics of the first expression quickly understandable, the terminal may convert the first expression into the second expression in response to the conversion operation for the target content, so that the session object quickly and accurately understands the semantics of the first expression based on the converted second expression. That is, the second expression is used to interpret the target content in the first expression. It should be noted that, when the expression conversion function first comes online, the terminal may further display guide information prompting the session object to trigger the conversion operation for the first expression.
For example, referring to fig. 5, fig. 5 is a schematic diagram of an expression conversion operation provided by an embodiment of the present application. Guide information (shown by reference numeral 2) for the first expression shown by reference numeral 1 is displayed in the session interface, guiding the session object to perform a smearing operation on the content in the first expression that cannot be understood, so as to obtain the plain meaning of the first expression. The terminal triggers a conversion operation for the first expression in response to the smearing operation of the session object on the "learning wall," and, in response to the conversion operation, converts the "learning wall" into a second expression (shown by reference numeral 3) containing the plainer "learning solid." In this way, the session object is helped to accurately grasp the meaning of the first expression, and user experience is improved.
In some embodiments, the terminal may display the second expression as follows: the terminal displays a selection interface for the display mode of the second expression; and in response to a selection operation on a target display function item among at least one display mode function item in the selection interface, displays the second expression in the display mode indicated by the target display function item.
In actual implementation, multiple display modes may be provided for the converted second expression. The second expression may be displayed directly at the presentation position of the first expression, in which case the second expression directly replaces the first expression at its original presentation position in the session interface. Alternatively, to ensure that the first expression is not covered, the second expression may be displayed at any position of the session interface where it does not cover (or only partially covers) the first expression, that is, the first expression and the corresponding second expression are presented in the session interface simultaneously. In addition, a display duration threshold may be set for the second expression, for example, the second expression is displayed for 5 seconds and vanishes afterwards.
For example, referring to fig. 6, fig. 6 is a schematic diagram of display modes of the second expression provided by an embodiment of the present application. Reference numeral 1 shows the selection function items for the display mode of the second expression: "alternate display," "simultaneous display," and "vanish after 5 seconds." "Alternate display" means that the second expression (at the position of reference numeral 3) is displayed at the presentation position of the first expression (at the position of reference numeral 2) in the session interface, that is, the second expression replaces the first expression; "simultaneous display" means that the second expression (shown by reference numeral 4) is displayed at a position of the session interface where it covers the first expression as little as possible; "vanish after 5 seconds" means that the second expression vanishes after being shown in the session interface for the preset duration (shown by reference numeral 5).
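The three display modes above can be sketched as a small dispatch function; this is a hypothetical sketch, with the mode names and the 5-second threshold taken from the example and everything else assumed:

```python
def place_second_expression(mode, first_position):
    """Return (position, replaces_first, vanish_after_seconds) for a display mode.

    mode: "alternate"    - shown at the first expression's position, replacing it
          "simultaneous" - shown elsewhere so the first expression stays visible
          "vanish"       - shown in place, then removed after 5 seconds
    """
    if mode == "alternate":
        return first_position, True, None
    if mode == "simultaneous":
        x, y = first_position
        return (x, y + 1), False, None   # simplified: a slot below the first expression
    if mode == "vanish":
        return first_position, True, 5
    raise ValueError(f"unknown display mode: {mode}")
```

A real client would also clamp the "simultaneous" slot to the interface bounds; the fixed offset here only illustrates that the first expression stays uncovered.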
In some embodiments, the terminal may determine the conversion operation for the target content as follows: the terminal receives a smearing operation for the first expression; in response to the smearing operation, covers the content indicated by the smearing operation with a floating layer; determines the content covered by the floating layer as the target content; and determines the smearing operation as the conversion operation for the target content.
In actual implementation, the terminal may receive a triggering operation (such as a smearing operation) of the user for the first expression, cover the content smeared by the smearing operation with a floating layer, take the covered content as the target content, and trigger the conversion operation for the target content in response to a completion instruction for the smearing operation. In a specific technical implementation, after receiving the smearing operation, the terminal detects the area range of the smeared region and takes the content within that range as the target content.
For example, referring to fig. 7, fig. 7 is a schematic diagram of a smearing operation provided in an embodiment of the present application, in which the user smears the target content "learning wall" in the first expression (shown by reference numeral 1). When the smearing operation is completed (for example, when the smearing input has stopped for 5 seconds), the terminal responds to the smearing operation and covers the target content with the floating layer corresponding to the smearing operation.
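The detection of the smeared area range described above might be implemented, in a much simplified form, by taking the bounding box of the smear strokes and intersecting it with known content regions of the expression; the coordinates and region labels below are hypothetical:

```python
def smear_bounding_box(stroke_points):
    """Axis-aligned bounding box (x0, y0, x1, y1) of the smeared points."""
    xs = [x for x, _ in stroke_points]
    ys = [y for _, y in stroke_points]
    return min(xs), min(ys), max(xs), max(ys)

def rects_intersect(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def detect_target_content(stroke_points, content_regions):
    """content_regions: {label: (x0, y0, x1, y1)} in expression coordinates.
    Returns the labels whose region the smear covers (the target content)."""
    box = smear_bounding_box(stroke_points)
    return [label for label, rect in content_regions.items()
            if rects_intersect(box, rect)]
```

For instance, strokes over the text band of the expression would return only the text label, leaving the picture region untouched.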
In some embodiments, the target content in the first expression includes a first expression text, and the terminal may obtain the second expression as follows: in response to the conversion operation for the target content in the first expression, the terminal converts the first expression, in the session interface, into a second expression composed of an image and a second expression text, where the second expression text is used to explain the first expression text, and the image corresponds to the second expression text.
In actual implementation, when the target content in the first expression is the first expression text, the terminal converts the first expression text into a second expression text with plainer semantics (that is, the second expression text explains the first expression text), determines an image corresponding to the second expression text, and generates the second expression by combining the second expression text with the associated image. In a specific implementation, the terminal performs image recognition on the target content to obtain expression information of the first expression, which may include scene, character, emotion, action, connotation, and other information; then performs semantic analysis on the expression information through a semantic model to obtain, as the second expression text, the plain text with the highest semantic similarity to the metaphorical first expression text; meanwhile, images matching the semantics of the second expression text may be screened from an existing image library or the Internet based on an existing text-image matching method; finally, the second expression text and the matched image are fused to generate a second expression composed of the image and the second expression text.
For example, referring to fig. 7, reference numeral 1 shows the first expression text "academic wall" and reference numeral 2 shows the image content of the first expression. The terminal receives a conversion operation for "academic wall," converts "academic wall" into the second expression text "academic intensity," and synchronously converts the original image content into the semantically plainer image content "pig intensity" (reference numeral 3 in fig. 7) associated with "academic intensity"; the resulting second expression is displayed at the presentation position of the first expression. In this way, the user can further and intuitively understand the true meaning of the expression, and user experience is improved.
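The conversion pipeline in the preceding paragraphs (semantic analysis selecting the plain text with highest similarity, then text-image matching) can be sketched with a toy token-overlap score standing in for the semantic model; all glosses and image ids below are hypothetical:

```python
def token_similarity(a, b):
    """Toy stand-in for a semantic similarity model: Jaccard overlap of tokens."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def convert_to_second_expression(first_text, candidate_glosses, image_index):
    """candidate_glosses: {plain_text: descriptive phrase used for matching};
    image_index: {image_id: descriptive phrase}.
    Returns (second_expression_text, matched_image_id)."""
    # Step 1: pick the plain text with the highest similarity to the first text.
    second_text = max(candidate_glosses,
                      key=lambda g: token_similarity(first_text, candidate_glosses[g]))
    # Step 2: screen the image whose description best matches the second text.
    image_id = max(image_index,
                   key=lambda i: token_similarity(second_text, image_index[i]))
    return second_text, image_id
```

A production system would replace `token_similarity` with the semantic model and the image index with library or Internet retrieval; the two-stage structure (text conversion, then image matching and fusion) is what the description specifies.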
In some embodiments, the terminal may further implement style transformation of the second expression as follows: in response to a style transformation instruction for the second expression, the terminal displays candidate expressions of at least one style; and in response to a selection operation for a target candidate expression, generates and displays, by combining the second expression and the target candidate expression, a third expression having the style of the target candidate expression.
In actual implementation, after the terminal converts the first expression into a second expression that is easier to understand, style transformation for the second expression may also be implemented. In response to a style transformation instruction for the second expression, the terminal may display candidate expressions of at least one style type through a floating layer, where the style types may include, for example, cute-pet style, cute style, rich style ("rich wind"), comic-strip style, character style, cartoon style, cute-baby style, and the like. Alternatively, expression packages of different style types may be displayed first, and after a target expression package is clicked, a plurality of candidate expressions of the style type indicated by that package are displayed. In response to a selection operation for a target candidate expression, the terminal recognizes commonalities between the expression style of the target candidate expression and that of the second expression, such as text style and picture action; combines image recognition and image searching to acquire, in the expression style of the target candidate expression, an image, text, or expression package matching the semantics of the second expression; selects the material (text, image, etc.) with the highest similarity as the target material; and finally fuses the target material with the second expression to generate a third expression having the style of the target candidate expression. In this way, personalized customization of the second expression is realized, session efficiency and interest are improved, and communication barriers are reduced.
Referring to fig. 8, fig. 8 is a schematic diagram of style transformation of the second expression provided by an embodiment of the present application. For the second expression shown by reference numeral 1 (converted from the first expression "academic wall" in fig. 1), a plurality of candidate expressions of different styles (shown by reference numeral 2) are displayed in a floating layer in the session interface. The terminal detects a selection operation of the session object for the target candidate expression of "rich wind" style (shown by reference numeral 3); the server then pulls, based on big data, the image or expression resources in the expression package that match "academic robust" for screening, and selects the material with the highest relevance, such as "barbed rose," so that a new third expression of the same style is generated through the combination of the rose and the text. That is, the style "rich wind" corresponding to the target candidate expression is applied to the second expression, and the style transformation of the second expression is realized to obtain the third expression.
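The material selection step of the style transformation (screening materials of the target style and keeping the one most relevant to the second expression, then fusing) can be sketched as follows; the relevance function and material tags are illustrative assumptions:

```python
def tag_overlap(text, tags):
    """Toy relevance score: number of tokens shared between the expression
    text and a material's descriptive tags (case-insensitive)."""
    return len(set(text.lower().split()) & {t.lower() for t in tags})

def transform_style(second_expression_text, style_materials):
    """style_materials: [(material_id, tags)] drawn from the target style's
    expression package. Picks the highest-relevance material and fuses it
    with the second expression text to form the third expression."""
    material_id, _ = max(style_materials,
                         key=lambda m: tag_overlap(second_expression_text, m[1]))
    return {"text": second_expression_text, "material": material_id}
```

In the fig. 8 example, a rose-themed material from the "rich wind" package would win the relevance screening and be fused with the "academic robust" text.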
Describing the manner in which the style transformation instruction is obtained: in some embodiments, the terminal may trigger the style transformation instruction as follows. The terminal displays an expression generation control for the second expression, where the expression generation control is used to generate, based on the second expression, a third expression with a style different from that of the second expression; and in response to a triggering operation on the expression generation control, receives a style transformation instruction for the second expression.
In actual implementation, the terminal may display an expression generation control and, in response to a triggering operation on the control, generate a style transformation instruction so as to generate a third expression with a style different from that of the second expression; that is, an expression of a new style is generated, while the true semantics of the second expression and the third expression remain the same.
For example, referring to fig. 9, fig. 9 is a schematic diagram of an expression generation control provided by an embodiment of the present application. In the figure, the converted second expression (shown by reference numeral 1) is displayed at the presentation position of the first expression. When the user presses the second expression for a duration reaching a duration threshold, at least one operation function item for the second expression (shown by reference numeral 2) is displayed in a floating layer, where "generate new expression" is the expression generation control; the terminal receives a clicking operation on "generate new expression" and generates a style transformation instruction for the second expression.
In some embodiments, the terminal may further implement style transformation of the first expression as follows: the terminal displays an expression generation control for the first expression, where the control is used to generate, based on the first expression, a fourth expression with a style different from that of the first expression; in response to a triggering operation on the control, displays candidate expressions of at least one style; and in response to a selection operation for a target candidate expression, generates and displays, by combining the target candidate expression and the first expression, a fourth expression having the style of the target candidate expression.
In actual implementation, the terminal may also display an expression generation control and, in response to a triggering operation on the control, generate a style transformation instruction for the first expression so as to generate a fourth expression with a style different from that of the first expression; that is, an expression of a new style is generated, while the true semantics of the first expression and the fourth expression remain the same.
In some embodiments, the terminal may further implement an identifier adding function for the third expression as follows: in response to an identifier adding operation for the third expression, the terminal displays an identifier adding interface corresponding to the third expression; based on the identifier adding interface, and in response to an adding instruction for a target object identifier of the current session object, adds the target object identifier in an association area of the third expression; and in response to a determination instruction for the added target object identifier, generates the third expression carrying the object identifier.
In actual implementation, after generating the third expression with the target style, the terminal may further provide the session object with a function of adding a dedicated object identifier. In response to a triggering operation on the identifier adding control, the terminal may directly acquire the default object identifier of the session object, or may display object identifiers of at least one identifier type for the session object to select an object identifier meeting its own requirements. After determining the object identifier of the session object, the terminal adds the object identifier in an association area of the third expression (such as the upper left corner or the upper right corner), and finally fuses the object identifier with the third expression to generate and display the third expression carrying the object identifier. In this way, personalized expression generation is provided.
For example, referring to fig. 10, fig. 10 is a schematic operation diagram of an object identifier control provided by an embodiment of the present application. In the figure, reference numeral 1 shows the third expression after style transformation, that is, the "academic firm" of "rich wind" style; in response to a long-press operation on this expression, the terminal displays a control "add identifier" for adding an object identifier, and in response to a clicking operation on "add identifier," triggers the identifier adding operation for the third expression.
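The placement of the object identifier in an association area (such as the upper left or upper right corner) described above can be sketched as a simple coordinate computation; the sizes and margin below are assumptions:

```python
def identifier_position(expression_size, identifier_size, corner="top_right", margin=2):
    """Top-left coordinate for overlaying the object identifier on the
    third expression, in expression pixel coordinates (origin at top-left)."""
    w, _h = expression_size
    iw, _ih = identifier_size
    if corner == "top_left":
        return (margin, margin)
    if corner == "top_right":
        return (w - iw - margin, margin)
    raise ValueError(f"unsupported association area: {corner}")
```

The fusion step would then composite the identifier image at this coordinate onto the third expression.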
Describing the manner in which the adding instruction for the object identifier is triggered: in some embodiments, at least one identifier type is displayed in the identifier adding interface, based on which the terminal receives the adding instruction as follows. In response to a triggering operation for a target identifier type among the at least one identifier type, the terminal displays at least one object identifier of the current session object corresponding to the target identifier type; and in response to a selection operation for a target object identifier among the at least one object identifier, receives an adding instruction for the target object identifier.
In actual implementation, the terminal may set, for the session object, object identifiers of at least one identifier type, where the identifier types may include an artistic signature of the session object, a virtual stamp, and the like. When the target type is the artistic signature, artistic signatures of at least one style may be displayed; when the target type is the virtual stamp, virtual stamps of different fonts and formats may be displayed.
For example, referring to fig. 11A, fig. 11A shows object identifiers of different identifier types provided in an embodiment of the present application, where reference numeral 1 shows two identifier types: an artistic signature (shown by reference numeral 1-1) and a virtual stamp (shown by reference numeral 1-2). Reference numeral 2 shows artistic signatures of various styles; in response to a selection operation for any one of them, an artistic signature of that style can be added to the third expression of the session message. Reference numeral 3 shows various types of virtual stamps; in response to a selection operation for any one of them, a virtual stamp of that type can be added to the third expression of the session message. Referring to fig. 11B, fig. 11B is a schematic diagram of an identifier adding result provided by an embodiment of the present application: when the style of artistic signature selected in fig. 11A is added to the third expression, the object identifier shown by reference numeral 2 in the figure is finally synthesized; when the style of virtual stamp selected in fig. 11A is added, the object identifier shown by reference numeral 3 is finally synthesized.
It should be noted that, referring to fig. 10, the operation function items for the third expression may further include "forward," "share," "reference," "collect," and "generate new expression" (shown by reference numeral 2), where "forward" refers to forwarding to other session objects, "reference" refers to referencing the current third expression in the session, and "generate new expression" means that the session object may edit the expression style of the third expression again, that is, trigger a style transformation instruction for the current third expression.
In some embodiments, the terminal may further implement a forwarding operation for the third expression as follows: in response to the forwarding operation for the third expression, the terminal displays an object selection interface including at least one candidate session object; and in response to a selection operation for a target session object, forwards the third expression to the session interface of the target session object.
In actual implementation, after generating and displaying the third expression, the terminal may further forward the current, semantically plainer third expression to other session objects based on the received forwarding operation for the third expression.
For example, referring to fig. 12, fig. 12 is a schematic diagram of expression forwarding provided in an embodiment of the present application. After generating and displaying the third expression, the terminal responds to a pressing operation on the third expression; when the pressing duration reaches a duration threshold (e.g., 2 seconds), at least one function item for the third expression is displayed, such as "forward," "collect," and "share" (the operation function items shown in fig. 10), and the terminal, in response to a selection operation for a target function item, performs the corresponding target operation on the third expression. When the target function item is "forward," in response to a clicking operation on "forward," a selection interface of session objects shown by reference numeral 1 is displayed; after the target session object ("object 5") is selected, the third expression (shown by reference numeral 1 in fig. 10) is forwarded to the session interface of the target session object.
Describing the collection operation for the second expression: in some embodiments, the terminal may also collect the second expression as follows. The terminal displays a collection control for the second expression, where the collection control is used to add the second expression to the expression package of the current session object; and in response to a triggering operation on the collection control, adds the second expression to the expression package of the current session object.
In actual implementation, the terminal may provide session objects with a collection function for the second expression; each session object has, at its respective terminal, an expression package dedicated to storing collected expressions (that is, all expressions collected by the session object based on the collection control are added to this expression set).
Illustratively, referring to fig. 9, a "collect" control for the second expression is displayed; in response to a clicking operation on "collect," the second expression is collected into the expression favorites, which is an expression package dedicated to storing collected expressions. Referring to fig. 13, fig. 13 is a schematic diagram of the expression favorites according to an embodiment of the present application, in which the second expression shown by reference numeral 1 is collected into the expression favorites shown by reference numeral 2.
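The expression favorites described above behave as a per-object expression set, so collecting the same expression twice must not duplicate it; a minimal sketch, with all names assumed:

```python
class ExpressionFavorites:
    """The expression package dedicated to storing a session object's
    collected expressions; behaves as an ordered set."""
    def __init__(self, owner):
        self.owner = owner
        self._expressions = []

    def collect(self, expression_id):
        # An expression set holds no duplicates.
        if expression_id not in self._expressions:
            self._expressions.append(expression_id)

    def contents(self):
        return list(self._expressions)
```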
Describing the display manner of the interpretation information of the expression: in some embodiments, the target content includes a first expression text and is part of the content of the first expression, based on which the terminal may further display interpretation information as follows. The terminal displays first interpretation information corresponding to the first expression text, where the first interpretation information is used to interpret the meaning of the first expression text.
In actual implementation, the target content of the first expression includes the first expression text, and in response to an acquisition operation for the interpretation information, the terminal may acquire interpretation information interpreting the first expression text and display it through a floating layer; after the second expression is acquired, the terminal may also display, through the floating layer, related expression paraphrases pushed based on the second expression. That is, the interpretation information may directly interpret the first expression text, or may be the interpretation information of the second expression text displayed after the plainer second expression text is obtained. In addition, the terminal may play the first interpretation information in a voice playing mode, so that various groups of people (such as blind people or people with weak eyesight) can conveniently access it.
For example, referring to fig. 14, fig. 14 is a schematic diagram of displaying interpretation information provided by an embodiment of the present application. The terminal receives a pressing operation of the session object on the first expression; when the terminal detects that the pressing duration reaches a preset duration threshold, it retrieves the interpretation information of the true meaning of the first expression based on big data and image recognition, and displays the interpretation information in the floating layer shown by reference numeral 1. Regarding the setting of the duration threshold: for the pressing operation on the first expression, different pressing duration thresholds may trigger different operations. For example, when the duration reaches a first duration, the interpretation operation for the first expression is triggered and the interpretation information is displayed; when the duration reaches a second duration, at least one operation function item for the first expression, such as forwarding and sharing, is invoked.
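The duration-threshold dispatch just described (a first duration triggers interpretation display, a longer second duration invokes the operation function items) can be sketched as follows; the concrete threshold values are assumptions:

```python
def action_for_press(duration_seconds, first_threshold=1.0, second_threshold=3.0):
    """Map the pressing duration on a first expression to a UI action.

    Durations past the longer (second) threshold take priority, so the
    checks run from longest to shortest.
    """
    if duration_seconds >= second_threshold:
        return "show_operation_items"     # e.g. forwarding and sharing
    if duration_seconds >= first_threshold:
        return "show_interpretation"      # display the interpretation information
    return "no_action"
```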
In some embodiments, the first interpretation information includes a network source of the first expression text, and the terminal may further implement the following functions: the terminal displays a view entry corresponding to the network source of the first expression text; and in response to a triggering operation on the view entry, presents the network source of the first expression text.
In actual implementation, the first interpretation information may include the network source of the first expression text. The terminal receives the true meaning and the network source of the first expression returned by the server, and may display them in a form similar to an encyclopedia entry, thus providing a convenient entry for viewing the related video or graphic information.
Illustratively, referring to fig. 14, the network source of the first expression text (the network source of "academic wall" shown by reference numeral 2) is displayed in the session interface through the floating layer, and a view entry of the network source (the "see" view entry shown by reference numeral 3) may also be displayed.
In some embodiments, the terminal may also display interpretation information of the expression as follows: in response to an interpretation instruction for the first expression, the terminal displays second interpretation information corresponding to the first expression, where the second interpretation information is used to interpret the first expression.
In actual implementation, when the terminal detects a pressing operation of the session object on the first expression (such as a long-press operation or a smearing operation) and an interpretation instruction for the first expression is generated, the terminal sends an interpretation information acquisition request to the server based on the interpretation instruction, receives the (second) interpretation information for the first expression sent by the server, and displays it in the session interface. In addition, the terminal may play the interpretation information in a voice playing mode, so that various groups of people (such as blind people or visually impaired people) can conveniently access it; in this way, the session object can view the interpretation information at any time.
Describing the forwarding operation for the second expression, in some embodiments, the terminal may forward the expression by: the terminal responds to the forwarding operation aiming at the second expression, displays an object selection interface, and displays an expression pair comprising the second expression and the first expression in the object selection interface; in response to the target object selected based on the object selection interface, the expression pairs are sent to the target object.
In practical implementation, after converting the first expression into the second expression, the terminal further provides an expression forwarding function for the session object. The forwarding modes include independent forwarding and combined forwarding: independent forwarding forwards either the first expression or the second expression alone, and combined forwarding forwards the first expression and the second expression together. Expression forwarding is performed in response to a selection operation for a target forwarding mode: when independent forwarding of the first expression is selected, the terminal forwards only the first expression; when independent forwarding of the second expression is selected, the terminal forwards only the second expression; and when combined forwarding is selected, the terminal forwards the first expression and the second expression as an expression pair. The terminal displays an object selection interface, a target object is selected in the object selection interface, and the expression determined by the target forwarding mode (at least one of the first expression and the second expression) is forwarded to the target session object.
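The three forwarding modes reduce to a small dispatch over the selected mode; a minimal sketch, treating expressions as opaque values and using hypothetical mode names:

```python
def forward_expressions(mode, first, second):
    """Return the list of expressions to send for the chosen forwarding mode."""
    if mode == "first_only":    # independent forwarding of the first expression
        return [first]
    if mode == "second_only":   # independent forwarding of the second expression
        return [second]
    if mode == "combined":      # forward the first and second expression as a pair
        return [first, second]
    raise ValueError(f"unknown forwarding mode: {mode}")
```

The returned list would then be sent to the target object selected in the object selection interface.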
For example, referring to fig. 15, fig. 15 is a schematic diagram of forwarding an expression provided by an embodiment of the present application. In the figure, a forwarding function item "forward" is displayed in response to a pressing operation on the second expression; a session object selection list (reference numeral 1 in the figure) is displayed in response to a click on "forward"; a forwarding mode selection interface (reference numeral 2 in the figure) containing the forwarding mode function items (reference numeral 3 in the figure) is displayed in response to a selection operation for the target session object ("object 5"); and an expression pair consisting of the first expression and the second expression is forwarded to the selected target session object (the session interface shown by reference numeral 4 in the figure) in response to a selection operation for the "combined forwarding" function item.
In some embodiments, after the terminal converts the first expression in the session interface into the second expression, the following functions may be further implemented: the terminal sends expression conversion prompt information to a sender of the conversation message; the expression conversion prompt information comprises a second expression and is used for indicating the current conversation object to convert the first expression into the second expression.
In practical implementation, after the first expression is converted into the second expression, an expression conversion prompt message may also be sent to the sender of the first expression, for example: "Your 'academic wall' expression has been converted, please be aware."
Describing the conversion of the first expressions, in some embodiments, the number of the first expressions in one conversation message is a plurality of first expressions, and the target content includes at least one of the plurality of first expressions, the terminal may implement expression conversion by: and the terminal responds to the conversion operation aiming at the target content in the first expressions, and converts at least one first expression included in the target content into a corresponding second expression respectively.
In practical implementation, one session message may include a plurality of first expressions. When the session object performs the smearing operation, only some of the plurality of first expressions may be smeared, and each smeared first expression is converted separately to obtain the corresponding second expression.
Fig. 16A-16B are schematic views of partial expression conversion provided by an embodiment of the present application. Referring to fig. 16A, the session message shown by reference numeral 1 includes two first expressions; the terminal receives a smearing operation of the session object on the first expression "academic wall" and triggers a conversion operation for that first expression, obtaining the second expression "academically strong" shown by reference numeral 2 in the figure. Referring to fig. 16B, in response to a smearing operation on all the first expressions in the session message shown by reference numeral 3, the terminal converts the two first expressions respectively to obtain the corresponding second expressions (shown by reference numeral 4 in the figure).
In some embodiments, a plurality of continuous session messages including the fifth expression are displayed in the session interface, and the terminal may further implement expression conversion by: the terminal responds to the conversion operation aiming at a plurality of continuous fifth expressions based on a plurality of session messages, and sequentially converts each fifth expression into a sixth expression according to the sending time sequence of each fifth expression; and the sixth expression is used for explaining the corresponding fifth expression.
In practical implementation, the session page may include a plurality of continuous session messages each containing a fifth expression, where the fifth expression is likewise an expression whose semantics are not sufficiently plain. To simplify operation for the session object, the terminal supports a continuous smearing operation over the plurality of fifth expressions, triggering a conversion operation for each smeared fifth expression; expression conversion is then performed sequentially according to the sending time of each fifth expression, obtaining a sixth expression that explains the corresponding fifth expression. It should be noted that each converted sixth expression can be presented at the position of the corresponding fifth expression, or presented from top to bottom in the session interface in conversion order and made to disappear after being presented for a certain time; in this way, plain semantic information is displayed for the session object to understand, and display space of the session page is also saved.
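The send-time ordering described above can be sketched as a sort followed by per-message conversion; the message structure and the dict-based converter below are hypothetical illustrations:

```python
def convert_in_send_order(messages, convert):
    # Sort the smeared messages by sending time, then convert each fifth
    # expression in that order into its sixth (explanatory) expression.
    ordered = sorted(messages, key=lambda m: m["sent_at"])
    return [(m["sent_at"], convert(m["expression"])) for m in ordered]

# Toy converter: a lookup table standing in for the expression interpretation model.
lookup = {"YYDS": "forever the god", "awsl": "ah, I'm dying (of cuteness)"}
result = convert_in_send_order(
    [{"sent_at": 2, "expression": "awsl"},
     {"sent_at": 1, "expression": "YYDS"}],
    lookup.get,
)
```

The result keeps the sending order, so the sixth expressions can be presented top to bottom as described.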
For example, referring to fig. 17, fig. 17 is a schematic diagram of continuous conversion of multiple expressions provided by an embodiment of the present application. Two continuous session messages each include a first expression (shown by reference numeral 1). The terminal receives a smearing operation of the session object over the two continuous first expressions, triggers an expression conversion operation for the two first expressions, and obtains the two converted second expressions shown by reference numeral 2. In this way, a plurality of first expressions can be smeared in one continuous operation, improving the human-computer interaction experience.
In actual implementation, when the terminal displays the first expression in the session interface for the first time, prompt information for explaining the first expression can be synchronously displayed so as to guide the session object to view the paraphrase information of the first expression according to the prompt information.
For example, referring to fig. 5, the session page in the figure includes a first expression, and a prompt message for the expression (shown by reference numeral 2 in the figure) is displayed alongside it through a floating layer. The prompt message reads "long press to view the expression meaning"; the session object can long-press the first expression, and after detecting the long-press operation, the terminal displays the interpretation information for the first expression.
In some embodiments, referring to fig. 18, fig. 18 is a flowchart of a second expression generating method according to an embodiment of the present application, and steps 201 to 203 are shown in fig. 18 to illustrate a second expression generating process.
Step 201, the terminal obtains an expression interpretation model, and obtains input information for the expression interpretation model, where the input information includes one of the following: the first expression, the target content, and an analysis result obtained by performing content analysis on the first expression.
In actual implementation, the terminal can acquire the interpretation information of the first expression through an expression interpretation model, whose model output is the interpretation information of the first expression. The model input includes at least the following types: 1) the first expression itself, in which case the expression interpretation model includes at least an image recognition layer for recognizing the first expression to determine its content and style attributes; 2) an analysis result obtained by performing content analysis on the first expression through image recognition and semantic analysis, such as text content; 3) the target content in the first expression.
Step 202, interpretation prediction is performed on the first expression based on the input information through the expression interpretation model to obtain the paraphrase of the first expression.
In practical implementation, the expression interpretation model is trained using a plurality of expressions of various styles as training samples, where the label information of each training sample is a plain and accurate interpretation of the sample expression. The terminal constructs the model input corresponding to the target type, feeds it into the expression interpretation model, and performs interpretation prediction on the first expression to obtain the paraphrase of the first expression.
Illustratively, the input information is the target content "academic wall" in the first expression in fig. 4, and the explanation of "academic wall" is obtained through the expression interpretation model.
Step 203, generating a second expression according to the paraphrasing of the first expression.
In practical implementation, according to the paraphrase of the first expression, the terminal acquires, through semantic understanding, the text content with the highest similarity to the paraphrase of the first expression, screens image content with a high degree of association with the paraphrase semantics from a massive image library, and combines the text content and the image content to generate the second expression. The second expression is used for explaining the first expression, and its semantics are plainer and easier to understand than those of the first expression.
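Steps 201-203 can be sketched end to end: pick the text closest to the paraphrase, pick the image most associated with that text, and combine the two. The token-overlap metric, the library contents, and all names below are hypothetical simplifications of the semantic-understanding and image-screening steps:

```python
def token_overlap(a, b):
    # Jaccard similarity over whitespace tokens: a crude stand-in
    # for the semantic-similarity measure.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def generate_second_expression(paraphrase, text_library, image_library):
    # Step 203: choose the text with the highest paraphrase similarity,
    # then the image most associated with that text, and combine them.
    text = max(text_library, key=lambda t: token_overlap(paraphrase, t))
    image = max(image_library, key=lambda img: token_overlap(text, img["tags"]))
    return {"text": text, "image": image["name"]}

texts = ["academically strong", "have a good meal"]
images = [{"name": "books.png", "tags": "strong academically study"},
          {"name": "noodles.png", "tags": "meal food"}]
expr = generate_second_expression("academically strong person", texts, images)
```

A production system would replace `token_overlap` with a learned semantic model and draw the candidates from the massive text and image libraries mentioned above.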
In some embodiments, referring to fig. 19, fig. 19 is a flowchart of a training method for the expression interpretation model according to an embodiment of the present application, and the training process of the expression interpretation model is described in conjunction with the steps shown in fig. 19.
Step 301, a terminal acquires an expression interpretation model and a sample expression, wherein the sample expression carries tag information, and the tag information is interpretation information of the sample expression.
In actual implementation, sample expressions whose semantics are not sufficiently plain are selected from a massive expression library, and the correct interpretation information of each sample expression is annotated as its label.
Step 302, interpretation prediction is performed on the sample expression through the expression interpretation model to obtain predicted interpretation information of the sample expression.
In actual implementation, the terminal acquires a local expression interpretation model to be trained, and performs interpretation prediction on the sample expression to obtain prediction interpretation information of the sample expression.
Step 303, determining the difference between the predictive interpretation information and the label information, and updating the model parameters of the expression interpretation model based on the difference.
In actual implementation, the difference between the predictive interpretation information and the label information is determined, corresponding loss is determined based on a preset loss function, and model parameters of the expression interpretation model are updated according to the loss.
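The update rule of steps 301-303 (predict, measure the difference against the label, update parameters from the loss) can be illustrated with a one-parameter toy model and a squared-error loss. The real expression interpretation model would be a deep network, so everything below is a didactic stand-in:

```python
def train(samples, lr=0.1, epochs=100):
    # samples: (input, label) pairs; the numeric label stands in for the
    # annotated interpretation information of a sample expression.
    w = 0.0                             # single model parameter
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x                # interpretation prediction (step 302)
            grad = 2 * (pred - y) * x   # d/dw of the loss (pred - y)**2 (step 303)
            w -= lr * grad              # parameter update based on the loss
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])     # converges toward w = 2
```

The loop mirrors the flowchart: each pass predicts, computes the loss from the prediction-label difference, and updates the model parameter accordingly.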
By applying the embodiment of the present application, the meaning of the first expression is identified in response to the triggering operation for the first expression, and the first expression is converted into a second expression that is easier for the session object to understand. Therefore, when the session object is unsure of the meaning of the first expression, the conversion can be performed conveniently, quickly and intuitively, and the meaning conveyed by the first expression can be understood more accurately based on the converted second expression. On this basis, the session object can also collect the converted second expression for subsequent use; since the semantics of the second expression are plainer and easier to understand, communication is smoother, especially when the second expression is sent to session objects with a similar level of understanding. Further, an explanation of the expression can also be provided so that the session object can quickly view and understand it. In addition, a style conversion operation for the second expression can be supported, so that the session object can personalize the expression on the basis of a plain understanding of it, which improves communication efficiency and interest, reduces communication barriers between session objects, and improves the human-computer interaction experience.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In the related art, the network contains a very large number of memes, which update and iterate quickly. An ordinary person sometimes cannot easily understand the inner meaning of the memes in some expression packages. Moreover, the expression packages associated with keywords are too monotonous, and a new expression package cannot be customized according to one's own preferences.
Based on the above, the embodiment of the present application provides a session-based expression processing method, in which a conversion operation for an original expression (namely, the first expression above) is triggered by a smearing operation on the original expression, obtaining a new expression with plainer semantics (namely, the second expression above). Therefore, when the user is unsure of the meaning of the original expression in a session message, the conversion can be performed conveniently, quickly and intuitively, and the meaning conveyed by the original expression can be understood more accurately based on the converted new expression.
First, the original expression is described. The original expression is an expression whose semantics are not sufficiently plain and that most people cannot understand (hereinafter collectively referred to as the first expression, the same as the first expression above). Such hard-to-understand expressions are typically of the following types: homophonic memes, network hotwords, picture actions and letter shorthand. For example, letter shorthand is a kind of expression package favored by many young people but generally hard for middle-aged and elderly people to understand, because only the first letter of each pinyin syllable is kept and the amount of information conveyed is small, for example: YYDS ("forever the god") and awsl ("ah, I'm dying"). Similarly, homophonic memes are not easily understood because they replace a phrase with same-sounding words for physical objects, such as "academic wall" (homophone of "academically strong") and "dog pockets" (homophone of "egg rolls"). The expression processing method provided by the embodiment of the present application can convert such an expression into a new, plain expression (hereinafter referred to as the second expression, the same as the second expression above).
Next, the session-based expression processing provided by the embodiment of the present application is described from the product and technology sides. In practical applications, taking a session conducted on an instant messaging client (hereinafter referred to as the client) as an example, many young people often send expressions composed of network memes during a session. For users who do not follow network memes, such expressions are difficult to understand, and an easy-to-understand second expression needs to be obtained by converting the hard-to-understand first expression. Fig. 20A-20B show a processing manner for expressions according to an embodiment of the present application, described with reference to the steps shown in fig. 20A.
Step 501, the client displays the first expression in the session interface, receives the user's smearing operation on the first expression, triggers an expression conversion operation for the first expression, and starts the expression conversion to generate an easy-to-understand second expression.
It should be noted that the second expression is obtained by performing expression conversion on the first expression; the two express the same semantics, but the semantics of the second expression are plainer and easier to understand, while the first expression is used more frequently in sessions.
In actual implementation, the client receives the smearing operation on the first expression and triggers the conversion operation for the first expression. After the expression conversion operation is started, the client detects the coordinate range of the smeared area and returns the coordinates of the smeared area to the server. The server performs image recognition on the content of the smeared area to obtain expression information of the first expression, which may include information such as scene, characters, emotion, action and connotation, and converts the expression information of the first expression into plainer text information through semantic understanding.
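Detecting the coordinate range of the smeared area can be sketched as taking the bounding box of the touch points; the point format is a hypothetical illustration:

```python
def smear_region(points):
    """Bounding box (x_min, y_min, x_max, y_max) of the smeared touch points,
    i.e. the coordinate range the client would return to the server."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```

The server would then crop the expression image to this box before running image recognition on it.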
For example, referring to fig. 5, a session interface is shown in the figure, in which there is a first expression that the user finds hard to understand (reference numeral 1 in the figure). After the user smears the first expression, at the presentation position of the first expression, the client displays a second expression (reference numeral 3 in the figure) generated by the server based on big data and image recognition. In the second expression, the hard-to-understand content of the first expression is replaced with plain, immediately understandable text. For example, the user does not understand what the "academic wall" expression means, and the system can translate it directly into the plain meaning "academically strong" and match it with the image corresponding to the first expression, generating the corresponding plain expression package. Based on the generated plain expression package, the relevant definitions are pushed to the user, so that the user can conveniently and intuitively understand the real meaning behind the expression package and its video source. At the same time, the user may also choose to add the newly generated plain second expression to favorites or to generate a third expression with a new style based on the second expression.
Describing the image recognition manner for the first expression: AI-based image recognition is mainly implemented through a convolutional neural network. An advantage of this neural network is that it exploits the principle that "neighboring pixels in the same image are strongly correlated and strongly similar"; specifically, two adjacent pixels in an image are more correlated than two separated pixels. The whole image recognition process includes information acquisition, preprocessing, feature extraction and selection, classifier design, and classification decision. 1) Information acquisition means that information such as light or sound is converted into electrical information by a sensor; that is, the basic information of the first expression is acquired and converted, through the convolutional neural network, into information the machine can recognize. 2) Preprocessing mainly refers to operations such as denoising, smoothing and transformation in image processing, which enhance the important features of the expression package image. 3) Feature extraction and selection are required in pattern recognition. In implementation, the convolutional neural network comprises two kinds of layers: convolution layers and pooling layers. A convolution layer decomposes the first expression image into small blocks of pixels, for example 3×3 or 5×5 blocks, and arranges the output values as a set of images whose numerical axes represent height, width and color respectively, representing the content of each region of the picture; a three-dimensional numerical representation of each small block is thereby obtained. A pooling layer combines the spatial dimensions of the three-dimensional image set with a sampling function and outputs a joint array containing only the relatively important parts of the image. The joint array can locate and identify information such as scene, characters, emotion, action and connotation in the expression package.
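The convolution and pooling operations described above can be illustrated on plain nested lists; a minimal single-channel sketch (real implementations use learned multi-channel kernels and libraries such as a deep learning framework):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool(image, size=2):
    """Non-overlapping max pooling: keep only the strongest response per block."""
    return [[max(image[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(image[0]) - size + 1, size)]
            for i in range(0, len(image) - size + 1, size)]
```

For example, `conv2d` slides the kernel over each pixel block, and `max_pool` halves each spatial dimension while retaining the relatively important parts, matching the pooling-layer role described above.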
Describing the manner of performing semantic analysis on the first expression: semantic analysis of the first expression can be implemented with a semantic model based on natural language processing. A large number of first expressions of different style types are collected as training samples, the semantic model is trained on these samples, and the first expression is then input into the semantic model to obtain its correct meaning. In actual implementation, the input of the semantic model includes at least the following types: 1) the first expression itself, in which case the semantic model includes at least an image recognition layer for recognizing the first expression to determine its content and style attributes; 2) the result of image recognition and semantic analysis, such as text content; 3) an image, video or GIF resource associated with the context through big data.
Step 502, the server extracts an original first expression text of the first expression, determines a second expression text corresponding to the first expression based on semantic analysis, replaces the first expression text with the second expression text, and generates a second expression to be sent to the client.
It should be noted that the original first expression text in the first expression is hard to understand; the second expression text has the same meaning as the first expression text, but is the easy-to-understand text obtained after semantic analysis.
In actual practice, the trained semantic model of the present application can generally predict a simple, plain second expression text for first expressions of the following common types: at least one of homophonic memes, network hotwords, picture actions and letter shorthand. For a first expression belonging to the homophonic meme type, for example, the text "academic wall" can be recognized together with the accompanying pictures (such as a pan and a wall in the background), and through the combination of image recognition and semantic analysis it is determined that the expression package is a homophonic meme. Then, through analysis of near-synonyms, the related homophonic words are associated with the image; a text-and-image library matched by relevance is generated through big-data association analysis (the library is produced by an NLP training model); and finally the text content matching the actual image is screened and generated based on the ranking of matching degree. This content is the plain meaning of the expression package, which is convenient for the elderly or for juniors to understand. For a first expression belonging to the network hotword type, such as a "post-90s chives" expression package, the words "post-90s" and "chives" are recognized, and through the combination of image recognition and semantic analysis it is determined that the expression package is a network hotword whose meaning is that the post-90s generation is always being "cut like chives"; the subsequent recognition principle is the same.
For a first expression belonging to the picture action type, such as an expression package of a crowd crowding onto the scene, the combination of image recognition and semantic analysis determines that the expression package is a picture action, meaning "welcoming an important character to the scene". For a first expression belonging to the letter shorthand type, such as a "YYDS" expression package, the combination of image recognition and semantic analysis determines that the expression package is letter shorthand, where "forever the god" expresses very high praise.
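The "ranking by matching degree" used above to screen replacement text can be sketched with a simple character-overlap score; this metric is a hypothetical stand-in for the NLP model's similarity measure:

```python
def char_overlap(a, b):
    # Jaccard similarity over the character sets of the two strings.
    sa, sb = set(a), set(b)
    return len(sa & sb) / max(len(sa | sb), 1)

def best_replacement(recognized, candidates):
    # Rank candidate plain texts from best match down, so the top entry
    # can replace the original first expression text.
    return sorted(candidates, key=lambda c: char_overlap(recognized, c),
                  reverse=True)
```

In the described system the candidates would come from the big-data text-and-image library, and the top-ranked text would become the second expression text.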
In step 503, the client receives a style conversion operation for the first expression, displays a plurality of candidate styles through the floating layer, and sends the first expression and the target style to the server in response to a selection operation for the target style, so that the server generates a second expression having the target style.
In practical implementation, referring to fig. 21, fig. 21 is a schematic diagram of expression collection provided by an embodiment of the present application. When the user smears the first expression and clicks the button for generating a new expression, multiple candidate (recommended) styles are displayed through a floating layer. The user can select a favorite recommended style and type (such as a spoof style, a cute style or a cute-baby style) as the target style. When the client detects that the user has selected a style and confirmed generation, the server identifies the commonalities between the first expression and the target style, such as text style and picture action, searches for related expression package content through image recognition and big data, matches the most relevant materials, recommends and replaces the corresponding text according to similarity and degree of coincidence, and combines them to generate the new expression.
Describing the image recognition classifier: classifier design means obtaining recognition rules through training; through these recognition rules, feature classification can be performed, enabling the image recognition technique to achieve a high recognition rate. Relevant labels and categories are thus formed, and the classification decision identifies the style category of the expression package. The server first classifies the style of the first expression through image recognition; the styles mainly include categories such as cute, luxurious, comic, character, cartoon and cute baby. The principle of determining the style is mainly to search the whole network for expressions of a similar style, and then, based on the expression label attributes of those similar expressions, select the classification with the highest proportion, thereby determining the expression package style.
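The "select the classification with the highest proportion" rule for determining the expression package style amounts to a majority vote over the labels of similar expressions found across the network; a minimal sketch with hypothetical labels:

```python
from collections import Counter

def classify_style(similar_expression_labels):
    """Pick the style label occurring most often among similar expressions."""
    return Counter(similar_expression_labels).most_common(1)[0][0]
```

In the described system, the labels would be the expression label attributes of the similar-style expressions retrieved by the whole-network search.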
For example, referring to fig. 21, when the user selects the "generate new expression" function, the client pops up a floating layer, displays the original first expression on the floating layer, and displays expressions of different styles for the user to choose from. When the user selects one of the styles, the client automatically matches and generates a new expression of that style. For instance, the meaning of the first expression "academic wall" is translated into the plain meaning "academically strong" to generate the second expression, and a third expression with the target style is generated from the second expression according to the style selected by the user. When the client detects that the user has selected the luxurious-style expression package, the server can pull, based on big data, images or expression package resources in the luxurious style that match the "academically strong" semantics, screen them, and select the most relevant material, such as a "thorny rose"; a new expression package of the same style is then generated by combining the rose with the text. If the user is satisfied with the newly generated expression, the user can directly click the "add to favorites" button; if not, the user can click "change another", and the server automatically matches the next most relevant material in that style to generate a third expression with the target style, until the user is satisfied.
And step 504, displaying a third expression in the session interface of the client and displaying interpretation information corresponding to the third expression.
In practical implementation, after the server generates the new expression package, the expression package is returned to the client for display, and the source of the related meme is pushed based on big data, so that the user can directly see the simplified expression package and further learn the real meaning behind it by viewing the annotated source. The plain expression package generated after the user's smearing is visible only to the user, similar to the translation of an English document (here, a translation of the expression package), and is not visible to the sender. The plain expression package is generated by real-time smearing, translating the metaphorical expression package into its plain meaning, which makes it convenient for the user to further understand the real meaning of the expression package.
Referring to fig. 22, fig. 22 is a schematic diagram of interpretation information for a first expression provided by an embodiment of the present application. After the user long-presses a sticker, a floating-layer interface is displayed; the client retrieves the real meaning and the network origin of the current sticker based on big data and image recognition, and informs the user of the real meaning behind the sticker in an encyclopedia-like manner. If the sticker has a network origin, an entry for conveniently viewing the related video or graphic information can be provided. It should be noted that when the interface of fig. 4, which includes the meaning-lookup function, first goes online, the client may guide the user to look up the meaning of a sticker.
In step 505, the client adds the third expression to the expression favorites in response to the collection operation for the third expression.
In actual implementation, the user can collect the newly generated sticker according to the data recommended by the server.
The above steps 501-505 can be briefly summarized by steps 601-609 shown in fig. 20B; the corresponding processing flow is as follows: 601, the client receives an expression from another user; 602, the client guides the user to smear a sticker that cannot be understood, so that a new expression can be generated; 603, the client detects the smeared area and returns it to the server; 604, the server determines the expression and the content of the smeared area based on image recognition; 605, sticker text is generated based on semantic analysis and text-understanding recognition; 606, the generated text and expression are recombined and submitted to the server to generate a new expression; 607, the server generates the new expression based on a model trained with natural language processing; 608, the client displays the newly generated expression resource and its annotation; 609, the user selects an expression to add to the favorites, or generates a new style.
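The end-to-end flow of steps 601-609 can be sketched as a chain of small functions. This is only an illustrative skeleton: the image recognition, semantic analysis, and NLP model of steps 604-607 are replaced here by trivial table lookups, and all names and data are assumptions.

```python
# Minimal sketch of the 601-609 pipeline: detect what the smear covers,
# translate it to plain text, recombine it into a new expression, and
# return the expression together with its annotation for display.

# 603-604: determine what the smeared region covers (stubbed as a lookup;
# in the described system this is server-side image recognition).
def recognize_smeared_content(expression_id, smear_region):
    detected = {("expr_1", "caption_area"): "idiomatic caption"}
    return detected.get((expression_id, smear_region), "unknown")

# 605: translate the recognized content into plain text (stubbed dictionary
# standing in for semantic analysis and text understanding).
PLAIN_MEANINGS = {"idiomatic caption": "study hard"}

def to_plain_text(recognized):
    return PLAIN_MEANINGS.get(recognized, recognized)

# 606-607: recombine the plain text with the original image into a new
# expression (the NLP-model generation step is elided).
def recombine(expression_id, plain_text):
    return {"base": expression_id, "text": plain_text, "style": "default"}

# 608: what the client displays - the new expression plus its annotation.
def process_smear(expression_id, smear_region):
    recognized = recognize_smeared_content(expression_id, smear_region)
    plain = to_plain_text(recognized)
    new_expression = recombine(expression_id, plain)
    annotation = f"original meaning: {recognized}"
    return new_expression, annotation

expr, note = process_smear("expr_1", "caption_area")
print(expr["text"])  # plain-language caption of the generated expression
print(note)
```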
By applying the embodiment of the application, the server detects a triggering operation (such as smearing, though not limited thereto) performed by a user on an expression, recognizes the meaning of the sticker based on image recognition and big data, and accordingly generates an expression better suited to the user's understanding and presents it to the user. Thus, when the user is unsure of the meaning of an expression sent by the other party, the user can conveniently, quickly and intuitively convert the expression and, based on the converted new expression, more accurately understand the meaning the other party wants to convey (thereby alleviating generation-gap problems in communication). On this basis, the user can also collect the converted expression for subsequent personal use; since the converted expression is easier for the user to understand, the user can use it after collection and can communicate more smoothly, especially when sending it to chat partners whose perceptions are similar to the user's in certain respects. In addition, on the basis of this plain understanding, the user can customize favorite stickers in a preferred style, thereby improving communication efficiency and interest and reducing communication barriers.
Continuing with the description below of an exemplary architecture of the session-based expression processing apparatus 555 implemented as software modules provided by embodiments of the present application, in some embodiments, as shown in fig. 3, the software modules stored in the session-based expression processing apparatus 555 of the memory 550 may include:
A display module 5551, configured to display a conversation message including a first expression in a conversation interface;
A conversion module 5552, configured to convert the first expression in the session interface into a second expression in response to a conversion operation for the target content in the first expression; the second expression is used for explaining the target content in the first expression.
In some embodiments, the conversion module is further configured to receive a smearing operation for the first expression; in response to the smearing operation, cover the smeared content indicated by the smearing operation with a floating layer; and determine the content covered by the floating layer as the target content, and determine the smearing operation as a conversion operation for the target content.
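The smear-to-target-content mapping described above can be sketched as accumulating smeared cells into a mask (the "floating layer") and treating any fully covered content region as the target content. The grid granularity and region names here are assumptions for illustration, not details from the patent.

```python
# Sketch of the smearing interaction: smeared pixels accumulate into a
# coarse-grid mask, and a named content region of the expression becomes
# target content once the mask fully covers it.

class SmearLayer:
    """Accumulates smeared cells over a coarse grid laid on the expression."""
    def __init__(self, regions):
        # regions maps a named content region to the grid cells it spans
        self.regions = regions
        self.covered_cells = set()

    def smear(self, cells):
        """Extend the floating layer to cover the newly smeared cells."""
        self.covered_cells |= set(cells)

    def target_content(self):
        """Regions fully covered by the floating layer become target content."""
        return sorted(
            name for name, cells in self.regions.items()
            if cells <= self.covered_cells
        )

layer = SmearLayer({
    "caption_text": {(0, 0), (0, 1)},
    "background_image": {(1, 0), (1, 1)},
})
layer.smear([(0, 0)])
print(layer.target_content())  # caption not fully covered yet -> []
layer.smear([(0, 1)])
print(layer.target_content())  # -> ['caption_text']
```

Requiring full coverage before a region counts as target content is one possible design choice; a real client might instead use a coverage-ratio threshold.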
In some embodiments, the target content includes a first expression text, and the conversion module is further configured to convert, in the session interface, the first expression into a second expression composed of an image and a second expression text in response to a conversion operation for the target content in the first expression; the second expression text is used for explaining the first expression text, and the image corresponds to the second expression text.
In some embodiments, the conversion module is further configured to display at least one style of candidate expression in response to a style transformation instruction for the second expression; in response to a selection operation for a target candidate expression, a third expression having a style of the target candidate expression is generated and displayed in combination with the second expression and the target candidate expression.
In some embodiments, the conversion module is further configured to display an identifier adding interface corresponding to the third expression in response to an identifier adding operation for the third expression; based on the identification adding interface, responding to an adding instruction of a target object identification aiming at the current session object, and adding the target object identification in an association area of the third expression; and responding to the determining instruction for the added target object identification, and generating the third expression carrying the object identification.
In some embodiments, the conversion module is further configured to display at least one object identifier of the current session object corresponding to a target identifier type of at least one identifier type in response to a trigger operation for the target identifier type; and receiving an adding instruction aiming at the target object identifier in response to a selection operation aiming at the target object identifier in the at least one object identifier.
In some embodiments, the conversion module is further configured to display an expression generation control for the second expression, where the expression generation control is configured to generate, based on the second expression, a third expression having a style different from the second expression; and responding to the triggering operation of the expression generating control, and receiving a style transformation instruction for the second expression.
In some embodiments, the conversion module is further configured to display an expression generating control for the first expression, where the expression generating control is configured to generate, based on the first expression, a fourth expression having a style different from the first expression; responding to the triggering operation of the expression generating control, and displaying at least one style of candidate expression; in response to a selection operation for the target candidate expression, generating and displaying a fourth expression having a style of the target candidate expression in combination with the target candidate expression and the first expression.
In some embodiments, the conversion module is further configured to display a collection control for the second expression, the collection control being configured to add the second expression to an expression package of a current conversation object; and responding to the triggering operation for the collection control, and adding the second expression to an expression package of the current conversation object.
In some embodiments, the target content includes a first expression text, and the target content is a part of content of the first expression, and the conversion module is further configured to display first interpretation information corresponding to the first expression text, where the first interpretation information is used to interpret a meaning of the first expression text.
In some embodiments, the first interpretation information includes a network source of the first expression text, and the conversion module is further configured to display a view entry corresponding to the network source of the first expression text; and in response to a triggering operation for the view entry, display the network source of the first expression text.
In some embodiments, the conversion module is further configured to display second interpretation information corresponding to the first expression in response to an interpretation instruction for the first expression, where the second interpretation information is used for interpreting the first expression.
In some embodiments, the conversion module is further configured to display an object selection interface in response to a forwarding operation for the second expression, and display an expression pair including the second expression and the first expression in the object selection interface; and transmit the expression pair to a target object in response to a selection of the target object based on the object selection interface.
In some embodiments, the conversion module is further configured to send an expression conversion prompt message to a sender of the session message; the expression conversion prompt information comprises the second expression and is used for indicating a current conversation object to convert the first expression into the second expression.
In some embodiments, the number of the first expressions is a plurality, the target content includes at least one of the plurality of first expressions, and the conversion module is further configured to convert the at least one first expression included in the target content into a corresponding second expression in response to a conversion operation for the target content in the first expressions, respectively.
In some embodiments, a plurality of continuous session messages including a fifth expression are displayed in the session interface, and the conversion module is further configured to sequentially convert each of the fifth expressions into a sixth expression according to a sending time sequence of each of the fifth expressions in response to a conversion operation for the plurality of continuous fifth expressions based on the plurality of session messages; wherein the sixth expression is used for explaining the corresponding fifth expression.
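The batch case above — several consecutive messages each carrying a fifth expression, converted in send-time order by one operation — can be sketched as a simple sort-then-map. The message field names and the stubbed conversion function are illustrative assumptions.

```python
# Sketch of converting a run of consecutive expressions in send order:
# sort the messages by send timestamp, then convert each expression.

def convert_in_send_order(messages, convert):
    """Convert each message's expression, ordered by its send timestamp."""
    ordered = sorted(messages, key=lambda m: m["sent_at"])
    return [convert(m["expression"]) for m in ordered]

messages = [
    {"expression": "expr_b", "sent_at": 1700000200},
    {"expression": "expr_a", "sent_at": 1700000100},
    {"expression": "expr_c", "sent_at": 1700000300},
]
explained = convert_in_send_order(messages, lambda e: e + "_explained")
print(explained)  # conversions follow send order, earliest first
```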
In some embodiments, the conversion module is further configured to acquire an expression interpretation model and obtain input of the expression interpretation model, the input comprising one of: the first expression, the target content, and an analysis result obtained by performing content analysis on the first expression; perform paraphrase prediction on the first expression through the expression interpretation model based on the input, to obtain the paraphrase of the first expression; and generate the second expression based on the paraphrase.
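The interpretation-model interface above accepts one of three input forms and produces a paraphrase from which the second expression is generated. The sketch below is a hedged illustration: the trained model is replaced by a lookup table, and all keys and payloads are invented for the example.

```python
# Sketch of the expression-interpretation step: the model input is one of
# three forms ('full' = whole first expression, 'target' = target content,
# 'analysis' = prior content-analysis result); the output paraphrase is
# then used to generate the second expression.

PARAPHRASES = {
    "full:expr_1": "study hard",
    "target:idiom": "study hard",
    "analysis:diligence_allusion": "study hard",
}

def interpret(input_kind, payload):
    """input_kind is one of 'full', 'target', 'analysis'."""
    key = f"{input_kind}:{payload}"
    if key not in PARAPHRASES:
        raise ValueError(f"no paraphrase for {key}")
    return PARAPHRASES[key]  # a trained NLP model would predict this

def generate_second_expression(paraphrase):
    # The second expression carries the plain-language paraphrase.
    return {"text": paraphrase, "kind": "plain"}

second = generate_second_expression(interpret("target", "idiom"))
print(second["text"])
```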
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the expression processing method based on the session according to the embodiment of the application.
Embodiments of the present application provide a computer readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform a session-based expression processing method provided by embodiments of the present application, for example, a session-based expression processing method as shown in fig. 3.
In some embodiments, the computer readable storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or may be any of various devices including one of, or any combination of, the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the embodiments of the present application have the following beneficial effects: in response to a triggering operation for the first expression, the meaning of the first expression is recognized and the first expression is converted into a second expression better suited to the conversation object's understanding, so that when the conversation object is unsure of the meaning of the first expression, the conversion can be performed conveniently, quickly and intuitively, and the meaning conveyed by the first expression can be understood more accurately based on the converted second expression. On this basis, the conversation object can also collect the converted second expression for subsequent personal use; since the second expression is plainer in meaning and easier to understand, it can be used after collection, and communication is smoother, especially when it is sent to conversation objects whose perceptions are similar in certain respects. Further, an explanation of the expression can also be provided so that the conversation object can quickly view and understand it. In addition, a style conversion operation for the second expression can be supported, so that the conversation object can personalize and customize expressions on the basis of a plain understanding, which improves communication efficiency and interest, reduces communication barriers between conversation objects, and improves the human-computer interaction experience.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (21)

1. A method for processing expressions based on a session, the method comprising:
displaying a session message containing a first expression in a session interface;
converting the first expression in the conversation interface into a second expression in response to a conversion operation for target content in the first expression;
The second expression is used for explaining the target content in the first expression.
2. The method of claim 1, wherein the method further comprises:
receiving a smearing operation for the first expression;
In response to the smearing operation, covering the smeared content indicated by the smearing operation with a floating layer;
And determining the content covered by the floating layer as the target content, and determining the smearing operation as a conversion operation aiming at the target content.
3. The method of claim 1, wherein the target content comprises a first expression text;
The converting, in response to a conversion operation for the target content in the first expression, the first expression in the session interface into a second expression includes:
In response to a conversion operation for target content in the first expression, converting the first expression into a second expression composed of an image and a second expression text in the session interface;
The second expression text is used for explaining the first expression text, and the image corresponds to the second expression text.
4. The method of claim 1, wherein the method further comprises:
Displaying at least one style of candidate expression in response to a style transformation instruction for the second expression;
in response to a selection operation for a target candidate expression, a third expression having a style of the target candidate expression is generated and displayed in combination with the second expression and the target candidate expression.
5. The method of claim 4, wherein after the generating and displaying the third expression having the style of the target candidate expression, the method further comprises:
responding to the identification adding operation aiming at the third expression, and displaying an identification adding interface corresponding to the third expression;
Based on the identification adding interface, responding to an adding instruction of a target object identification aiming at the current session object, and adding the target object identification in an association area of the third expression;
And responding to the determining instruction for the added target object identification, and generating the third expression carrying the object identification.
6. The method of claim 5, wherein at least one identification type is displayed in the identification addition interface, the method further comprising:
responding to a triggering operation aiming at a target identification type in at least one identification type, and displaying at least one object identification of the current session object corresponding to the target identification type;
And receiving an adding instruction aiming at the target object identifier in response to a selection operation aiming at the target object identifier in the at least one object identifier.
7. The method of claim 4, wherein the method further comprises:
Displaying an expression generating control for the second expression, wherein the expression generating control is used for generating a third expression with a style different from the second expression based on the second expression;
and responding to the triggering operation of the expression generating control, and receiving a style transformation instruction for the second expression.
8. The method of claim 1, wherein the method further comprises:
displaying an expression generating control for the first expression, wherein the expression generating control is used for generating a fourth expression with a style different from that of the first expression based on the first expression;
responding to the triggering operation of the expression generating control, and displaying at least one style of candidate expression;
in response to a selection operation for the target candidate expression, generating and displaying a fourth expression having a style of the target candidate expression in combination with the target candidate expression and the first expression.
9. The method of claim 1, wherein the method further comprises:
Displaying a collection control for the second expression, wherein the collection control is used for adding the second expression to an expression package of a current conversation object;
And responding to the triggering operation for the collection control, and adding the second expression to an expression package of the current conversation object.
10. The method of claim 1, wherein the target content comprises a first expression text, and the target content is part of the first expression, the method further comprising:
And displaying first interpretation information corresponding to the first expression text, wherein the first interpretation information is used for interpreting the meaning of the first expression text.
11. The method of claim 10, wherein the first interpretation information includes a network source of the first expression text, the method further comprising:
displaying a view entry corresponding to a network source of the first expression text;
And responding to the triggering operation for the view portal, and displaying the network source of the first expression text.
12. The method of claim 1, wherein the method further comprises:
And responding to an interpretation instruction aiming at the first expression, displaying second interpretation information corresponding to the first expression, wherein the second interpretation information is used for interpreting the first expression.
13. The method of claim 1, wherein the method further comprises:
Responding to the forwarding operation aiming at the second expression, displaying an object selection interface, and displaying an expression pair comprising the second expression and the first expression in the object selection interface;
And transmitting the expression pair to a target object in response to a selection of the target object based on the object selection interface.
14. The method of claim 1, wherein after the converting the first expression in the conversation interface to a second expression, the method further comprises:
Sending expression conversion prompt information to a sender of the session message;
the expression conversion prompt information comprises the second expression and is used for indicating a current conversation object to convert the first expression into the second expression.
15. The method of claim 1, wherein the number of first expressions is a plurality, and the target content includes at least one of the plurality of first expressions;
The converting, in response to a conversion operation for the target content in the first expression, the first expression in the session interface into a second expression includes:
And respectively converting the at least one first expression included in the target content into corresponding second expressions in response to a conversion operation for the target content in the first expressions.
16. The method of claim 1, wherein a plurality of consecutive conversation messages including a fifth expression are displayed in the conversation interface, the method further comprising:
based on a plurality of session messages, responding to a conversion operation for a plurality of continuous fifth expressions, and sequentially converting each fifth expression into a sixth expression according to the sending time sequence of each fifth expression;
wherein the sixth expression is used for explaining the corresponding fifth expression.
17. The method of claim 1, wherein the method further comprises:
Acquiring an expression interpretation model, and acquiring input of the expression interpretation model, wherein the input comprises one of the following: the first expression, the target content and an analysis result obtained by performing content analysis on the first expression;
Performing paraphrasing prediction on the first expression based on the input through the expression interpretation model to obtain the paraphrasing of the first expression;
Based on the paraphrasing, the second expression is generated.
18. A session-based expression processing apparatus, the apparatus comprising:
The display module is used for displaying the conversation message containing the first expression in the conversation interface;
the conversion module is used for responding to the conversion operation of the target content in the first expression and converting the first expression in the conversation interface into a second expression; the second expression is used for explaining the target content in the first expression.
19. An electronic device, comprising:
a memory for storing executable instructions;
A processor for implementing the session-based expression processing method of any one of claims 1 to 17 when executing executable instructions stored in the memory.
20. A computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the session-based expression processing method of any one of claims 1 to 17.
21. A computer program product comprising a computer program or computer executable instructions which, when executed by a processor, implement the session-based expression processing method of any one of claims 1 to 17.
CN202211442660.6A 2022-11-16 Expression processing method, device, equipment, storage medium and product based on session Pending CN118055091A (en)

Publications (1)

Publication Number Publication Date
CN118055091A true CN118055091A (en) 2024-05-17

Similar Documents

Publication Publication Date Title
US20240031688A1 (en) Enhancing tangible content on physical activity surface
CN105320428B (en) Method and apparatus for providing image
US20170103560A1 (en) Automated highlighting of identified text
KR102117433B1 (en) Interactive video generation
CN106484266A (en) A kind of text handling method and device
CN110554782B (en) Expression input image synthesis method and system
CN106909270A (en) Chat data input method, device and communicating terminal
CN107209775A (en) Method and apparatus for searching for image
Singh et al. Mobile Deep Learning with TensorFlow Lite, ML Kit and Flutter: Build scalable real-world projects to implement end-to-end neural networks on Android and iOS
US20220092071A1 (en) Integrated Dynamic Interface for Expression-Based Retrieval of Expressive Media Content
CN108304412A (en) A kind of cross-language search method and apparatus, a kind of device for cross-language search
CN114067797A (en) Voice control method, device, equipment and computer storage medium
CN113869063A (en) Data recommendation method and device, electronic equipment and storage medium
CN117011875A (en) Method, device, equipment, medium and program product for generating multimedia page
CN112100501A (en) Information flow processing method and device and electronic equipment
Fischer et al. Brassau: automatic generation of graphical user interfaces for virtual assistants
US20220319082A1 (en) Generating modified user content that includes additional text content
CN118055091A (en) Expression processing method, device, equipment, storage medium and product based on session
CN111062207B (en) Expression image processing method and device, computer storage medium and electronic equipment
CN114283422A (en) Handwritten font generation method and device, electronic equipment and storage medium
KR20220136938A (en) Method and electronic device to provide sticker based on content input
CN110580486A (en) Data processing method and device, electronic equipment and readable medium
WO2022212669A1 (en) Determining classification recommendations for user content
CN113867875A (en) Method, device, equipment and storage medium for editing and displaying marked object
CN112287131A (en) Information interaction method and information interaction device

Legal Events

Date Code Title Description
PB01 Publication