CN112118359B - Text information processing method and device, storage medium, processor and electronic equipment - Google Patents

Text information processing method and device, storage medium, processor and electronic equipment

Info

Publication number
CN112118359B
CN112118359B
Authority
CN
China
Prior art keywords
text information
information
selection icon
display interface
multimedia
Prior art date
Legal status
Active
Application number
CN202011005074.6A
Other languages
Chinese (zh)
Other versions
CN112118359A (en)
Inventor
彭丁聪
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202011005074.6A
Publication of CN112118359A
Application granted
Publication of CN112118359B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31 Indexing; Data structures therefor; Storage structures
    • G06F16/313 Selection or weighting of terms for indexing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/44 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/12 Messaging; Mailboxes; Announcements
    • H04W4/14 Short messaging services, e.g. short message services [SMS] or unstructured supplementary service data [USSD]

Abstract

The application provides a text information processing method and device, electronic equipment and an electronic system. The method comprises the following steps: receiving text information sent by a sending end; generating predetermined multimedia information from the text information; displaying the text information and a selection icon on a display interface; and, in response to a first predetermined operation on the selection icon, presenting at least part of the predetermined multimedia information on the display interface. Because the predetermined multimedia information is generated from the received text and is shown when the user acts on the selection icon, the displayed multimedia strengthens the emotional interaction of the short message, improves the user's visual experience while keeping the original habit of using short messages, makes viewing short messages more entertaining, and thereby meets the user's needs.

Description

Text information processing method and device, storage medium, processor and electronic equipment
Technical Field
The present application relates to the field of text information processing, and in particular, to a method and an apparatus for processing text information, a computer-readable storage medium, a processor, an electronic device, and an electronic system.
Background
Short messaging is an essential application on a mobile phone. It can transmit only text, which keeps it simple and fast, but it offers a poor visual experience and weak emotional interaction.
The information disclosed in this background section is provided only to enhance understanding of the background of the technology described herein; it may therefore include information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The present application mainly aims to provide a text information processing method, device, computer-readable storage medium, processor, electronic device, and electronic system, so as to solve the problem in the prior art that the emotion interaction function of a short message is weak.
According to an aspect of the embodiments of the present invention, there is provided a method for processing text information, including: receiving text information sent by a sending end; generating predetermined multimedia information according to the text information, wherein the predetermined multimedia information is multimedia information except the text information; displaying the text information and the selection icon on a display interface; in response to a first predetermined operation acting on the selection icon, the display interface presents at least part of the predetermined multimedia information.
Optionally, generating predetermined multimedia information according to the text information includes: inputting the text information into a deep neural network model, wherein the deep neural network model is trained through machine learning by using multiple groups of data, and each group of data in the multiple groups of data comprises: training text information and training predetermined multimedia content corresponding to the training text information; and the deep neural network model processes the text information and outputs the preset multimedia information.
Optionally, inputting the text information into a deep neural network model includes: processing the text information to obtain topic information and/or body text information; and inputting the topic information and/or the body text information into the deep neural network model.
Optionally, processing the text information to obtain topic information and/or body text information includes: decrypting and decompressing the text information; determining whether the decrypted and decompressed text information includes a topic index number; and, if a topic index number is included, determining the topic information corresponding to it according to a predetermined mapping relationship.
Optionally, before inputting the text information into the deep neural network model, the processing method further includes: constructing the deep neural network model, wherein the constructing of the deep neural network model comprises the following steps: converting the training text information into word embedding vectors; inputting the word embedding vector into a coding neural network, and coding the word embedding vector to obtain an attention vector; inputting the attention vector into at least one multimedia content generation network to generate prepared predetermined multimedia content; and optimizing parameters of the coding neural network and the multimedia content generating network according to the prepared preset multimedia content and the training preset multimedia content to construct and obtain the deep neural network model.
Optionally, the multimedia content generating network comprises a convolutional image generating network, and the training predetermined multimedia content comprises a training predetermined image.
Optionally, displaying the text information and the selection icon on a display interface includes: displaying at least one selection icon in a first area of the display interface, where the at least one selection icon includes an animation selection icon and/or an image selection icon; and displaying the text information in a second area of the display interface. Presenting at least part of the predetermined multimedia information on the display interface in response to a first predetermined operation acting on the animation selection icon and/or the image selection icon includes: playing an animation corresponding to the text information in a third area of the display interface, and/or displaying an image corresponding to the text information in the third area of the display interface.
Optionally, the selection icon includes the image selection icon, and the image selection icon is a thumbnail.
According to another aspect of the embodiments of the present invention, there is also provided a method for processing text information, including: a sending end sends text information; the receiving end receives the text information and generates preset multimedia information according to the text information, wherein the preset multimedia information is multimedia information except the text information; a display interface of the receiving end displays the text information and the selection icon; in response to a first preset operation acted on the selection icon, the display interface of the receiving end displays at least part of the preset multimedia information.
Optionally, before the sending end sends the text message, the method further includes: the sending end analyzes the text information by adopting a deep neural network model to generate the preset multimedia information, wherein the deep neural network model is trained by using multiple groups of data through machine learning, and each group of data in the multiple groups of data comprises: training text information and training predetermined multimedia content corresponding to the training text information; and the display interface of the sending end displays the preset multimedia information.
According to another aspect of the embodiments of the present invention, there is also provided a text information processing apparatus, including: the first receiving unit is used for receiving the text information sent by the sending end; the first generating unit is used for generating preset multimedia information according to the text information, wherein the preset multimedia information is multimedia information except the text information; the first display unit is used for displaying the text information and the selection icon on a display interface; and the second display unit is used for responding to a first preset operation acted on the selection icon, and the display interface displays at least part of the preset multimedia information.
According to still another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program executes any one of the methods.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes any one of the methods.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device including: one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of processing textual information.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic system including: the sending end sends text information; and the receiving end is communicated with the sending end and is used for executing any text information processing method.
In the embodiment of the invention, text information sent by the sending end is first received and predetermined multimedia information, i.e. multimedia information other than the text information, is generated from it; the text information and a selection icon are then displayed on the display interface; and, in response to a first predetermined operation on the selection icon, at least part of the predetermined multimedia information is displayed on the display interface. In this way the predetermined multimedia information is generated from the received text information and is presented when the user acts on the selection icon. The displayed multimedia strengthens the emotional interaction of the short message, improves the user's visual experience while keeping the original habit of using short messages, makes viewing short messages more entertaining, and thereby meets the user's needs.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 shows a flow diagram of a method of processing text information according to an embodiment of the application;
FIG. 2 is a schematic diagram illustrating a display effect of a display interface;
FIG. 3 shows a flow diagram of another method of processing text information according to an embodiment of the application;
fig. 4 is a schematic structural diagram of a text information processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another text information processing apparatus according to an embodiment of the present application;
FIG. 6 illustrates a workflow diagram of a transmitting end according to an embodiment of the application; and
fig. 7 shows a workflow diagram of a receiving end according to an embodiment of the application.
Wherein the figures include the following reference numerals:
10. display interface; 20. selection icon; 30. first area; 40. second area; 50. third area.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the drawings of this application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so labelled may be interchanged where appropriate, so that the embodiments of the application described herein can be practised in orders other than those illustrated or described here. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. Also, in the specification and claims, when an element is described as being "connected" to another element, the element may be "directly connected" to the other element or "connected" to the other element through a third element.
As mentioned in the background, the emotion interaction function of the short message in the prior art is weak, and in order to solve the above problems, in an exemplary embodiment of the present application, a method and an apparatus for processing text information, a computer-readable storage medium, a processor, an electronic device, and an electronic system are provided.
According to an embodiment of the present application, there is provided a method of processing text information. Fig. 1 is a flowchart of a text information processing method according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, receiving text information sent by a sending end;
step S102, generating predetermined multimedia information according to the text information, wherein the predetermined multimedia information is multimedia information except the text information;
step S103, displaying the text information and the selection icon on a display interface;
step S104, in response to a first predetermined operation acting on the selection icon, the display interface displays at least a part of the predetermined multimedia information.
In this method, text information sent by the sending end is first received and predetermined multimedia information, i.e. multimedia information other than the text information, is generated from it; the text information and a selection icon are then displayed on the display interface; and, in response to a first predetermined operation on the selection icon, at least part of the predetermined multimedia information can be displayed. Because the multimedia is generated from the received text and is shown when the user acts on the selection icon, the displayed predetermined multimedia information strengthens the emotional interaction of the short message, improves the user's visual experience while keeping the original habit of using short messages, makes viewing short messages more entertaining, and thereby meets the user's needs.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one presented here.
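Taken together, steps S101 to S104 amount to a small event-driven flow, sketched below. The display and model objects and their method names are placeholders invented for this illustration; they are not part of the patent or of any real SMS or UI API.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    text: str                                  # the received body text
    media: dict = field(default_factory=dict)  # e.g. {"image": ..., "animation": ...}

class SmsReceiver:
    """Schematic receiver tying steps S101-S104 together."""

    def __init__(self, display, model):
        self.display = display   # placeholder for the display interface
        self.model = model       # placeholder for the trained text-to-multimedia model

    def on_sms_received(self, raw_text: str) -> Message:
        # S101: receive text information sent by the sending end
        # S102: generate predetermined multimedia information from the text
        media = self.model.generate(raw_text)
        msg = Message(text=raw_text, media=media)
        # S103: display the text information and the selection icons
        self.display.show_text(msg.text)
        self.display.show_icons(sorted(media))
        return msg

    def on_icon_operation(self, msg: Message, icon: str) -> None:
        # S104: a first predetermined operation on a selection icon presents
        # at least part of the predetermined multimedia information
        self.display.show_media(msg.media[icon])
```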
In an embodiment of the present application, generating predetermined multimedia information according to the text information includes: inputting the text information into a deep neural network model, where the deep neural network model is trained by machine learning using multiple groups of data, each group including training text information and training predetermined multimedia content corresponding to that training text information; the deep neural network model then processes the text information and outputs the predetermined multimedia information. In this embodiment the model is trained on multiple groups of data, each consisting of training text information and its corresponding training predetermined multimedia content, so the two are linked by a mapping relationship; for example, the training text "happy" may correspond to training content showing a smiling face, and the training text "sad" to training content showing a crying face. Processing the received text information with this model therefore yields accurate predetermined multimedia information.
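As a toy illustration of such groups of data (the texts and file names below are invented for this sketch; real training data pairs text with actual image or animation content):

```python
# Each group pairs training text information with training predetermined
# multimedia content; the file names are hypothetical.
training_pairs = [
    ("I'm so happy today!", "smiling_face.png"),
    ("This has been a sad day.", "crying_face.png"),
    ("The scenery here is beautiful.", "landscape.png"),
]
```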
In another embodiment of the present application, inputting the text information into the deep neural network model includes: processing the text information to obtain topic information and/or body text information; and inputting the topic information and/or the body text information into the deep neural network model. In this embodiment, processing the text information first yields more accurate topic information and/or body text information, and inputting these into the deep neural network model in turn yields more accurate predetermined multimedia information.
In another embodiment of the present application, processing the text information to obtain topic information and/or body text information includes: decrypting and decompressing the text information; determining whether the decrypted and decompressed text information includes a topic index number; and, if it does, determining the topic information corresponding to the topic index number according to a predetermined mapping relationship. In this embodiment, decrypting and decompressing the received text information recovers its content, including the topic index number and the body text information.
In a specific embodiment, the text information further includes a topic index number list (corresponding to the predetermined mapping relationship), topic index numbers, and body text; the topic index number list may be as shown in Table 1.
TABLE 1
(The topic index number list of Table 1 appears as an image in the original publication; its contents are not reproduced here.)
For example, a text message may carry the topic index numbers (25, 1, 3, 11, 98, 32) and body text such as "The scenery here is beautiful. Where was it taken? Let's go take a look when we have time!".
In another specific embodiment, the receiving end stores the topic index number list. It decrypts and decompresses the text information and determines whether the result includes a topic index number. If it does, the receiving end checks whether its own topic index number list contains that index number and, if so, determines the corresponding topic information from its list. In this embodiment, decrypting and decompressing the received text information recovers its content, including the topic index numbers and the body text, and when the receiving end's topic index number list contains a received index number, the corresponding topic information can be determined accurately from that list.
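A minimal sketch of this receive-side parsing is given below. It assumes a zlib-compressed JSON payload and uses a trivial XOR stand-in for the unspecified cipher; the payload field names and the sample topic-table entries are hypothetical.

```python
import json
import zlib

# Hypothetical fragment of the receiving end's topic index number list (Table 1).
TOPIC_TABLE = {25: "scenery", 1: "greeting", 3: "travel", 11: "weather"}

def xor_cipher(blob: bytes, key: bytes) -> bytes:
    # Stand-in for whatever encryption the sending end actually uses.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def parse_sms(blob: bytes, key: bytes) -> tuple[list[str], str]:
    payload = json.loads(zlib.decompress(xor_cipher(blob, key)))   # decrypt + decompress
    # Keep only the index numbers that the receiving end's own list contains.
    topics = [TOPIC_TABLE[i] for i in payload.get("topic_ids", []) if i in TOPIC_TABLE]
    return topics, payload.get("body", "")
```

For the example above, a payload carrying topic index numbers (25, 1, 3, 11, 98, 32) would resolve only the indices present in the receiving end's list, and the body text would be passed on unchanged.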
In another embodiment of the present application, before the text information is input into the deep neural network model, the processing method further includes constructing the deep neural network model, which comprises the following steps: converting the training text information into word embedding vectors; inputting the word embedding vectors into an encoding neural network, which encodes them into an attention vector; inputting the attention vector into at least one multimedia content generation network to generate prepared predetermined multimedia content; and optimizing the parameters of the encoding neural network and the multimedia content generation network according to the prepared predetermined multimedia content and the training predetermined multimedia content, so as to construct the deep neural network model. In this embodiment an accurate deep neural network model is obtained, and the text information can subsequently be processed with it.
In a specific implementation, the network structure of the deep neural network model is a Transformer encoder + FCN Net1 + CNN Net2 + CNN Net3, where FCN Net1 and CNN Net2 use fixed parameters and the network parameters of CNN Net3 are obtained by training. A pre-trained text feature extractor such as Google's Transformer-based self-attention network BERT (Base) may be selected, part of the encoder parameters are frozen, and the self-attention vector output by the encoding layer is used as the input of the multimedia content generation network Net1. The procedure is as follows. The training text information may include training topic information and training body text information. The training topic information is converted, by way of word vectors, into M training topic word embedding vectors V1 of dimension 1 × L, and the training body text information into N training body word embedding vectors V2 of dimension 1 × L, where M is the number of training topic words and N is the number of training body words. The M vectors V1 and the N vectors V2 are stacked row by row into an (M + N) × L matrix, denoted V, which is input to the encoding neural network. The encoding neural network processes V row by row and encodes the word embedding vectors into an attention vector of dimension 1 × S. The attention vector is input to the first multimedia content generation network, which, through convolution and up-sampling, outputs an intermediate image I_net1 of size W/32 × H/32 with D × 16 channels; its parameters to be trained are recorded as the tensor W_net1 with dimensions [W/32, W/32, D × 16, S]. I_net1 is input to the second multimedia content generation network, which, through transposed convolution and up-sampling, outputs an intermediate image I_net2 of size W/8 × H/8 with D × 4 channels; its parameters to be trained are the tensor W_net2 with dimensions [5, 5, D × 4, D × 16]. I_net2 is input to the third multimedia content generation network, which, through transposed convolution and up-sampling, outputs the prepared predetermined multimedia content of size W/8 × H/8 with D × 4 channels; its parameters to be trained are the tensor W_net3 with dimensions [5, 5, D, D × 4]. The parameters of the encoding neural network and the multimedia content generation networks are then optimized according to the prepared predetermined multimedia content and the training predetermined multimedia content: the difference between the two is computed, the sum of squared pixel differences is taken as the optimization objective, and W_net1, W_net2 and W_net3 are solved iteratively by the back-propagation algorithm until enough iterations have been run or the objective function no longer decreases, yielding the deep neural network model. Here W = H = 512, D = 3, and S may be chosen from {64, 128, 256}; S is the second dimension of the encoded attention vector and may also be any other feasible number, for example a power of 2.
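A rough PyTorch sketch of this construction is given below. It is an illustration under stated assumptions rather than the patent's implementation: the "bert-base-chinese" checkpoint, the linear projection of the encoder's pooled output to the S-dimensional attention vector, the exact strides and paddings, the D-channel final image, and the fact that all generation-network parameters are trained here (the description fixes Net1 and Net2) are choices made for this sketch, and the training targets are taken to be W/8 × H/8 images.

```python
import torch
import torch.nn as nn
from transformers import BertModel

W = H = 512          # nominal image size from the description
D, S = 3, 256        # channels and attention-vector width (S chosen from {64, 128, 256})

class TextToImage(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-chinese")  # assumed checkpoint
        for p in self.encoder.parameters():
            p.requires_grad = False                      # freeze the encoder
        self.to_attn = nn.Linear(self.encoder.config.hidden_size, S)
        # Net1 (FCN): attention vector -> (W/32) x (H/32) feature map with 16*D channels
        self.net1 = nn.Linear(S, (W // 32) * (H // 32) * 16 * D)
        # Net2 (CNN): transposed convolution up-sampling to (W/8) x (H/8), 4*D channels
        self.net2 = nn.Sequential(
            nn.ConvTranspose2d(16 * D, 4 * D, kernel_size=5, stride=4,
                               padding=2, output_padding=3),
            nn.ReLU(),
        )
        # Net3 (CNN): 5x5 transposed convolution producing a D-channel image
        self.net3 = nn.Sequential(
            nn.ConvTranspose2d(4 * D, D, kernel_size=5, stride=1, padding=2),
            nn.Sigmoid(),
        )

    def forward(self, input_ids, attention_mask):
        pooled = self.encoder(input_ids, attention_mask=attention_mask).pooler_output
        attn = self.to_attn(pooled)                               # S-dimensional attention vector
        x = self.net1(attn).view(-1, 16 * D, H // 32, W // 32)    # I_net1
        x = self.net2(x)                                          # I_net2
        return self.net3(x)                                       # prepared content

# Training step: sum of squared pixel differences, optimised by back-propagation.
model = TextToImage()
optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)

def train_step(input_ids, attention_mask, target_images):
    # target_images: training predetermined images, here assumed W/8 x H/8 with D channels
    optimizer.zero_grad()
    pred = model(input_ids, attention_mask)
    loss = ((pred - target_images) ** 2).sum()   # optimisation objective from the text
    loss.backward()
    optimizer.step()
    return loss.item()
```

With W = H = 512 and D = 3, net1 maps the attention vector to a 16 × 16 × 48 feature map, net2 up-samples it to 64 × 64 × 12, and net3 produces the final 64 × 64, D-channel image compared against the training targets.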
In yet another embodiment of the present application, the predetermined multimedia information includes an image, the multimedia content generation network includes a convolutional image generation network, and the training predetermined multimedia content includes a training predetermined image. An image corresponding to the text information can be obtained more accurately through the convolutional image generation network.
In still another embodiment of the present application, as shown in fig. 2, displaying the text information and the selection icon 20 on the display interface 10 includes: displaying at least one selection icon 20 in a first area 30 of the display interface 10, where the at least one selection icon 20 includes an animation selection icon and/or an image selection icon; and displaying the text information in a second area 40 of the display interface 10. In response to a first predetermined operation acting on the animation selection icon and/or the image selection icon, the display interface 10 presents at least part of the predetermined multimedia information by playing an animation corresponding to the text information in a third area 50 of the display interface 10 and/or displaying an image corresponding to the text information in the third area 50. In this embodiment, at least one selection icon 20 is displayed in the first area 30, the text information in the second area 40, and the animation and/or image in the third area 50, which achieves a better display effect, further improves the user's experience, and further strengthens the emotional interaction of the short message.
It should be noted that, a video or a voice corresponding to the text information may also be played, and other predetermined multimedia information may also be played, and those skilled in the art may select appropriate predetermined multimedia information according to the actual situation.
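A minimal sketch of this three-area layout and the icon operation is shown below; the class, attribute, and key names ("animation", "image") are invented for illustration and do not come from the patent.

```python
class DisplayInterface:
    """Toy model of display interface 10 with its three areas."""

    def __init__(self):
        self.first_area = []    # area 30: selection icons (animation and/or image thumbnail)
        self.second_area = ""   # area 40: the short-message text
        self.third_area = None  # area 50: where the animation plays or the image is shown

    def show_message(self, text: str, media: dict) -> None:
        self.second_area = text
        self.first_area = [k for k in ("animation", "image") if k in media]

    def on_first_predetermined_operation(self, icon: str, media: dict) -> None:
        # Tapping the animation icon plays the animation; tapping the image
        # thumbnail shows the full image -- both in the third area.
        self.third_area = media[icon]
```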
In another embodiment of the present application, the selection icon includes the image selection icon, and the image selection icon is a thumbnail. In this embodiment the image selection icon is a thumbnail, which makes the display more intuitive and further improves the user experience.
In practical applications, of course, the representation is not limited to the above: other ways of presenting the image selection icon may also be used. For example, the icon may be shown highlighted or rendered with an artistic-text effect; those skilled in the art may choose a suitable representation according to the actual situation.
According to an embodiment of the present application, another method for processing text information is provided. Fig. 3 is a flowchart of a text information processing method according to an embodiment of the present application. As shown in fig. 3, the method comprises the steps of:
step S201, a sending end sends text information;
step S202, a receiving end receives the text information and generates preset multimedia information according to the text information, wherein the preset multimedia information is multimedia information except the text information;
step S203, the display interface of the receiving end displays the text information and the selection icon;
step S204, in response to a first predetermined operation acting on the selection icon, a display interface of the receiving end displays at least a part of the predetermined multimedia information.
In this method, the sending end sends text information; the receiving end receives it and generates predetermined multimedia information, i.e. multimedia information other than the text information; the text information and a selection icon are then displayed on the receiving end's display interface; and, in response to a first predetermined operation on the selection icon, the receiving end's display interface can present at least part of the predetermined multimedia information. Because the receiving end generates the multimedia from the text it receives and presents it when the user acts on the selection icon, the displayed predetermined multimedia information strengthens the emotional interaction of the short message, improves the user's visual experience while keeping the original habit of using short messages, makes viewing short messages more entertaining, and thereby meets the user's needs.
In an embodiment of the present application, before the sending end sends the text information, the method further includes: the sending end analyzes the text information using a deep neural network model to generate the predetermined multimedia information, where the model is trained by machine learning using multiple groups of data, each group including training text information and training predetermined multimedia content corresponding to that training text information; the sending end's display interface then displays the predetermined multimedia information. In this embodiment the sending end feeds the text information into the trained model; because each training group pairs text with its corresponding multimedia content (for example, "happy" with a smiling face and "sad" with a crying face), analyzing the text information with the model yields accurate predetermined multimedia information, which the sending end's display interface shows before the message is sent.
In another embodiment of the present application, the sending end sends the text message to the receiving end, including: the sending terminal displays an information editing interface; and in response to a second preset operation acted on the information editing interface, the sending end generates the text information and sends the text information to the receiving end. In this embodiment, accurate text information may be generated at the sending end by responding to the second predetermined operation acting on the information editing interface, and then the text information may be sent to the receiving end, and the subsequent receiving end may receive the accurate text information.
In another embodiment of the present application, in response to a second predetermined operation acting on the information editing interface, the sending end generates the text information, which includes at least one of the following: the sending end displays an information topic list and, in response to a first sub-predetermined operation acting on the information topic list, generates the text information including topic information; the sending end displays a text editing window and, in response to a second sub-predetermined operation for editing the body text, generates the text information including the body text information. Sending the text information to the receiving end includes: the sending end encrypts and compresses the text information, including the topic information and/or the body text information, to generate an encrypted file; and the sending end sends the encrypted file to the receiving end. In this embodiment, the information topic list is displayed at the sending end so that the user can select a topic; responding to the first sub-predetermined operation captures the topic the user selected, and responding to the second sub-predetermined operation yields more accurate text information. The sending end then encrypts and compresses the text information, producing a secure encrypted file and ensuring the safety of the text information.
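A matching sending-end sketch is shown below, under the same hypothetical payload format (JSON fields "topic_ids" and "body") and the same placeholder XOR cipher as the receive-side sketch; the compress-then-encrypt order is a choice for this sketch, and it illustrates only the packaging, not the information editing interface itself.

```python
import json
import zlib

def xor_cipher(blob: bytes, key: bytes) -> bytes:
    # Placeholder cipher; symmetric, so it serves for both encryption and decryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def build_sms(topic_ids: list[int], body: str, key: bytes) -> bytes:
    # The user has picked topics from the information topic list (first
    # sub-predetermined operation) and edited the body text (second
    # sub-predetermined operation); compress and encrypt the result.
    payload = json.dumps({"topic_ids": topic_ids, "body": body}).encode("utf-8")
    return xor_cipher(zlib.compress(payload), key)

# Example corresponding to the message discussed earlier.
encrypted = build_sms([25, 1, 3, 11, 98, 32],
                      "The scenery here is beautiful. Let's go take a look when we have time!",
                      key=b"demo-key")
```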
The embodiment of the present application further provides a text information processing apparatus, and it should be noted that the text information processing apparatus according to the embodiment of the present application may be used to execute the processing method for text information provided in the embodiment of the present application. The following describes a text information processing apparatus according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a text information processing apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
a first receiving unit 100, configured to receive text information sent by a sending end;
a first generating unit 200, configured to generate predetermined multimedia information according to the text information, where the predetermined multimedia information is multimedia information other than the text information;
a first display unit 300 for displaying the text information and the selection icon on a display interface;
a second display unit 400, configured to respond to a first predetermined operation acting on the selection icon, where the display interface displays at least a part of the predetermined multimedia information.
In this device, the first receiving unit receives text information sent by the sending end; the first generating unit generates predetermined multimedia information, i.e. multimedia information other than the text information, from it; the first display unit displays the text information and a selection icon on the display interface; and, in response to a first predetermined operation on the selection icon, the second display unit causes the display interface to show at least part of the predetermined multimedia information. The displayed predetermined multimedia information strengthens the emotional interaction of the short message, improves the user's visual experience while keeping the original habit of using short messages, makes viewing short messages more entertaining, and thereby meets the user's needs.
In an embodiment of the present application, the first generating unit includes a first input module and an output module. The first input module is configured to input the text information into a deep neural network model trained by machine learning using multiple groups of data, each group including training text information and training predetermined multimedia content corresponding to that training text information; the output module is configured to process the text information and output the predetermined multimedia information. Because each training group pairs text with its corresponding multimedia content, i.e. the two are linked by a mapping relationship (for example, "happy" with a smiling face and "sad" with a crying face), processing the text information with the model yields accurate predetermined multimedia information.
In another embodiment of the present application, the first input module includes a processing sub-module and an input sub-module. The processing sub-module is configured to process the text information to obtain topic information and/or body text information, and the input sub-module is configured to input the topic information and/or the body text information into the deep neural network model. In this embodiment, processing the text information first yields more accurate topic information and/or body text information, and inputting these into the deep neural network model in turn yields more accurate predetermined multimedia information.
In another embodiment of the present application, the processing sub-module is further configured to decrypt and decompress the text information; to determine whether the decrypted and decompressed text information includes a topic index number; and, if it does, to determine the topic information corresponding to the topic index number according to a predetermined mapping relationship. In this embodiment, decrypting and decompressing the received text information recovers its content, including the topic index number and the body text information.
In a specific embodiment, the text information further includes a topic index number list (corresponding to the predetermined mapping relationship), topic index numbers, and body text; the topic index number list may be as shown in Table 1 above.
For example, a text message may carry the topic index numbers (25, 1, 3, 11, 98, 32) and body text such as "The scenery here is beautiful. Where was it taken? Let's go take a look when we have time!".
In another specific embodiment, the receiving end stores the topic index number list. The processing sub-module decrypts and decompresses the text information and determines whether the result includes a topic index number. If it does, the sub-module checks whether the receiving end's topic index number list contains that index number and, if so, determines the corresponding topic information from that list. In this embodiment, decrypting and decompressing the received text information recovers its content, including the topic index numbers and the body text, and when the receiving end's topic index number list contains a received index number, the corresponding topic information can be determined accurately from that list.
In another embodiment of the present application, the apparatus further includes a building unit, where the building unit is configured to build the deep neural network model before inputting the text information into the deep neural network model, and the building unit includes a conversion module, a second input module, a third input module, and an optimization module, and the conversion module is configured to convert the training text information into a word embedding vector; the second input module is used for inputting the word embedding vector into a coding neural network, and coding the word embedding vector to obtain an attention vector; the third input module is used for inputting the attention vector into at least one multimedia content generation network to generate prepared preset multimedia content; the optimization module is used for optimizing the parameters of the coding neural network and the multimedia content generation network according to the prepared preset multimedia content and the training preset multimedia content, and constructing and obtaining the deep neural network model. In the embodiment, an accurate deep neural network model can be obtained, and the text information can be processed through the obtained deep neural network model subsequently.
In a specific implementation, the network structure of the deep neural network model is a Transformer encoder + FCN Net1 + CNN Net2 + CNN Net3, where FCN Net1 and CNN Net2 use fixed parameters and the network parameters of CNN Net3 are obtained by training. A pre-trained text feature extractor such as Google's Transformer-based self-attention network BERT (Base) may be selected, part of the encoder parameters are frozen, and the self-attention vector output by the encoding layer is used as the input of the multimedia content generation network Net1. The procedure is as follows. The training text information may include training topic information and training body text information. The training topic information is converted, by way of word vectors, into M training topic word embedding vectors V1 of dimension 1 × L, and the training body text information into N training body word embedding vectors V2 of dimension 1 × L, where M is the number of training topic words and N is the number of training body words. The M vectors V1 and the N vectors V2 are stacked row by row into an (M + N) × L matrix, denoted V, which is input to the encoding neural network. The encoding neural network processes V row by row and encodes the word embedding vectors into an attention vector of dimension 1 × S. The attention vector is input to the first multimedia content generation network, which, through convolution and up-sampling, outputs an intermediate image I_net1 of size W/32 × H/32 with D × 16 channels; its parameters to be trained are recorded as the tensor W_net1 with dimensions [W/32, W/32, D × 16, S]. I_net1 is input to the second multimedia content generation network, which, through transposed convolution and up-sampling, outputs an intermediate image I_net2 of size W/8 × H/8 with D × 4 channels; its parameters to be trained are the tensor W_net2 with dimensions [5, 5, D × 4, D × 16]. I_net2 is input to the third multimedia content generation network, which, through transposed convolution and up-sampling, outputs the prepared predetermined multimedia content of size W/8 × H/8 with D × 4 channels; its parameters to be trained are the tensor W_net3 with dimensions [5, 5, D, D × 4]. The parameters of the encoding neural network and the multimedia content generation networks are then optimized according to the prepared predetermined multimedia content and the training predetermined multimedia content: the difference between the two is computed, the sum of squared pixel differences is taken as the optimization objective, and W_net1, W_net2 and W_net3 are solved iteratively by the back-propagation algorithm until enough iterations have been run or the objective function no longer decreases, yielding the deep neural network model. Here W = H = 512, D = 3, and S may be chosen from {64, 128, 256}; S is the second dimension of the encoded attention vector and may also be any other feasible number, for example a power of 2.
In yet another embodiment of the present application, the predetermined multimedia information includes an image, the multimedia content generation network includes a convolutional image generation network, and the training predetermined multimedia content includes a training predetermined image. An image corresponding to the text information can be obtained more accurately through the convolutional image generation network.
In still another embodiment of the present application, as shown in fig. 2, the first display unit includes a first display module and a second display module, the first display module is configured to display at least one of the selection icons 20 in the first area 30 of the display interface 10, and at least one of the selection icons 20 includes an animation selection icon and/or an image selection icon; the second display module is configured to display the text message in a second area 40 of the display interface 10; the second display unit includes a third display module, and the third display module is configured to play an animation corresponding to the text message in the third area 50 of the display interface 10, and/or display an image corresponding to the text message in the third area 50 of the display interface 10. In this embodiment, at least one selection icon 20 is displayed in the first area 30, text information is displayed in the second area 40, and animation and/or images are displayed in the third area 50, so that a better display effect can be achieved, the experience effect of the user is further improved, and the emotion interaction function of the short message is further enhanced.
It should be noted that, a video or a voice corresponding to the text information may also be played, and other predetermined multimedia information may also be played, and those skilled in the art may select appropriate predetermined multimedia information according to the actual situation.
In another embodiment of the present application, the selection icon includes the image selection icon, and the image selection icon is a thumbnail. In this embodiment the image selection icon is a thumbnail, which makes the display more intuitive and further improves the user experience.
In practical applications, of course, the representation is not limited to the above: other ways of presenting the image selection icon may also be used. For example, the icon may be shown highlighted or rendered with an artistic-text effect; those skilled in the art may choose a suitable representation according to the actual situation.
According to an embodiment of the present application, there is provided another processing apparatus of text information. Fig. 5 is a schematic diagram of a text information processing apparatus according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
a first transmitting unit 500 for transmitting text information;
a second receiving unit 600, configured to receive the text message at a receiving end, and generate predetermined multimedia information according to the text message, where the predetermined multimedia information is multimedia information other than the text message;
a third display unit 700 for displaying the text information and the selection icon on a display interface;
a fourth display unit 800, configured to respond to a first predetermined operation acting on the selection icon, where the display interface displays at least a part of the predetermined multimedia information.
In the apparatus, the first sending unit sends text information; the second receiving unit receives the text information sent by the sending end and generates predetermined multimedia information according to the text information, where the predetermined multimedia information is multimedia information other than the text information; the third display unit displays the text information and the selection icon on the display interface of the receiving end; and the fourth display unit, in response to the first predetermined operation acting on the selection icon, causes the display interface to display at least part of the predetermined multimedia information. In this apparatus, the predetermined multimedia information can thus be generated from the text information received from the sending end, the display interface of the receiving end displays the text information and the selection icon, and at least part of the predetermined multimedia information can be displayed on the display interface in response to the first predetermined operation acting on the selection icon. The displayed predetermined multimedia information enhances the emotion interaction function of the short message, improves the user's visual experience while preserving the original short-message usage habits, makes viewing short messages more entertaining, and meets the user's needs.
In an embodiment of the present application, the apparatus further includes a second generating unit and a fifth display unit. The second generating unit is configured to analyze the text information by using a deep neural network model, before the text information is sent, so as to generate the predetermined multimedia information, where the deep neural network model is trained through machine learning by using multiple sets of data, and each set of data includes training text information and training predetermined multimedia content corresponding to the training text information. The fifth display unit is configured to display the predetermined multimedia information on a display interface. In this embodiment, the second generating unit analyzes the text information by inputting it into the deep neural network model. The training data establish a mapping relationship between training text information and predetermined multimedia content; for example, the training text information "happy" corresponds to a smiling face, and the training text information "sad" corresponds to a crying face. By analyzing the text information with the deep neural network model, accurate predetermined multimedia information can be obtained.
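Purely as an illustration of the mapping relationship described above, the following Python sketch pairs training text information with predetermined multimedia content and uses a simple keyword lookup as a stand-in for the trained deep neural network model; the keywords and file names are assumptions made for the example.

# Illustrative sketch only: the pairs below mirror the mapping relationship
# described above; the keyword lookup stands in for a trained model.
TRAINING_PAIRS = [
    ("happy",    "smiling_face.png"),   # training text information -> predetermined content
    ("sad",      "crying_face.png"),
    ("congrats", "fireworks.gif"),
]

def analyze_text(text_information: str) -> str:
    """Return the predetermined multimedia content mapped to the text."""
    for keyword, multimedia in TRAINING_PAIRS:
        if keyword in text_information.lower():
            return multimedia
    return "default.png"

print(analyze_text("I am so happy today"))  # -> smiling_face.png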
In another embodiment of the present application, the first sending unit includes a fourth display module and a generating module. The fourth display module is configured to display an information editing interface; the generating module is configured to generate the text information in response to a second predetermined operation acting on the information editing interface and to send the text information to the receiving end. In this embodiment, accurate text information is generated at the sending end in response to the second predetermined operation acting on the information editing interface and is then sent to the receiving end, so that the receiving end subsequently receives accurate text information.
In yet another embodiment of the present application, the generating module includes a first generating sub-module and a second generating sub-module. The first generating sub-module is configured to display an information theme list and to generate the text information including theme information in response to a first sub-predetermined operation acting on the information theme list; the second generating sub-module is configured to display a text editing window and to generate the text information including body information in response to a second sub-predetermined operation of editing the body information. The generating module further includes a third generating sub-module and a sending sub-module: the third generating sub-module is configured to encrypt and compress the text information including the theme information and/or the body information to generate an encrypted file, and the sending sub-module is configured to send the encrypted file to the receiving end. In this embodiment, the information theme list is displayed at the sending end so that the user can select an information theme; the theme selected by the user is obtained in response to the first sub-predetermined operation, more accurate text information is obtained in response to the second sub-predetermined operation, and the sending end encrypts and compresses the text information to obtain a secure encrypted file, thereby ensuring the security of the text information.
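The following sketch illustrates one possible way to compress and encrypt the composed text information into an encrypted file. It assumes the third-party Python `cryptography` package (Fernet) together with the standard zlib module; the theme list, the JSON field layout and the key handling are assumptions made for the example, not the format defined by the present application.

# Minimal sketch of composing and protecting the text information (assumes the
# third-party `cryptography` package; fields and key handling are illustrative).
import json
import zlib
from typing import Optional
from cryptography.fernet import Fernet

THEME_LIST = {1: "Birthday", 2: "Holiday", 3: "Congratulations"}  # hypothetical stand-in for Table 1

key = Fernet.generate_key()   # in practice the key would be provisioned to both ends
fernet = Fernet(key)

def build_encrypted_file(theme_index: Optional[int], body: Optional[str]) -> bytes:
    # Compose the text information, compress it, then encrypt it into the file
    # that the sending end transmits to the receiving end.
    text_information = {"theme_index": theme_index, "body": body}
    compressed = zlib.compress(json.dumps(text_information).encode("utf-8"))
    return fernet.encrypt(compressed)

encrypted_file = build_encrypted_file(theme_index=2, body="Happy holidays!")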
There is also provided, in accordance with an embodiment of the present application, electronic equipment including one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for executing any one of the above-described text information processing methods.
In the electronic equipment, by providing the processor and the memory, the text information sent by the sending end is received, predetermined multimedia information is generated according to the text information, where the predetermined multimedia information is multimedia information other than the text information, the text information and the selection icon are displayed on the display interface, and at least part of the predetermined multimedia information can be displayed on the display interface in response to the first predetermined operation acting on the selection icon. The displayed predetermined multimedia information enhances the emotion interaction function of the short message, improves the user's visual experience while preserving the original short-message usage habits, makes viewing short messages more entertaining, and meets the user's needs.
According to an embodiment of the present application, there is also provided an electronic system, including a sending end and a receiving end, where the sending end is configured to send text information, the receiving end is configured to communicate with the sending end, and the receiving end is configured to execute any one of the text information processing methods.
In the electronic system, the sending end and the receiving end are provided, so that the first sending unit sends text information, the second receiving unit receives the text information sent by the sending end and generates predetermined multimedia information according to the text information, where the predetermined multimedia information is multimedia information other than the text information, the third display unit displays the text information and the selection icon on the display interface of the receiving end, and the fourth display unit, in response to the first predetermined operation acting on the selection icon, causes the display interface to display at least part of the predetermined multimedia information. The displayed predetermined multimedia information enhances the emotion interaction function of the short message, improves the user's visual experience while preserving the original short-message usage habits, makes viewing short messages more entertaining, and meets the user's needs.
In order to make the technical solutions of the present application more clearly understood by those skilled in the art, the technical solutions and technical effects of the present application will be described below with reference to specific embodiments.
Examples
The electronic system includes a sending end and a receiving end, where the sending end is used to send text information, as shown in fig. 6, the work flow of the sending end is as follows:
the sending end displays an information editing interface and a theme information index number list (the display effect of the theme index number list is shown in Table 1), and determines whether theme information is selected;
if no theme information is selected, the user can edit the body text to generate text information including the body information, the sending end encrypts and compresses the text information including the body information to generate an encrypted file, and the sending end sends the encrypted file to the receiving end;
if theme information is selected, it is determined whether the body information is empty: if the body information is empty, text information including only the theme information is generated, and the sending end encrypts and compresses it into an encrypted file and sends the encrypted file to the receiving end; if the body information is not empty, text information including the theme information and the body information is generated, and the sending end encrypts and compresses it into an encrypted file and sends the encrypted file to the receiving end (see the illustrative sketch below).
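The sketch below illustrates the sending-end decision flow just described. It reuses the hypothetical build_encrypted_file helper from the earlier sketch and assumes a hypothetical send_to_receiver transport function; it is illustrative only.

# Illustrative sending-end decision flow (hypothetical helpers, no real transport).
def send_text_information(theme_index, body, send_to_receiver):
    if theme_index is None:
        # No theme selected: the text information carries only the body text.
        encrypted_file = build_encrypted_file(None, body)
    elif not body:
        # Theme selected, body empty: carry only the theme information.
        encrypted_file = build_encrypted_file(theme_index, None)
    else:
        # Theme selected and body present: carry both.
        encrypted_file = build_encrypted_file(theme_index, body)
    send_to_receiver(encrypted_file)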
The receiving end communicates with the transmitting end, as shown in fig. 7, the working flow of the receiving end is as follows:
the receiving end receives the text information sent by the sending end, decrypts and decompresses it, and determines whether the decrypted and decompressed text information includes a theme index number; if a theme index number is included, the theme information corresponding to the theme index number is determined according to a preset mapping relation, so that the theme information and/or the text information are obtained (see the illustrative sketch following these steps);
inputting the theme information and/or the text information into a deep neural network model, where the deep neural network model is trained through machine learning by using multiple sets of data, and each set of data includes training text information and training predetermined multimedia content corresponding to the training text information; the deep neural network model processes the input and outputs the predetermined multimedia information;
displaying at least one selection icon in a first area of a display interface, wherein the at least one selection icon comprises an animation selection icon and/or an image selection icon;
displaying the text information in a second area of the display interface;
and playing animation corresponding to the text information in a third area of the display interface, and/or displaying an image corresponding to the text information in the third area of the display interface.
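The following sketch illustrates the receiving-end steps above: decrypting and decompressing the received file, resolving the theme index number through a preset mapping, and handing the result to a model stand-in. It mirrors the hypothetical sender-side sketches (build_encrypted_file, analyze_text) and is illustrative only; an actual implementation would then play or display the returned multimedia in the third area of the display interface.

# Illustrative receiving-end flow (hypothetical helpers from the earlier sketches).
import json
import zlib

THEME_MAPPING = {1: "Birthday", 2: "Holiday", 3: "Congratulations"}  # preset mapping relation

def receive_text_information(encrypted_file: bytes, fernet) -> dict:
    # Decrypt, then decompress, the file produced by the sending end.
    decompressed = zlib.decompress(fernet.decrypt(encrypted_file))
    text_information = json.loads(decompressed)

    theme_index = text_information.get("theme_index")
    theme = THEME_MAPPING.get(theme_index) if theme_index is not None else None
    body = text_information.get("body")

    # Deep neural network model stand-in: map the text to predetermined multimedia.
    multimedia = analyze_text(" ".join(filter(None, [theme, body])))
    return {"theme": theme, "body": body, "multimedia": multimedia}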
According to the above scheme, the predetermined multimedia information can be generated from the text information received from the sending end, the display interface of the receiving end displays the text information and the selection icon, and at least part of the predetermined multimedia information can be displayed on the display interface in response to the first predetermined operation acting on the selection icon. The displayed predetermined multimedia information enhances the emotion interaction function of the short message, improves the user's visual experience while preserving the original short-message usage habits, makes viewing short messages more entertaining, and meets the user's needs.
The text information processing device comprises a processor and a memory, wherein the first receiving unit, the first generating unit, the first display unit, the second display unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and the emotion interaction function of the short message is enhanced by adjusting the kernel parameters.
The memory may include volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM), in a computer-readable medium, and the memory includes at least one memory chip.
An embodiment of the present invention provides a storage medium on which a program is stored, which, when executed by a processor, implements the above-described method for processing text information.
The embodiment of the invention provides a processor, which is used for running a program, wherein the processing method of the text information is executed when the program runs.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein when the processor executes the program, at least the following steps are realized:
step S101, receiving text information sent by a sending end;
step S102, generating predetermined multimedia information according to the text information, wherein the predetermined multimedia information is multimedia information except the text information;
step S103, displaying the text information and the selection icon on a display interface;
step S104, in response to a first predetermined operation acting on the selection icon, displaying at least a part of the predetermined multimedia information on the display interface, or,
step S201, a sending end sends text information;
step S202, a receiving end receives the text information and generates preset multimedia information according to the text information, wherein the preset multimedia information is multimedia information except the text information;
step S203, the display interface of the receiving end displays the text information and the selection icon;
step S204, in response to a first predetermined operation acting on the selection icon, a display interface of the receiving end displays at least a part of the predetermined multimedia information.
The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product adapted, when executed on a data processing device, to execute a program initialized with at least the following method steps:
step S101, receiving text information sent by a sending end;
step S102, generating predetermined multimedia information according to the text information, wherein the predetermined multimedia information is multimedia information except the text information;
step S103, displaying the text information and the selection icon on a display interface;
step S104, in response to a first predetermined operation acting on the selection icon, displaying at least a part of the predetermined multimedia information on the display interface, or,
step S201, a sending end sends text information;
step S202, a receiving end receives the text information and generates preset multimedia information according to the text information, wherein the preset multimedia information is multimedia information except the text information;
step S203, the display interface of the receiving end displays the text information and the selection icon;
step S204, in response to a first predetermined operation acting on the selection icon, a display interface of the receiving end displays at least a part of the predetermined multimedia information.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
From the above description, it can be seen that the above-described embodiments of the present application achieve the following technical effects:
1) In the text information processing method of the present application, text information sent by a sending end is received, predetermined multimedia information is generated according to the text information, the text information and a selection icon are displayed on a display interface, and, in response to a first predetermined operation acting on the selection icon, at least part of the predetermined multimedia information is displayed on the display interface. The displayed predetermined multimedia information enhances the emotion interaction function of the short message, improves the user's visual experience while preserving the original short-message usage habits, makes viewing short messages more entertaining, and meets the user's needs.
2) In the other text information processing method of the present application, a sending end first sends text information, a receiving end receives the text information and generates predetermined multimedia information according to it, where the predetermined multimedia information is multimedia information other than the text information, the text information and a selection icon are then displayed on the display interface of the receiving end, and, in response to a first predetermined operation acting on the selection icon, at least part of the predetermined multimedia information can be displayed on that display interface, achieving the same effects as described in 1) at the receiving end.
3) In the text information processing apparatus of the present application, the first receiving unit receives text information sent by a sending end, the first generating unit generates predetermined multimedia information according to the text information, where the predetermined multimedia information is multimedia information other than the text information, the first display unit displays the text information and a selection icon on a display interface, and the second display unit, in response to a first predetermined operation acting on the selection icon, causes the display interface to display at least part of the predetermined multimedia information, achieving the effects described in 1).
4) In the other text information processing apparatus of the present application, the first sending unit sends text information, the second receiving unit receives the text information sent by the sending end and generates predetermined multimedia information according to it, the third display unit displays the text information and a selection icon on the display interface of the receiving end, and the fourth display unit, in response to a first predetermined operation acting on the selection icon, causes the display interface to display at least part of the predetermined multimedia information, achieving the effects described in 1).
5) In the electronic equipment of the present application, the processor and the memory enable the equipment to receive text information sent by a sending end, generate predetermined multimedia information according to the text information, display the text information and a selection icon on the display interface, and display at least part of the predetermined multimedia information in response to a first predetermined operation acting on the selection icon, achieving the effects described in 1).
6) In the electronic system of the present application, the sending end and the receiving end cooperate so that the text information sent by the sending end is received by the receiving end, predetermined multimedia information is generated according to the text information, the text information and a selection icon are displayed on the display interface of the receiving end, and at least part of the predetermined multimedia information is displayed in response to a first predetermined operation acting on the selection icon, achieving the effects described in 1).
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A method for processing text information, comprising:
receiving text information sent by a sending end;
generating predetermined multimedia information according to the text information, wherein the predetermined multimedia information is multimedia information except the text information;
displaying the text information and the selection icon on a display interface;
in response to a first predetermined operation acting on the selection icon, the display interface presents at least part of the predetermined multimedia information,
wherein displaying the text information and the selection icon on the display interface comprises:
displaying at least one selection icon in a first area of the display interface, wherein the at least one selection icon comprises an animation selection icon and/or an image selection icon;
displaying the text information in a second area of the display interface;
and wherein, in response to the first predetermined operation acting on the animation selection icon and/or the image selection icon, the display interface presenting at least part of the predetermined multimedia information comprises:
and playing animation corresponding to the text information in a third area of the display interface, and/or displaying an image corresponding to the text information in the third area of the display interface.
2. The processing method according to claim 1, wherein generating predetermined multimedia information from the text information comprises:
inputting the text information into a deep neural network model, wherein the deep neural network model is trained through machine learning by using multiple groups of data, and each group of data in the multiple groups of data comprises: training text information and training predetermined multimedia content corresponding to the training text information;
and the deep neural network model processes the text information and outputs the preset multimedia information.
3. The processing method of claim 2, wherein inputting the text information into a deep neural network model comprises:
processing the text information to obtain subject information and/or text information;
and inputting the subject information and/or the text information into the deep neural network model.
4. The processing method according to claim 3, wherein processing the text information to obtain subject information and/or body information comprises:
decrypting and decompressing the text information;
determining whether the decrypted and decompressed text information comprises a theme index number or not;
and determining the theme information corresponding to the theme index number according to a preset mapping relation under the condition of comprising the theme index number.
5. The processing method according to any one of claims 2 to 4, wherein before inputting the text information into a deep neural network model, the processing method further comprises: constructing the deep neural network model, wherein the constructing of the deep neural network model comprises the following steps:
converting the training text information into word embedding vectors;
inputting the word embedding vector into a coding neural network, and coding the word embedding vector to obtain an attention vector;
inputting the attention vector into at least one multimedia content generation network to generate prepared predetermined multimedia content;
and optimizing parameters of the coding neural network and the multimedia content generating network according to the prepared preset multimedia content and the training preset multimedia content to construct and obtain the deep neural network model.
6. The processing method of claim 5, wherein the multimedia content generation network comprises a convolutional image generation network, and wherein the training predetermined multimedia content comprises a training predetermined image.
7. The processing method according to claim 1, wherein the selection icon includes the image selection icon, and the image selection icon is a thumbnail image.
8. A method for processing text information, comprising:
a sending end sends text information;
the receiving end receives the text information and generates preset multimedia information according to the text information, wherein the preset multimedia information is multimedia information except the text information;
a display interface of the receiving end displays the text information and the selection icon;
in response to a first predetermined operation acting on the selection icon, the display interface of the receiving end displays at least part of the predetermined multimedia information,
wherein the display interface of the receiving end displaying the text information and the selection icon comprises:
the first area of the display interface displays at least one selection icon, and the at least one selection icon comprises an animation selection icon and/or an image selection icon;
displaying the text information in a second area of the display interface;
and wherein, in response to the first predetermined operation acting on the animation selection icon and/or the image selection icon, the display interface of the receiving end displaying at least part of the predetermined multimedia information comprises:
and playing the animation corresponding to the text information in the third area of the display interface, and/or displaying the image corresponding to the text information in the third area of the display interface.
9. The processing method of claim 8, wherein before the sender sends the text message, the method further comprises:
the sending end analyzes the text information by adopting a deep neural network model to generate the preset multimedia information, wherein the deep neural network model is trained by using multiple groups of data through machine learning, and each group of data in the multiple groups of data comprises: training text information and training predetermined multimedia content corresponding to the training text information;
and the display interface of the sending end displays the preset multimedia information.
10. An apparatus for processing text information, comprising:
the first receiving unit is used for receiving the text information sent by the sending end;
the first generating unit is used for generating preset multimedia information according to the text information, wherein the preset multimedia information is multimedia information except the text information;
the first display unit is used for displaying the text information and the selection icon on a display interface;
a second display unit for responding to a first predetermined operation acted on the selection icon, wherein the display interface displays at least part of the predetermined multimedia information,
the first display unit comprises a first display module and a second display module, the first display module is used for displaying at least one selection icon in a first area of the display interface, and the at least one selection icon comprises an animation selection icon and/or an image selection icon; the second display module is used for displaying the text information in a second area of the display interface; the second display unit comprises a third display module, and the third display module is used for playing the animation corresponding to the text information in a third area of the display interface and/or displaying the image corresponding to the text information in the third area of the display interface.
11. A computer-readable storage medium, characterized in that the storage medium comprises a stored program, wherein the program performs the method of any one of claims 1 to 7.
12. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 7.
13. An electronic device, comprising: one or more processors, memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of processing textual information of any of claims 1-7.
14. An electronic system, comprising:
a sending end configured to send text information;
a receiving end, in communication with the sending end, the receiving end being configured to execute the text information processing method according to any one of claims 1 to 7.
CN202011005074.6A 2020-09-22 2020-09-22 Text information processing method and device, storage medium, processor and electronic equipment Active CN112118359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011005074.6A CN112118359B (en) 2020-09-22 2020-09-22 Text information processing method and device, storage medium, processor and electronic equipment


Publications (2)

Publication Number Publication Date
CN112118359A CN112118359A (en) 2020-12-22
CN112118359B true CN112118359B (en) 2021-06-29

Family

ID=73800979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011005074.6A Active CN112118359B (en) 2020-09-22 2020-09-22 Text information processing method and device, storage medium, processor and electronic equipment

Country Status (1)

Country Link
CN (1) CN112118359B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN110019883A (en) * 2017-07-18 2019-07-16 腾讯科技(深圳)有限公司 Obtain the method and device of expression picture

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103259913B (en) * 2012-02-17 2016-04-13 百度在线网络技术(北京)有限公司 For the multimedia message editing method of mobile terminal, device and mobile terminal
US20180074661A1 (en) * 2016-09-14 2018-03-15 GM Global Technology Operations LLC Preferred emoji identification and generation


Also Published As

Publication number Publication date
CN112118359A (en) 2020-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant