CN114296560A - Method, apparatus, medium, and program product for presenting text messages - Google Patents


Publication number
CN114296560A
Authority
CN
China
Prior art keywords
user, text message, text, predetermined, determining whether
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111641930.1A
Other languages
Chinese (zh)
Inventor
刘雅楠
Current Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202111641930.1A
Publication of CN114296560A
Legal status: Pending

Abstract

An object of the present application is to provide a method, apparatus, medium, and program product for presenting a text message. The method comprises: acquiring a text message sent by a second user in a target application, wherein the first user corresponding to the first user equipment and the second user are different users in the target application; determining whether the text message matches at least one of one or more predetermined animation effects; and if so, presenting the text message in the target application and executing, for the text message, the at least one predetermined animation effect matched with the first text content. By triggering corresponding animation effects for specific text sent by users in the target application, the present application can enhance the interactivity and appeal of text messages in the target application and improve the user experience of the target application.

Description

Method, apparatus, medium, and program product for presenting text messages
Technical Field
The present application relates to the field of communications, and more particularly, to a technique for presenting text messages.
Background
In the prior art, a user may send a chat message in a private chat session or a group chat session of a social application, and the chat message is visible to the conversation partner of the private chat session or to the group members of the group chat session. When the user watches a video in a video application, the user may send a comment message about the video, which is visible to other users watching the video. When the user watches a live broadcast in a live room of a live-streaming application, the user may send a bullet-screen (danmaku) message about the live broadcast, which is visible to other users in the live room.
Disclosure of Invention
It is an object of the present application to provide a method, apparatus, medium, and program product for presenting a text message.
According to one aspect of the present application, there is provided a method for presenting a text message, the method comprising:
acquiring a text message sent by a second user in a target application, wherein the first user corresponding to the first user equipment and the second user are different users in the target application;
determining whether the text message matches at least one of one or more predetermined animation effects;
and if so, presenting the text message in the target application, and executing, for the text message, at least one predetermined animation effect matched with the first text content.
According to one aspect of the present application, there is provided a first user equipment for presenting a text message, the equipment comprising:
a first module, configured to obtain a text message sent by a second user in a target application, where the first user corresponding to the first user equipment and the second user are different users in the target application;
a second module for determining whether the text message matches at least one of one or more predetermined animation effects;
and a third module, configured to, if so, present the text message in the target application and execute, for the text message, at least one predetermined animation effect matched with the first text content.
According to one aspect of the application, there is provided a computer device for presenting a text message, comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement the operations of any of the methods described above.
According to an aspect of the application, there is provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the operations of any of the methods described above.
According to an aspect of the application, a computer program product is provided, comprising a computer program which, when executed by a processor, carries out the steps of any of the methods as described above.
Compared with the prior art, the present application obtains a text message sent by a second user in a target application (a social application, a video application, a live-streaming application, etc.) and determines whether the text message matches at least one of one or more predetermined animation effects; if so, the text message is presented in the target application and the at least one predetermined animation effect matched with the first text content is executed for the text message. By triggering corresponding animation effects for specific text sent by users in the target application, the interactivity and appeal of text messages in the target application can be enhanced, and the user experience of the target application can be improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method for presenting a text message according to one embodiment of the present application;
FIG. 2 illustrates a first user equipment block diagram for presenting a text message according to one embodiment of the present application;
FIG. 3 illustrates a schematic diagram of presenting a text message;
FIG. 4 illustrates a schematic diagram of presenting a text message;
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
Memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in the present application includes, but is not limited to, a terminal, a network device, or a device formed by integrating a terminal and a network device through a network. The terminal includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smart phone or a tablet computer, and the mobile electronic product may employ any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or web servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the terminal, the network device, or a device formed by integrating the terminal and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 shows a flowchart of a method for presenting a text message according to an embodiment of the present application, the method comprising step S11, step S12, and step S13. In step S11, a first user device obtains a text message sent by a second user in a target application, where the first user corresponding to the first user device and the second user are different users in the target application; in step S12, the first user device determines whether the text message matches at least one of one or more predetermined animation effects; in step S13, if so, the first user device presents the text message in the target application and executes the at least one predetermined animation effect for the text message.
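As an illustrative sketch only (not the patent's implementation; all names and the keyword table are hypothetical), the three steps above could be organized as follows on the first user device:

```python
# Hypothetical sketch of steps S11-S13: a text message arrives from the
# second user (S11), is matched against predetermined animation effects
# (S12), and is presented together with any matched effects (S13).

# Assumed keyword-to-effect table; the patent leaves the matching rule open.
PREDETERMINED_EFFECTS = {
    "crack": "text cracking effect",
    "shake": "text dithering effect",
}

def match_effects(text_message):
    """Step S12: return the predetermined effects whose keyword appears in the message."""
    return [effect for keyword, effect in PREDETERMINED_EFFECTS.items()
            if keyword in text_message]

def present(text_message):
    # Step S11 happened upstream: text_message was received from the second user.
    effects = match_effects(text_message)
    # Step S13: present the message; execute matched effects if any.
    return {"text": text_message, "effects": effects}
```

A message such as "crack it" would be presented with the text cracking effect attached, while a message matching no keyword is presented plainly with an empty effect list.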
In step S11, a first user device acquires a text message sent by a second user in a target application, where the first user corresponding to the first user device and the second user are different users in the target application. In some embodiments, the first user device is the user device used by the first user, and the first user and the second user are two different users in the target application. In some embodiments, the target application includes, but is not limited to, a social application, a video application, and a live-streaming application. For example, if the target application is a social application, the text message may be a conversation message sent by the second user in a certain private chat session or group session; if the target application is a video application, the text message may be a comment message sent by the second user while watching a certain video; and if the target application is a live-streaming application, the text message may be a bullet-screen message sent by the second user in a certain live room. In some embodiments, the text message may be sent directly to the first user device by the second user device used by the second user, or may be sent to the first user device by the second user device via the network device corresponding to the target application.
In step S12, the first user device determines whether the text message matches at least one of one or more predetermined animation effects. In some embodiments, the predetermined animation effect includes, but is not limited to, any animation effect associated with the text, such as a text cracking effect, a text portion stroke enlargement effect, a text dithering effect, a text scrolling effect, and the like. In some embodiments, the specific determination may be to determine whether the text message includes the first text content matching at least one of the one or more predetermined animation effects, or may also be to determine whether the voice information and/or the user emotion information corresponding to the text message matches at least one of the one or more predetermined animation effects. In some embodiments, if the text message does not match any of the one or more predetermined animation effects, the text message is presented directly in the target application without performing the one or more predetermined animation effects on the text message. 
In some embodiments, if the text message matches a plurality of the one or more predetermined animation effects, a target predetermined animation effect may be selected among them. The selection may be based on the matching degree (or similarity) between each predetermined animation effect and the text message, choosing the effect with the highest matching degree; or it may be based on the presentation priority corresponding to each predetermined animation effect, choosing the effect with the highest presentation priority. The text message is then presented in the target application, and the target predetermined animation effect is executed for the text message. In some embodiments, the one or more predetermined animation effects may be configured by the target application by default, or may be configured by the second user and synchronized to the first user device via a server corresponding to the target application, or may be configured by the first user. In some embodiments, the server corresponding to the target application, or the first user or the second user, establishes a mapping relationship between keywords and predetermined animation effects in advance. After receiving the text message sent by the second user, the first user equipment determines, according to the mapping relationship, whether the text message includes a keyword mapped to at least one of the one or more predetermined animation effects, and if so, determines that the text message matches the at least one predetermined animation effect.
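The two tie-breaking strategies described above, highest matching degree or highest presentation priority, can be sketched as follows (an illustrative sketch; the function names and data shapes are hypothetical):

```python
# Hypothetical sketch: when a message matches several predetermined
# effects, pick one target effect either by matching degree or by
# presentation priority, as the paragraph above describes.

def pick_by_matching_degree(matches):
    """matches: list of (effect_name, matching_degree); highest degree wins."""
    return max(matches, key=lambda m: m[1])[0]

def pick_by_priority(matched_effects, priority):
    """priority: dict effect_name -> presentation priority; highest wins."""
    return max(matched_effects, key=lambda e: priority[e])
```

For example, with matches [("text cracking effect", 0.6), ("text scrolling effect", 0.9)], the first strategy selects the scrolling effect; with priorities {"text cracking effect": 2, "text scrolling effect": 1}, the second selects the cracking effect.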
In step S13, if so, the first user equipment presents the text message in the target application and executes the at least one predetermined animation effect for the text message. In some embodiments, if the target application is a social application, the text message may be presented in a conversation window corresponding to a private chat session or a group session of the target application, and the at least one predetermined animation effect may be performed for the text message. In some embodiments, if the target application is a video application, the text message may be presented in a comment area corresponding to the video, and the at least one predetermined animation effect may be performed for the text message. In some embodiments, if the target application is a live-streaming application, the text message may be presented in a bullet-screen area corresponding to a live room, and the at least one predetermined animation effect may be performed for the text message. As an example, as shown in FIG. 3, the target application is a social application and 5 conversation messages are presented on a private chat interface; since none of the 5 conversation messages matches any of the one or more predetermined animation effects, the 5 messages are presented normally without performing any predetermined animation effect on their text. As another example, as shown in FIG. 4, the target application is a social application and 5 conversation messages are presented on a private chat interface: the 1st conversation message does not match any predetermined animation effect and is presented normally; the 2nd matches the text partial-stroke enlargement effect, so its text exhibits that effect; the 3rd matches the text cracking effect, so its text exhibits the text cracking effect; the 4th matches the text dithering effect, so its text exhibits the text dithering effect; and the 5th matches the text scrolling effect, so its text exhibits the text scrolling effect. By triggering corresponding animation effects for specific text sent by users in the target application, the interactivity and appeal of text messages in the target application can be enhanced, and the user experience of the target application can be improved.
In some embodiments, the step S12 includes: the first user equipment determines whether first text content matching at least one of one or more predetermined animation effects is included in the text message; and if so, determining that the text message matches the at least one predetermined animation effect. In some embodiments, it may be determined whether the text message matches at least one of the one or more predetermined animation effects based on whether the first text content matching the at least one predetermined animation effect is included in the text message. In some embodiments, the specific determination of whether the text message includes the first text content may be a determination of whether the text message includes first text content describing at least one of the one or more predetermined animation effects, or a determination of whether the text message includes first text content similar to second text content describing at least one of the one or more predetermined animation effects.
In some embodiments, said determining whether the text message includes first text content matching at least one of the one or more predetermined animation effects comprises: determining whether first text content describing at least one of the one or more predetermined animation effects is included in the text message. In some embodiments, the first text content is descriptive text for describing a predetermined animation effect, e.g., the first text content corresponding to the text cracking effect is "cracked" and the first text content corresponding to the text scrolling effect is "scrolling". In some embodiments, the first text content corresponding to each predetermined animation effect may be set by the target application by default, or may be set by the second user and synchronized to the first user device via the server corresponding to the target application, or may be set by the first user, or may be automatically determined by performing image recognition on the predetermined animation effect. For example, if the one or more predetermined animation effects include the text cracking effect and the first text content corresponding to it is "cracked", then if the first text content "cracked" is included in the text message, it is determined that the text message includes first text content matching the text cracking effect. For another example, if the one or more predetermined animation effects include the text scrolling effect and the first text content corresponding to it is "scrolling", then if the text message includes the first text content "scrolling", it is determined that the text message includes first text content matching the text scrolling effect.
In some embodiments, said determining whether the text message includes first text content matching at least one of the one or more predetermined animation effects comprises: determining whether first text content that is similar to second text content describing at least one of the one or more predetermined animation effects is included in the text message. In some embodiments, the second text content is descriptive text for describing the predetermined animation effect; the second text content is the same as or similar to the first text content described above, and is not repeated here. In some embodiments, it is determined whether first text content matching at least one of the one or more predetermined animation effects is included in the text message by determining whether the text message includes first text content that satisfies a predetermined similarity with the second text content. For example, if the one or more predetermined animation effects include the text cracking effect and the second text content corresponding to it is "cracked", then if the text message includes first text content (e.g., "headache", "speechless", or "crash") satisfying a predetermined similarity with the second text content "cracked", it is determined that the text message includes first text content matching the text cracking effect. For another example, if the one or more predetermined animation effects include the text scrolling effect and the second text content corresponding to it is "scrolling", then if the text message includes first text content that satisfies a predetermined similarity with the second text content "scrolling" (e.g., "turn over", "somersault", or "leave"), it is determined that the text message includes first text content matching the text scrolling effect.
In some embodiments, word segmentation may be performed on the text message, splitting or decomposing the text message into a plurality of words; a similarity between each word and the second text content is then determined, and if the similarity between a word and the second text content is greater than or equal to a predetermined similarity threshold, the word is determined to be first text content similar to the second text content, and it may be determined that first text content similar to the second text content is included in the text message.
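The segmentation-plus-similarity check above can be sketched as follows (an illustrative sketch only: whitespace splitting stands in for a real word segmenter, difflib's ratio stands in for whatever similarity measure an implementation uses, and the 0.8 threshold is assumed):

```python
# Hypothetical sketch: split the message into words, then keep the words
# whose similarity to the second text content meets an assumed threshold.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8  # assumed value; the patent leaves it unspecified

def similar_words(text_message, second_text_content):
    # A real system would use a proper word segmenter (e.g., for Chinese text).
    words = text_message.split()
    return [w for w in words
            if SequenceMatcher(None, w, second_text_content).ratio() >= SIMILARITY_THRESHOLD]
```

If any word clears the threshold, the message is treated as containing first text content similar to the second text content, and the corresponding effect matches.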
In some embodiments, said performing said at least one predetermined animation effect on said text message comprises: performing the at least one predetermined animation effect only with respect to the first text content. In some embodiments, after it is determined that the text message includes the first text content matching at least one of the one or more predetermined animation effects, the at least one predetermined animation effect is not performed for the entire text message; instead, it is performed only for the portion of the text in the text message that is the first text content, and not for the remaining text in the text message other than the first text content.
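Restricting the effect to the matching span can be sketched by splitting the message into spans, with only the first-text-content spans carrying the effect (an illustrative sketch; the span representation is hypothetical):

```python
# Hypothetical sketch: apply the matched effect only to occurrences of the
# first text content, leaving the rest of the message plain.

def render_spans(text_message, first_text_content, effect):
    spans = []
    rest = text_message
    while first_text_content in rest:
        before, rest = rest.split(first_text_content, 1)
        if before:
            spans.append({"text": before, "effect": None})  # plain text
        spans.append({"text": first_text_content, "effect": effect})  # animated span
    if rest:
        spans.append({"text": rest, "effect": None})
    return spans
```

For the message "I feel cracked today" with first text content "cracked", only the middle span carries the cracking effect; the surrounding text is rendered normally.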
In some embodiments, the step S12 includes: the first user equipment carries out semantic recognition on the text message to obtain semantic information corresponding to the text message; determining whether the semantic information matches at least one of one or more predetermined animation effects; and if so, determining that the text message is matched with the at least one preset animation effect. In some embodiments, semantic information corresponding to a text message is obtained by performing semantic recognition on the text message, and then it is determined whether the semantic information matches with at least one predetermined animation effect of one or more predetermined animation effects, where the specific determination may be determining a similarity between the semantic information and third text content for describing the at least one predetermined animation effect, and if the similarity is greater than or equal to a predetermined similarity threshold, it may be determined that the semantic information matches with the at least one predetermined animation effect. In some embodiments, the third text content is a description text for describing the predetermined animation effect, and the third text content is the same as or similar to the first text content described above, and is not repeated herein.
In some embodiments, the step S12 includes: the first user equipment determines user emotion information corresponding to the text message; determining whether the user emotion information matches at least one of one or more predetermined animation effects; and if so, determining that the text message matches the at least one predetermined animation effect. In some embodiments, the user emotion information is used to characterize the emotion of the second user expressed by the text message. In some embodiments, the user emotion information corresponding to the text message may be obtained by performing emotion recognition on the text message, or one or more emotion words in the text message may be determined first and the user emotion information corresponding to the text message then determined according to the one or more emotion words. In some embodiments, after the user emotion information corresponding to the text message is determined, it is determined whether the user emotion information matches at least one of the one or more predetermined animation effects; specifically, a matching degree between the user emotion information and third text content describing the at least one predetermined animation effect may be determined, and if the matching degree is greater than or equal to a predetermined matching-degree threshold, it may be determined that the user emotion information matches the at least one predetermined animation effect. For example, if the one or more predetermined animation effects include the text cracking effect and the third text content corresponding to the text cracking effect is "cracked", it may be determined that the user emotion information matches the text cracking effect if the user emotion information corresponding to the text message is "angry".
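The emotion-driven path can be sketched as follows (an illustrative sketch only: the emotion recognizer is stubbed out with a keyword lookup, and both tables are assumed rather than taken from the patent):

```python
# Hypothetical sketch: recognize a coarse emotion from the message, then
# map that emotion to a predetermined animation effect.

# Assumed tables; a real system would use a trained emotion classifier.
EMOTION_WORDS = {"angry": "anger", "furious": "anger", "happy": "joy"}
EMOTION_TO_EFFECT = {"anger": "text cracking effect", "joy": "text scrolling effect"}

def recognize_emotion(text_message):
    """Stub emotion recognition: first emotion word found wins."""
    lowered = text_message.lower()
    for word, emotion in EMOTION_WORDS.items():
        if word in lowered:
            return emotion
    return None

def match_by_emotion(text_message):
    """Return the effect matched by the recognized emotion, or None."""
    return EMOTION_TO_EFFECT.get(recognize_emotion(text_message))
```

A message such as "I am so angry" would thus match the text cracking effect, while an emotionally neutral message matches nothing and is presented plainly.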
In some embodiments, the determining of the user emotion information corresponding to the text message includes: and performing emotion recognition on the text message to obtain user emotion information corresponding to the text message. In some embodiments, the emotion information of the second user expressed by the text message may be determined by performing emotion recognition on the text message.
In some embodiments, the performing emotion recognition on the text message to obtain user emotion information corresponding to the text message includes: combining at least one historical message sent by the second user before the text message, and performing emotion recognition on the text message to obtain user emotion information corresponding to the text message. In some embodiments, emotion recognition may be performed on the text message in conjunction with a predetermined number (e.g., the previous 5) of historical messages sent by the second user prior to the text message, or in conjunction with historical messages sent by the second user within a predetermined time range (e.g., the previous 5 minutes) prior to the text message, to determine the user emotion information of the second user expressed by the text message. In some embodiments, context information may be determined from at least one historical message sent by the second user prior to the text message, and emotion recognition may then be performed on the text message based on the context information to determine the emotion information of the second user expressed by the text message.
In some embodiments, the determining of the user emotion information corresponding to the text message includes: determining one or more emotion words in the text message; and determining the user emotion information corresponding to the text message according to the one or more emotion words. In some embodiments, word segmentation may be performed on the text message, splitting or decomposing it into a plurality of words; one or more emotion words expressing user emotion are determined from the plurality of words, and the user emotion information of the second user expressed by the text message may then be obtained by performing emotion analysis on the one or more emotion words.
In some embodiments, the target application is a social application; wherein the step S12 includes: the first user equipment determines whether the first user and the second user satisfy a predetermined user relationship; if so, determining whether the text message matches at least one of one or more predetermined animation effects; otherwise, presenting the text message in the target application. In some embodiments, if the target application is a social application, it may be determined whether a predetermined user relationship (e.g., a non-superior-subordinate relationship, a close-friend relationship, etc.) is satisfied between the first user and the second user according to the personal user information of the first user, the personal user information of the second user, the historical conversation messaging records of the first user, the historical conversation messaging records of the second user, and the like. If so, it may be determined whether the text message matches at least one of the one or more predetermined animation effects; otherwise, the text message may be presented directly in the target application without performing the one or more predetermined animation effects on it. For example, if the first user and the second user are in a superior-subordinate relationship or are strangers, there is no need to determine whether the text message matches at least one of the one or more predetermined animation effects; the text message is presented directly in the target application without performing the one or more predetermined animation effects on it.
In some embodiments, said determining whether said first user and said second user satisfy a predetermined user relationship comprises: determining whether the first user and the second user satisfy a predetermined user relationship according to friend intimacy information between the first user and the second user. In some embodiments, the first user and the second user are in a friend relationship in a social application, and the friend intimacy between them may be determined according to the historical conversation messaging records of the first user and/or the second user in a private chat session, and/or the historical audio/video call records of the first user and/or the second user in the private chat session, and/or the historical approval records and historical comment records of the first user in a posting space (e.g., a circle of friends) of the second user, and/or the historical approval records and historical comment records of the second user in the posting space of the first user.
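One plausible way to combine these interaction records into friend intimacy information is a weighted score with a threshold. The weights, the threshold, and the set of record fields below are illustrative assumptions only.

```python
# Illustrative sketch: combine historical interaction counts into a
# friend-intimacy score and gate the predetermined relationship on it.
# Weights and threshold are hypothetical.
def affinity_score(chat_messages: int, av_calls: int,
                   likes: int, comments: int) -> float:
    """Weighted sum of historical interactions between the two users."""
    return 1.0 * chat_messages + 3.0 * av_calls + 0.5 * likes + 1.5 * comments

def satisfies_predetermined_relationship(chat_messages: int, av_calls: int,
                                         likes: int, comments: int,
                                         threshold: float = 50.0) -> bool:
    """Treat the users as close friends once their intimacy passes the threshold."""
    return affinity_score(chat_messages, av_calls, likes, comments) >= threshold
```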
In some embodiments, said determining whether said first user and said second user satisfy a predetermined user relationship comprises: determining whether the first user and the second user satisfy a predetermined user relationship according to the position information and/or department information corresponding to the first user and the second user respectively. In some embodiments, whether the first user and the second user are in a non-superior-subordinate relationship may be determined according to the position information and/or department information in the personal user information of the first user and that of the second user; if the first user and the second user are in a non-superior-subordinate relationship, it may be determined that the predetermined user relationship is satisfied between them.
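A minimal check of the non-superior-subordinate condition from department and position information might look like the following. The rank table and the rule that cross-department users are never superior/subordinate are assumptions for illustration, since the disclosure does not fix an organizational model.

```python
# Hypothetical mapping of position titles to ranks within a department.
RANK = {"staff": 1, "manager": 2, "director": 3}

def is_non_superior_subordinate(dept_a: str, pos_a: str,
                                dept_b: str, pos_b: str) -> bool:
    """Users in different departments, or at the same rank within one
    department, are treated as not in a superior-subordinate relationship."""
    if dept_a != dept_b:
        return True
    return RANK.get(pos_a, 0) == RANK.get(pos_b, 0)
```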
Fig. 2 shows a block diagram of a first user equipment for presenting text messages according to an embodiment of the present application, the equipment comprising a first module 11, a second module 12 and a third module 13. The first module 11 is configured to obtain a text message sent by a second user in a target application, where a first user corresponding to the first user equipment and the second user are different users in the target application; the second module 12 is configured to determine whether the text message matches at least one of one or more predetermined animation effects; and the third module 13 is configured to, if so, present the text message in the target application and execute the at least one predetermined animation effect for the text message.
The first module 11 is configured to obtain a text message sent by a second user in a target application, where a first user corresponding to the first user equipment and the second user are different users in the target application. In some embodiments, the first user equipment is the user equipment used by the first user, and the first user and the second user are two different users in the target application. In some embodiments, the target application includes, but is not limited to, a social application, a video application, and a live-streaming application. For example, if the target application is a social application, the text message may be a session message sent by the second user in a private chat session or a group session; if the target application is a video application, the text message may be a comment message sent by the second user while watching a video; and if the target application is a live-streaming application, the text message may be a bullet-screen message sent by the second user in a live broadcast room. In some embodiments, the text message may be sent directly to the first user equipment by the second user equipment used by the second user, or may be sent to the first user equipment by the second user equipment via the network device corresponding to the target application.
The second module 12 is configured to determine whether the text message matches at least one of one or more predetermined animation effects. In some embodiments, the predetermined animation effect includes, but is not limited to, any animation effect associated with text, such as a text cracking effect, a text portion stroke magnification effect, a text dithering effect, a text scrolling effect, and the like. In some embodiments, the determination may specifically be whether the text message includes first text content matching at least one of the one or more predetermined animation effects, or whether the semantic information and/or the user emotion information corresponding to the text message matches at least one of the one or more predetermined animation effects. In some embodiments, if the text message does not match any of the one or more predetermined animation effects, the text message is presented directly in the target application without performing any of the one or more predetermined animation effects on it.
In some embodiments, if the text message matches a plurality of the one or more predetermined animation effects, a target predetermined animation effect with the highest matching degree may be determined among the plurality of predetermined animation effects according to the matching degree (or similarity) between each predetermined animation effect and the text message, the text message may then be presented in the target application, and the target predetermined animation effect may be executed for the text message. Alternatively, a target predetermined animation effect with the highest presentation priority may be determined among the plurality of predetermined animation effects according to the presentation priority corresponding to each predetermined animation effect, the text message may be presented in the target application, and that target predetermined animation effect may be executed for the text message. In some embodiments, the one or more predetermined animation effects may be configured by the target application by default, configured by the second user and synchronized to the first user equipment via a server corresponding to the target application, or configured by the first user. In some embodiments, the server corresponding to the target application, or the first user or the second user, establishes in advance a mapping relationship between keywords and predetermined animation effects; after receiving the text message sent by the second user, the first user equipment determines, according to the mapping relationship, whether the text message includes a keyword mapped to at least one of the one or more predetermined animation effects, and if so, may determine that the text message matches the at least one predetermined animation effect.
The third module 13 is configured to, if so, present the text message in the target application and execute the at least one predetermined animation effect for the text message. In some embodiments, if the target application is a social application, the text message may be presented in a conversation window corresponding to a private chat session or a group session of the target application, and the at least one predetermined animation effect may be performed for the text message. In some embodiments, if the target application is a video application, the text message may be presented in a comment area corresponding to the video, and the at least one predetermined animation effect may be performed for the text message. In some embodiments, if the target application is a live-streaming application, the text message may be presented in a bullet-screen area corresponding to a live broadcast room, and the at least one predetermined animation effect may be performed on the text message. As an example, as shown in Fig. 3, the target application is a social application and 5 conversation messages are presented on a private chat conversation interface; since none of the 5 conversation messages matches any of the one or more predetermined animation effects, the 5 conversation messages are presented normally, without performing any predetermined animation effect on their text. As another example, as shown in Fig. 4, the target application is a social application and 5 conversation messages are presented on a private chat conversation interface: the 1st conversation message does not match any of the one or more predetermined animation effects and is presented normally; the 2nd conversation message matches the text portion stroke magnification effect, so its text exhibits that effect; the 3rd conversation message matches the text cracking effect, so its text exhibits the text cracking effect; the 4th conversation message matches the text dithering effect, so its text exhibits the text dithering effect; and the 5th conversation message matches the text scrolling effect, so its text exhibits the text scrolling effect. According to the method and the device, a corresponding animation effect is triggered by specific text sent by a user in the target application, so that the interactivity and interest of text messages in the target application can be enhanced, and the user experience of the target application can be improved.
In some embodiments, the secondary module 12 is configured to: determining whether first text content matching at least one of one or more predetermined animation effects is included in the text message; and if so, determining that the text message is matched with the at least one preset animation effect. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, said determining whether the text message includes first text content matching at least one of the one or more predetermined animation effects comprises: it is determined whether first textual content describing at least one of one or more predetermined animation effects is included in the textual message. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, said determining whether the text message includes first text content matching at least one of the one or more predetermined animation effects comprises: it is determined whether first textual content that is similar to second textual content describing at least one of the one or more predetermined animation effects is included in the textual message. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, said performing said at least one predetermined animation effect on said text message comprises: performing the at least one predetermined animation effect only with respect to the first textual content. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the secondary module 12 is configured to: performing semantic recognition on the text message to obtain semantic information corresponding to the text message; determining whether the semantic information matches at least one of one or more predetermined animation effects; and if so, determining that the text message is matched with the at least one preset animation effect. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the secondary module 12 is configured to: determining user emotion information corresponding to the text message; determining whether the user emotional information matches at least one of one or more predetermined animation effects; and if so, determining that the text message is matched with the at least one preset animation effect. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the determining of the user emotion information corresponding to the text message includes: and performing emotion recognition on the text message to obtain user emotion information corresponding to the text message. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the performing emotion recognition on the text message to obtain user emotion information corresponding to the text message includes: and combining at least one historical message sent by the second user before the text message, and performing emotion recognition on the text message to obtain user emotion information corresponding to the text message. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the determining of the user emotion information corresponding to the text message includes: determining one or more emotional words in the text message; and determining the user emotion information corresponding to the text message according to the one or more emotion words. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the target application is a social application; wherein the second module 12 is configured to: determining whether the first user and the second user satisfy a predetermined user relationship; if so, determining whether the text message is matched with at least one preset animation effect in one or more preset animation effects; otherwise, the text message is presented in the target application. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, said determining whether said first user and said second user satisfy a predetermined user relationship comprises: and determining whether the first user and the second user meet a preset user relationship or not according to friend intimacy information between the first user and the second user. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, said determining whether said first user and said second user satisfy a predetermined user relationship comprises: and determining whether the first user and the second user meet a preset user relationship or not according to the position information and/or department information corresponding to the first user and the second user respectively. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer readable storage medium storing computer code that, when executed, performs the method as described in any of the preceding claims.
The present application also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described herein;
in some embodiments, as shown in FIG. 5, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (16)

1. A method for presenting a text message, applied to a first user equipment, wherein the method comprises:
acquiring a text message sent by a second user in a target application, wherein a first user and the second user corresponding to the first user equipment are different users in the target application;
determining whether the text message matches at least one of one or more predetermined animation effects;
and if so, presenting the text message in the target application, and executing the at least one preset animation effect aiming at the text message.
2. The method of claim 1, wherein said determining whether the text message matches at least one of one or more predetermined animation effects comprises:
determining whether first text content matching at least one of one or more predetermined animation effects is included in the text message;
and if so, determining that the text message is matched with the at least one preset animation effect.
3. The method of claim 2, wherein said determining whether the text message includes first text content that matches at least one of the one or more predetermined animation effects comprises:
it is determined whether first textual content describing at least one of one or more predetermined animation effects is included in the textual message.
4. The method of claim 2, wherein said determining whether the text message includes first text content that matches at least one of the one or more predetermined animation effects comprises:
it is determined whether first textual content that is similar to second textual content describing at least one of the one or more predetermined animation effects is included in the textual message.
5. The method of claim 2, wherein said performing said at least one predetermined animation effect on said text message comprises:
performing the at least one predetermined animation effect only with respect to the first textual content.
6. The method of claim 1, wherein said determining whether the text message matches at least one of one or more predetermined animation effects comprises:
performing semantic recognition on the text message to obtain semantic information corresponding to the text message;
determining whether the semantic information matches at least one of one or more predetermined animation effects;
and if so, determining that the text message is matched with the at least one preset animation effect.
7. The method of claim 1, wherein said determining whether the text message matches at least one of one or more predetermined animation effects comprises:
determining user emotion information corresponding to the text message;
determining whether the user emotional information matches at least one of one or more predetermined animation effects;
and if so, determining that the text message is matched with the at least one preset animation effect.
8. The method of claim 7, wherein the determining of the user emotion information corresponding to the text message comprises:
and performing emotion recognition on the text message to obtain user emotion information corresponding to the text message.
9. The method of claim 8, wherein the performing emotion recognition on the text message to obtain emotion information of the user corresponding to the text message comprises:
and combining at least one historical message sent by the second user before the text message, and performing emotion recognition on the text message to obtain user emotion information corresponding to the text message.
10. The method of claim 7, wherein the determining of the user emotion information corresponding to the text message comprises:
determining one or more emotional words in the text message;
and determining the user emotion information corresponding to the text message according to the one or more emotion words.
11. The method of claim 1, wherein the target application is a social application;
wherein said determining whether the text message matches at least one of one or more predetermined animation effects comprises:
determining whether the first user and the second user satisfy a predetermined user relationship;
if so, determining whether the text message is matched with at least one preset animation effect in one or more preset animation effects; otherwise, the text message is presented in the target application.
12. The method of claim 11, wherein the determining whether the first user and the second user satisfy a predetermined user relationship comprises:
and determining whether the first user and the second user meet a preset user relationship according to friend intimacy information between the first user and the second user.
13. The method of claim 11, wherein the determining whether the first user and the second user satisfy a predetermined user relationship comprises:
and determining whether the first user and the second user meet a preset user relationship or not according to the position information and/or department information corresponding to the first user and the second user respectively.
14. A computer device for presenting a text message, comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the steps of the method according to any of claims 1 to 13.
15. A computer-readable storage medium, on which a computer program/instructions are stored, which, when being executed by a processor, carry out the steps of the method according to any one of claims 1 to 13.
16. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method according to any one of claims 1 to 13 when executed by a processor.
CN202111641930.1A 2021-12-29 2021-12-29 Method, apparatus, medium, and program product for presenting text messages Pending CN114296560A (en)


Publications (1)

Publication Number Publication Date
CN114296560A true CN114296560A (en) 2022-04-08

Family

ID=80970739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111641930.1A Pending CN114296560A (en) 2021-12-29 2021-12-29 Method, apparatus, medium, and program product for presenting text messages

Country Status (1)

Country Link
CN (1) CN114296560A (en)

Similar Documents

Publication Publication Date Title
CN110417641B (en) Method and equipment for sending session message
CN110336735B (en) Method and equipment for sending reminding message
CN110266505B (en) Method and equipment for managing session group
CN110827061B (en) Method and equipment for providing presentation information in novel reading process
CN110765395B (en) Method and equipment for providing novel information
CN110336733B (en) Method and equipment for presenting emoticon
CN112822431B (en) Method and equipment for private audio and video call
CN110781397A (en) Method and equipment for providing novel information
CN110768894B (en) Method and equipment for deleting session message
CN110535755B (en) Method and equipment for deleting session message
CN112818719B (en) Method and equipment for identifying two-dimensional code
CN111817945B (en) Method and equipment for replying communication information in instant communication application
CN113704638A (en) Method and equipment for identifying presentation information in social group chat
CN112787831B (en) Method and device for splitting conference group
CN110780955A (en) Method and equipment for processing emoticon message
CN111327518A (en) Method and equipment for splicing messages
CN113329237B (en) Method and equipment for presenting event label information
CN115719053A (en) Method and equipment for presenting reader labeling information
CN114296560A (en) Method, apparatus, medium, and program product for presenting text messages
CN114429361A (en) Method, device, medium and program product for extracting resource
CN111680249B (en) Method and device for pushing presentation information
CN113157162A (en) Method, apparatus, medium and program product for revoking session messages
CN112788004A (en) Method and equipment for executing instructions through virtual conference robot
CN112583696A (en) Method and equipment for processing group session message
CN114301861B (en) Method, equipment and medium for presenting mail

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination