WO2021013126A1 - Method and device for sending conversation messages - Google Patents

Method and device for sending conversation messages

Info

Publication number: WO2021013126A1
Authority: WO, WIPO (PCT)
Prior art keywords: message, user, conversation, voice, target
Application number: PCT/CN2020/103032
Other languages: English (en), Chinese (zh)
Inventor: 罗剑嵘
Original Assignee: 上海盛付通电子支付服务有限公司
Application filed by 上海盛付通电子支付服务有限公司
Publication of WO2021013126A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/07: Messaging characterised by the inclusion of specific contents
    • H04L 51/10: Multimedia information
    • H04L 51/52: Messaging for supporting social networking services

Definitions

  • This application relates to the field of communications, and in particular to a technology for sending session messages.
  • Social applications in the prior art only support sending the voice message recorded by the user on its own. For example, the user presses the record button on a conversation page of the social application to start recording, and when the user releases it, the recorded voice message is sent directly.
  • One purpose of this application is to provide a method and device for sending session messages.
  • a method for sending a session message including:
  • recording a voice message in response to a voice input trigger operation of a first user on a conversation page;
  • determining a target emoticon message corresponding to the voice message in response to the first user's trigger operation for sending the voice message;
  • generating an atomic conversation message, and sending the atomic conversation message via a social server to a second user communicating with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • a method for presenting session messages including:
  • receiving an atomic conversation message sent by a first user via a social server, where the atomic conversation message includes a voice message of the first user and a target emoticon message corresponding to the voice message;
  • presenting the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page.
  • a user equipment for sending a session message including:
  • a one-one module, configured to start recording a voice message in response to a voice input trigger operation of the first user on the conversation page;
  • a one-two module, configured to determine the target emoticon message corresponding to the voice message in response to the first user's trigger operation for sending the voice message;
  • a one-three module, configured to generate an atomic conversation message and send the atomic conversation message via a social server to a second user who communicates with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • a user equipment for presenting conversation messages including:
  • the two-one module is configured to receive an atomic conversation message sent by a first user via a social server, where the atomic conversation message includes a voice message of the first user and a target emoticon message corresponding to the voice message;
  • the two-two module is configured to present the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page.
  • a device for sending a session message wherein the device includes:
  • record a voice message in response to a voice input trigger operation of a first user on a conversation page;
  • determine a target emoticon message corresponding to the voice message in response to the first user's trigger operation for sending the voice message;
  • generate an atomic conversation message, and send the atomic conversation message via a social server to a second user communicating with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • a device for presenting session messages wherein the device includes:
  • receive an atomic conversation message sent by a first user via a social server, where the atomic conversation message includes a voice message of the first user and a target emoticon message corresponding to the voice message;
  • present the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page.
  • a computer-readable medium storing instructions, which when executed cause the system to perform the following operations:
  • record a voice message in response to a voice input trigger operation of a first user on a conversation page;
  • determine a target emoticon message corresponding to the voice message in response to the first user's trigger operation for sending the voice message;
  • generate an atomic conversation message, and send the atomic conversation message via a social server to a second user communicating with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • a computer-readable medium storing instructions, which when executed cause the system to perform the following operations:
  • receive an atomic conversation message sent by a first user via a social server, where the atomic conversation message includes a voice message of the first user and a target emoticon message corresponding to the voice message;
  • present the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page.
  • The present application performs voice analysis on the voice message entered by the user to obtain the corresponding user emotion, automatically generates the expression message corresponding to the voice message according to that emotion, sends the voice message and the expression message to the social object as one atomic conversation message, and presents them in the same message box on the social object's conversation page. This enables users to express their emotions more accurately and vividly, improves the efficiency of sending emoticons, and enhances the user experience; it also avoids the problem that, in a group conversation, sending the voice message and the emoticon message as two messages may be interrupted by other users' conversation messages and affect the smoothness of the user's expression.
  • Fig. 1 shows a flowchart of a method for sending a session message according to some embodiments of the present application;
  • Fig. 2 shows a flowchart of a method for presenting session messages according to some embodiments of the present application;
  • Fig. 3 shows a flowchart of a system method for presenting session messages according to some embodiments of the present application;
  • Fig. 4 shows a structural diagram of a device for sending session messages according to some embodiments of the present application;
  • Fig. 5 shows a structural diagram of a device for presenting session messages according to some embodiments of the present application;
  • Fig. 6 shows an exemplary system that can be used to implement the various embodiments described in this application;
  • Fig. 7 shows a schematic diagram of presenting session messages according to some embodiments of the present application;
  • Fig. 8 shows a schematic diagram of presenting session messages according to some embodiments of the present application.
  • the terminal, the device of the service network, and the trusted party all include one or more processors (CPU), input/output interfaces, network interfaces, and memory.
  • The memory may include non-permanent memory in computer-readable media, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • the equipment referred to in this application includes but is not limited to user equipment, network equipment, or equipment formed by the integration of user equipment and network equipment through a network.
  • In some embodiments, the user equipment includes, but is not limited to, any mobile electronic product that can perform human-computer interaction with the user (for example, through a touchpad), such as a smartphone or a tablet computer, and the mobile electronic product can adopt any operating system, such as the Android operating system or the iOS operating system.
  • the network device includes an electronic device that can automatically perform numerical calculation and information processing in accordance with pre-set or stored instructions.
  • Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), and programmable logic devices.
  • In some embodiments, the network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and a wireless ad hoc network (Ad Hoc network).
  • In some embodiments, the device may also be a program running on the user equipment, on the network equipment, or on a device formed by integrating user equipment and network equipment, or network equipment and a touch terminal, through a network.
  • In the prior art, after a user sends a voice message, an emoticon message is then input and sent to the social object as a new conversation message. This operation is cumbersome, and possible network delays and other factors may cause the social object to fail to receive the emoticon message in time, affecting the expression of the user emotion corresponding to the voice message. In a group conversation, the voice message and the emoticon message may also be interrupted by other users' conversation messages, which affects the smoothness of the user's expression. Moreover, the voice message and the emoticon message are presented as two separate conversation messages on the social object's conversation page, and it is not easy for the social object to associate the two, which affects the social object's understanding of the user emotion corresponding to the voice message.
  • In contrast, this application performs voice analysis on the voice message entered by the user to obtain the corresponding user emotion, automatically generates the expression message corresponding to the voice message according to that emotion, sends the voice message and the expression message to the social object as one atomic conversation message, and presents them in the same message box on the social object's conversation page. This allows users to express their emotions more accurately and vividly, reduces the operations of inputting and sending an emoticon after sending a voice message, improves the efficiency of sending emoticons, reduces the cumbersomeness of the operation, enhances the user experience, and avoids sending the voice message and the emoticon message as two messages in a group conversation. In addition, because the voice message and the emoticon message are presented as one atomic conversation message on the social object's conversation page, the social object can better combine the voice message with the emoticon message and better understand the user's emotion corresponding to the voice message.
  • Fig. 1 shows a flowchart of a method for sending a session message according to an embodiment of the present application.
  • the method includes step S11, step S12, and step S13.
  • In step S11, the user equipment starts recording a voice message in response to the first user's voice input trigger operation on the conversation page; in step S12, the user equipment determines the target emoticon message corresponding to the voice message in response to the first user's trigger operation for sending the voice message; in step S13, the user equipment generates an atomic conversation message and sends the atomic conversation message via the social server to the second user communicating with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • step S11 the user equipment responds to a voice input trigger operation of the first user on the conversation page, and starts to record a voice message.
  • In some embodiments, the voice input trigger operation includes, but is not limited to, clicking the voice input button on the conversation page, pressing and holding the voice input area of the conversation page without releasing the finger, a certain predetermined gesture operation, and so on. For example, the first user presses and holds the voice input area of the conversation page, and recording of the voice message starts.
  • step S12 the user equipment determines a target emoticon message corresponding to the voice message in response to the first user's triggering operation of sending the voice message.
  • In some embodiments, the trigger operation for sending the voice message includes, but is not limited to, clicking the voice send button on the conversation page, clicking an emoticon on the conversation page, releasing the finger from the screen after pressing the voice input area of the conversation page to record the voice, a predetermined gesture operation, and so on.
  • In some embodiments, the target emoticon message includes, but is not limited to, the id corresponding to the emoticon, the URL link corresponding to the emoticon, the character string generated by Base64-encoding the emoticon image, the InputStream byte input stream corresponding to the emoticon image, a specific character string corresponding to the emoticon (for example, the specific character string corresponding to an "arrogance" emoticon is "[arrogance]"), and so on; a sketch of these alternatives follows this item.
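As a minimal illustration of the alternative representations just listed, a target emoticon message can be modeled as a tagged payload. The class and field names below are hypothetical, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class EmoticonMessage:
    # kind is one of "id", "url", "base64", "shortcode", mirroring the
    # textual representation options listed above (illustrative names only).
    kind: str
    payload: str

# One example per textual representation mentioned in the text:
by_id = EmoticonMessage(kind="id", payload="e1")
by_url = EmoticonMessage(kind="url", payload="https://example.com/emoticons/e1.png")
by_shortcode = EmoticonMessage(kind="shortcode", payload="[arrogance]")
```

A binary variant (the raw image bytes or an input stream) would carry bytes rather than a string; a single string payload keeps the sketch minimal.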
  • For example, when the user clicks the voice send button on the conversation page, voice analysis is performed on the entered voice message "voice v1" to obtain the user emotion corresponding to "voice v1", the expression "emoji e1" corresponding to that emotion is obtained by matching and used as the target expression for the voice message "voice v1", and the corresponding target emoticon message "e1" is generated from the target expression "emoji e1".
  • In step S13, the user equipment generates an atomic conversation message and sends the atomic conversation message via the social server to a second user communicating with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • the second user may be a social user who has a one-to-one conversation with the first user, or may be multiple social users in a group conversation.
  • In some embodiments, the first user encapsulates the voice message and the emoticon message into one atomic conversation message and sends it to the second user; the voice message and the emoticon message are either both sent successfully or both fail to send, and they are presented in the same message box as one atomic conversation message on the second user's conversation page. This avoids the problem that, in a group conversation, sending the voice message and the emoticon message as two messages may be interrupted by other users' conversation messages and affect the smoothness of the user's expression.
  • For example, the voice message is "voice v1" and the target emoticon message is "e1". An atomic conversation message "voice: 'voice v1', emoticon: 'e1'" is generated and sent to the social server, and the social server sends the atomic conversation message to the second user equipment used by the second user who communicates with the first user on the conversation page, as sketched below.
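A minimal sketch of this send path, assuming a JSON wire format and a hypothetical social-server endpoint (neither is specified by the patent):

```python
import json
import urllib.request

def send_atomic_conversation_message(voice_id: str, emoticon_id: str,
                                     server_url: str) -> None:
    """Package the voice message and the target emoticon message into one
    atomic conversation message and POST it to the social server, which
    relays it to the second user. The wire format is an assumption."""
    atomic_message = {"voice": voice_id, "emoticon": emoticon_id}
    body = json.dumps(atomic_message).encode("utf-8")
    request = urllib.request.Request(
        server_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # server acknowledges and forwards the message

# Usage (hypothetical URL):
# send_atomic_conversation_message("voice v1", "e1", "https://social.example/messages")
```

Because the two parts travel as one message, they succeed or fail together, which is what gives the message its atomicity.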
  • the determining the target emoticon message corresponding to the voice message includes step S121 (not shown), step S122 (not shown), and step S123 (not shown).
  • In step S121, the user equipment performs voice analysis on the voice message to determine the emotional feature corresponding to the voice message; in step S122, the user equipment matches and obtains the target expression corresponding to the emotional feature according to the emotional feature; in step S123, the user equipment generates the target expression message corresponding to the voice message according to the target expression.
  • In some embodiments, the emotional features include, but are not limited to, emotions such as "laughing", "crying", "excitement", or a combination of multiple different emotions (for example, "crying before laughing", etc.).
  • In some embodiments, the target expression corresponding to the emotional feature is obtained by matching from the user equipment's local cache, file, or database, or from the corresponding social server, and the corresponding target expression message is then generated according to the target expression. For example, voice analysis is performed on the voice message "voice v1", the emotional feature corresponding to "voice v1" is determined to be "excited", the target expression "emoji e1" corresponding to the "excited" emotional feature is obtained by matching in the user equipment's local database, and the corresponding target emoticon message "e1" is generated according to the target expression "emoji e1".
  • the step S121 includes step S1211 (not shown) and step S1212 (not shown).
  • In step S1211, the user equipment performs voice analysis on the voice message to extract the voice features in the voice information; in step S1212, the user equipment determines the emotional feature corresponding to the voice features according to the voice features.
  • speech features include, but are not limited to, semantics, speech speed, intonation, and so on.
  • For example, the user equipment performs voice analysis on the voice message "voice v1" and extracts its semantics as "I am so happy to get paid today", its speech rate as "4 words per second", and its intonation as low first and then high, with rising momentum. According to the semantics, speech rate, and intonation, the emotional feature is determined to be "excited", as sketched below.
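A toy sketch of steps S1211 and S1212: extract semantics, speech rate, and intonation, then map them to an emotional feature. A real system would use speech recognition and prosody analysis; the threshold rules below are placeholders, not the patent's method.

```python
from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    semantics: str           # transcribed text of the voice message
    words_per_second: float  # speech rate
    intonation: str          # e.g. "flat", "high-rising", "falling", "zigzag"

def emotional_feature(f: VoiceFeatures) -> str:
    """Placeholder mapping from voice features to an emotional feature,
    mirroring the example: fast, rising, positive wording -> "excited"."""
    positive = any(w in f.semantics.lower() for w in ("happy", "glad"))
    if positive and f.words_per_second >= 4 and f.intonation == "high-rising":
        return "excited"
    if positive:
        return "happy"
    return "calm"

print(emotional_feature(
    VoiceFeatures("I am so happy to get paid today", 4.0, "high-rising")))
# -> excited
```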
  • In some embodiments, the step S122 includes: the user equipment matching the emotional feature against one or more pre-stored emotional features in an expression library to obtain the matching value corresponding to each pre-stored emotional feature, wherein the expression library stores a mapping relationship between pre-stored emotional features and corresponding expressions; obtaining the pre-stored emotional feature whose matching value is highest and reaches a predetermined matching threshold, and determining the expression corresponding to that pre-stored emotional feature as the target expression.
  • In some embodiments, the emoticon library may be maintained by the user equipment on the user equipment side, or maintained by the server on the server side; in the latter case the user equipment obtains the emoticon library from the response returned by the server after sending a request for it.
  • For example, the pre-stored emotional features in the expression library include "happy", "sad", and "fear", and the predetermined matching threshold is 70. If the emotional feature is "excited", matching it against the pre-stored emotional features yields matching values of 80, 10, and 20 respectively; "happy" is the pre-stored emotional feature with the highest matching value, and its value reaches the predetermined matching threshold, so the expression corresponding to "happy" is determined as the target expression. If instead the emotional feature is "calm", the matching values obtained are 30, 20, and 10 respectively; "happy" has the highest matching value but does not reach the predetermined matching threshold, so the matching fails and no target expression corresponding to the emotional feature "calm" is obtained. A sketch of this rule follows.
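A minimal sketch of the threshold rule using the numbers from this example. How the matching values themselves are computed is left open by the text, so they are passed in directly here:

```python
PREDETERMINED_MATCHING_THRESHOLD = 70

def best_target_expression(match_values: dict[str, int],
                           expression_library: dict[str, str]) -> str | None:
    """Return the expression mapped to the highest-scoring pre-stored
    emotional feature, or None when no score reaches the threshold."""
    feature, value = max(match_values.items(), key=lambda kv: kv[1])
    if value < PREDETERMINED_MATCHING_THRESHOLD:
        return None  # matching fails; no target expression is obtained
    return expression_library[feature]

library = {"happy": "emoji e1", "sad": "emoji e2", "fear": "emoji e3"}
print(best_target_expression({"happy": 80, "sad": 10, "fear": 20}, library))  # emoji e1
print(best_target_expression({"happy": 30, "sad": 20, "fear": 10}, library))  # None
```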
  • In some embodiments, the step S122 includes step S1221 (not shown) and step S1222 (not shown). In step S1221, the user equipment matches and obtains one or more expressions corresponding to the emotional feature according to the emotional feature; in step S1222, the user equipment obtains the target expression selected by the first user from the one or more expressions. For example, according to the emotional feature "happy", multiple expressions "emoji e1", "emoji e2", and "emoji e3" corresponding to "happy" are obtained by matching and presented on the conversation page, and the target expression "emoji e1" selected by the first user from these multiple expressions is then obtained.
  • In some embodiments, the step S1221 includes: the user equipment matching the emotional feature against one or more pre-stored emotional features in the emoticon library to obtain the matching value corresponding to each pre-stored emotional feature, wherein the expression library stores the mapping relationship between pre-stored emotional features and corresponding expressions; ranking the one or more pre-stored emotional features from high to low by their matching values; and determining the expressions corresponding to a predetermined number of top-ranked pre-stored emotional features as the one or more expressions corresponding to the emotional feature.
  • For example, the pre-stored emotional features in the expression library include "happy", "excited", "sad", and "fear". The emotional feature "excited" is matched against them, yielding matching values of 80, 90, 10, and 20 respectively. Ranked from high to low by matching value, the pre-stored emotional features are "excited", "happy", "fear", "sad", and the expressions corresponding to the top two, "excited" and "happy", are determined as the expressions corresponding to the emotional feature "excited" (see the sketch below).
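The candidate-list variant reduces to ranking by matching value and keeping a predetermined number of expressions for the user to choose from; a sketch with this example's numbers:

```python
def candidate_expressions(match_values: dict[str, int],
                          expression_library: dict[str, str],
                          predetermined_count: int = 2) -> list[str]:
    """Rank pre-stored emotional features by matching value, high to low,
    and return the expressions for the top `predetermined_count` features."""
    ranked = sorted(match_values, key=match_values.get, reverse=True)
    return [expression_library[f] for f in ranked[:predetermined_count]]

library = {"happy": "emoji e2", "excited": "emoji e1",
           "sad": "emoji e3", "fear": "emoji e4"}
scores = {"happy": 80, "excited": 90, "sad": 10, "fear": 20}
print(candidate_expressions(scores, library))
# -> ['emoji e1', 'emoji e2']; the first user then selects one of these
```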
  • the voice features include but are not limited to:
  • the semantic feature includes, but is not limited to, the actual meaning of a certain voice that the computer can understand.
  • For example, the semantic feature may be "I am happy to be paid today", "I am sad to fail an exam", etc.
  • In some embodiments, the speaking rate feature includes, but is not limited to, the vocabulary capacity contained in a voice per unit time. For example, the speaking rate feature can be "4 words per second", "100 words per minute", etc.
  • In some embodiments, intonation features include, but are not limited to, the rise and fall of the pitch of a voice, for example, flat tone, high-rising tone, falling tone, and zigzag tone. A flat tone is smooth and soothing, without obvious rise and fall, and is generally used for statements and explanations that carry no special feeling; it can also express dignity, seriousness, grief, or indifference. A high-rising tone is low in front and high behind, with rising momentum, and is generally used to express questions, rhetorical questions, surprise, calls, and so on. A falling tone is high in front and low behind, with gradually falling momentum. A zigzag tone bends the intonation, first rising and then falling or first falling and then rising, often stressing and prolonging the parts that need to be highlighted; it is often used to express exaggeration, irony, disgust, sarcasm, or doubt.
  • In some embodiments, the step S13 includes: the user equipment submitting to the first user a request asking whether to send the target emoticon message to the second user communicating with the first user on the conversation page; if the request is approved by the first user, generating an atomic conversation message and sending it to the second user via the social server, wherein the atomic conversation message includes the voice message and the target emoticon message; if the request is rejected by the first user, sending only the voice message to the second user via the social server.
  • For example, the text prompt "Confirm whether to send the target emoticon message" is presented on the conversation page, with a "Confirm" button and a "Cancel" button presented below it.
  • In some embodiments, the method further includes: the user equipment acquiring at least one of the personal information of the first user and one or more emoticons historically sent by the first user; wherein the step S122 includes: matching and obtaining a target expression corresponding to the emotional feature according to the emotional feature, in combination with at least one of the personal information of the first user and the one or more emoticons historically sent by the first user. For example, if the first user's personal information includes "gender: female", a cute target expression is preferentially matched; or, if the first user's personal information includes "hobby: watching anime", a target expression with an anime style is preferentially matched.
  • In some embodiments, the step S122 includes: the user equipment determining the emotional change trend corresponding to the emotional feature according to the emotional feature, and matching, according to the emotional change trend, multiple target expressions corresponding to the emotional change trend and the presentation order information corresponding to the multiple target expressions; wherein the step S123 includes: generating the target emoticon message corresponding to the voice message according to the multiple target expressions and the presentation order information corresponding to the multiple target expressions.
  • the emotion change trend includes, but is not limited to, the change sequence of multiple emotions and the start time and duration of each emotion.
  • In some embodiments, the presentation order information includes, but is not limited to, the time point, relative to the start of the voice message, at which each target expression is presented, and the length of time for which it is presented.
  • For example, the emotional change trend is crying first and then laughing: the first to fifth seconds of the voice message are crying, and the sixth to tenth seconds are laughing. Matching obtains "emoji e1" as the target expression corresponding to crying and "emoji e2" as the target expression corresponding to laughing; the presentation order information is to present "emoji e1" from the first to the fifth second of the voice message and "emoji e2" from the sixth to the tenth second, and the target emoticon message corresponding to the voice message is "e1: 1 second to 5 seconds, e2: 6 seconds to 10 seconds". A sketch of assembling this message follows.
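Assembling the multi-expression target emoticon message from this example might look like the sketch below. The segment encoding is an assumption modeled on the "e1: 1 second to 5 seconds" string above, not a format defined by the patent:

```python
def build_multi_expression_message(segments: list[tuple[str, int, int]]) -> str:
    """Encode (expression id, start second, end second) segments in the
    order the emotions occur, approximating the example string above."""
    return ", ".join(f"{expression_id}: {start}s to {end}s"
                     for expression_id, start, end in segments)

# Crying in seconds 1-5 maps to e1, laughing in seconds 6-10 maps to e2:
print(build_multi_expression_message([("e1", 1, 5), ("e2", 6, 10)]))
# -> "e1: 1s to 5s, e2: 6s to 10s"
```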
  • Fig. 2 shows a flowchart of a method for presenting session messages according to an embodiment of the present application.
  • the method includes step S21 and step S22.
  • In step S21, the user equipment receives the atomic conversation message sent by the first user via the social server, where the atomic conversation message includes the voice message of the first user and the target emoticon message corresponding to the voice message; in step S22, the user equipment presents the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page.
  • In step S21, the user equipment receives an atomic conversation message sent by a first user via a social server, where the atomic conversation message includes a voice message of the first user and a target emoticon message corresponding to the voice message. For example, it receives an atomic conversation message "voice: 'voice v1', emoticon: 'e1'" sent by the first user via the server, where the atomic conversation message includes the voice message "voice v1" and the corresponding target emoticon message "e1".
  • In step S22, the user equipment presents the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page. In some embodiments, the corresponding target expression is found through the target emoticon message, and the voice message and the target expression are displayed in the same message box. For example, the target emoticon message is "e1", where "e1" is the id of the target emoticon; this id is used to find the corresponding target emoticon e1 locally or from the server, and the voice message "voice v1" and the target emoticon e1 are displayed in the same message box, where the target emoticon e1 can be displayed at any position in the message box relative to the voice message "voice v1" (a lookup sketch follows).
  • the target emoticon message is generated on the first user equipment according to the voice message.
  • For example, the target emoticon message "e1" is automatically generated on the first user equipment according to the voice message "voice v1".
  • In some embodiments, the method further includes: the user equipment detecting whether the voice message and the target emoticon message have both been successfully received; wherein the step S22 includes: if the voice message and the target emoticon message have both been successfully received, presenting the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page; otherwise, ignoring the atomic conversation message.
  • For example, it is detected whether the voice message "voice v1" and the target emoticon message "e1" are both received successfully. If they are, the voice message and the target emoticon message are displayed in the same message box; otherwise, if only the target emoticon message is received but the voice message is not, or only the voice message is received but the target emoticon message is not, the received part is not displayed in the message box and is deleted from the user equipment, as sketched below.
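The all-or-nothing check can be sketched as below; the display and discard helpers are placeholders for the client's actual rendering and cleanup logic:

```python
def display_in_one_message_box(voice: bytes, emoticon: bytes) -> None:
    print("rendering voice and emoticon in the same message box")

def discard_partial_message() -> None:
    print("ignoring incomplete atomic conversation message")

def on_atomic_message_received(voice: bytes | None,
                               emoticon: bytes | None) -> None:
    """Present the pair only when both halves arrived; otherwise drop
    whatever did arrive, preserving the message's atomicity."""
    if voice is not None and emoticon is not None:
        display_in_one_message_box(voice, emoticon)
    else:
        discard_partial_message()  # delete any partially received content

on_atomic_message_received(b"voice v1", b"e1")  # rendered together
on_atomic_message_received(b"voice v1", None)   # ignored as incomplete
```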
  • In some embodiments, the display position of the target emoticon message relative to the voice message in the same message box matches the relative position, within the recording period of the voice message, of the moment at which the target emoticon message was selected. For example, if the target emoticon message was selected after the voice message had been fully entered, it is accordingly displayed at the end of the voice message; if it was selected halfway through the entry of the voice message, it is accordingly displayed in the middle of the voice message.
  • In some embodiments, the method further includes: the user equipment determining the relative positional relationship of the target emoticon message and the voice message in the same message box according to the relative position, within the recording period of the voice message, of the moment at which the target emoticon message was selected. For example, if the target emoticon message was selected one third of the way through the entry of the voice message, its display position is determined to be at one third of the display length of the voice message, and the target emoticon message is displayed in the message box at one third of the display length of the voice message (see the sketch below).
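The placement rule reduces to a single ratio. A sketch, with a pixel width standing in for however the client measures the voice bar:

```python
def emoticon_x_offset(selected_at_seconds: float,
                      recording_length_seconds: float,
                      voice_bar_width_px: int) -> int:
    """Place the emoticon at the same fraction of the voice bar's width as
    the fraction of the recording at which it was selected."""
    fraction = selected_at_seconds / recording_length_seconds
    return round(fraction * voice_bar_width_px)

# Selected one third of the way through a 15 s recording, on a 300 px bar:
print(emoticon_x_offset(5, 15, 300))  # -> 100, i.e. one third of the bar
```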
  • the method further includes: the user equipment plays the atomic session message in response to the second user's play triggering operation of the atomic session message.
  • In some embodiments, playing the atomic conversation message may include: playing the voice message; and presenting the target emoticon message on the conversation page in a second presentation mode, wherein before the voice message is played the target emoticon message is presented in the same message box in a first presentation mode. For example, if the second user clicks the voice message presented on the conversation page, the voice message in the atomic conversation message starts to play; at this time, if the target emoticon message has a background sound, the background sound of the target emoticon message can be played while the voice message is playing.
  • In some embodiments, the first presentation mode includes, but is not limited to, a bubble in a message box, an icon or thumbnail in the message box, or a general indicator (for example, a small red dot) indicating that a corresponding expression will be presented after the voice message is played.
  • In some embodiments, the second presentation mode includes, but is not limited to, a picture or animation displayed anywhere on the conversation page, or a dynamic effect of the message box bubble. For example, before the voice message is played, the target emoticon message is displayed in the message box as a small "smile" icon; after the voice message is played, the target emoticon message is displayed as a larger "smile" picture.
  • For example, the presentation may be displayed in the middle of the conversation page; or the target emoticon message may be presented on the conversation page in the form of a message box bubble; or the target emoticon message may be presented on the conversation page as a dynamic effect of the message box bubble.
  • In some embodiments, the second presentation mode is adapted to the current playback content or playback speed of the voice message. For example, the animation frequency of the target expression information in the second presentation mode is adapted to the current playback content or playback speed of the voice message: where the voice is played back faster, the target expression information is presented with a higher animation frequency, as sketched below.
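One way to realize this adaptation is to scale the expression's animation rate with the measured speech rate. The base rate and reference values here are arbitrary placeholders:

```python
def animation_frames_per_second(words_per_second: float,
                                base_fps: float = 8.0,
                                reference_rate: float = 3.0) -> float:
    """Speed the target expression's animation up or down in proportion to
    how fast the current stretch of the voice message is spoken."""
    return base_fps * (words_per_second / reference_rate)

print(animation_frames_per_second(4.0))  # faster speech -> higher frequency
print(animation_frames_per_second(2.0))  # slower speech -> lower frequency
```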
  • In some embodiments, the method further includes: the user equipment converting the voice message into text information in response to the second user's convert-to-text trigger operation on the voice message, wherein the display position of the target emoticon message in the text information matches its display position relative to the voice message. For example, if the target emoticon message is displayed at the end of the voice message and the user long-presses the voice message, the voice message is converted into text information and the target emoticon message is displayed at the end of the text information. Or, if the target emoticon message is displayed in the middle of the voice message, long-pressing the voice message brings up an operation menu on the conversation page; clicking the "Convert to text" button in the operation menu converts the voice message into text information, and the target emoticon message is displayed in the middle of the text information.
  • In some embodiments, the step S22 includes: the user equipment obtaining, according to the target emoticon message, multiple target expressions matching the voice message and the presentation order information corresponding to the multiple target expressions; and presenting the atomic conversation message in the conversation page of the first user and the second user, wherein the multiple target expressions are presented, according to the presentation order information, in the same message box in the conversation page as the voice message.
  • For example, the target emoticon message is "e1: 1 second to 5 seconds, e2: 6 seconds to 10 seconds", where the target expression corresponding to e1 is "emoji e1" and the target expression corresponding to e2 is "emoji e2". The target expressions matching the voice message are therefore "emoji e1" and "emoji e2", and the presentation order information is to present "emoji e1" from the first to the fifth second of the voice message and "emoji e2" from the sixth to the tenth second. If the total duration of the voice message is 15 seconds, "emoji e1" is displayed in the message box at one third of the display length of the voice message, and "emoji e2" is displayed at two thirds of the display length of the voice message (see the sketch below).
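The worked example's positions follow from applying the same ratio rule per segment, using each segment's end time against the total duration (5/15 gives one third, 10/15 gives two thirds):

```python
def segment_positions(segments: list[tuple[str, int, int]],
                      total_seconds: int) -> dict[str, float]:
    """Map each expression id to the fraction of the voice bar at which it
    is displayed, taking the end of its time segment as the anchor."""
    return {expression_id: end / total_seconds
            for expression_id, _start, end in segments}

print(segment_positions([("e1", 1, 5), ("e2", 6, 10)], 15))
# -> {'e1': 0.333..., 'e2': 0.666...}: one third and two thirds of the bar
```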
  • FIG. 3 shows a flowchart of a system method for presenting conversation messages according to some embodiments of the present application
  • step S31 the first user equipment responds to the first user's voice input triggering operation on the conversation page to start recording a voice message.
  • Step S31 is the same as or similar to the foregoing step S11, and will not be repeated here;
  • In step S32, the first user equipment determines the target emoticon message corresponding to the voice message in response to the first user's trigger operation for sending the voice message; step S32 is the same as or similar to the foregoing step S12, and will not be repeated here;
  • In step S33, the first user equipment generates an atomic conversation message and sends the atomic conversation message via the social server to a second user communicating with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message; step S33 is the same as or similar to the aforementioned step S13, and will not be repeated here;
  • In step S34, the second user equipment receives the atomic conversation message sent by the first user via the social server, where the atomic conversation message includes the voice message of the first user and the target emoticon message corresponding to the voice message; step S34 is the same as or similar to the foregoing step S21, and will not be repeated here;
  • In step S35, the second user equipment presents the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page; step S35 is the same as or similar to the foregoing step S22, and will not be repeated here.
  • FIG. 4 shows a device for sending a session message according to an embodiment of the present application.
  • In some embodiments, the device includes a one-one module 11, a one-two module 12, and a one-three module 13.
  • The one-one module 11 is used to start recording a voice message in response to the first user's voice input trigger operation on the conversation page;
  • the one-two module 12 is used to determine the target emoticon message corresponding to the voice message in response to the first user's trigger operation for sending the voice message;
  • the one-three module 13 is used to generate an atomic conversation message and send the atomic conversation message via the social server to the second user who communicates with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • In some embodiments, the one-one module 11 is used to start recording a voice message in response to a voice input trigger operation of the first user on the conversation page. The voice input trigger operation includes, but is not limited to, clicking the voice input button on the conversation page, pressing and holding the voice input area of the conversation page without releasing the finger, a certain predetermined gesture operation, and so on. For example, the first user presses and holds the voice input area of the conversation page, and recording of the voice message starts.
  • the one-two module 12 is configured to determine the target emoticon message corresponding to the voice message in response to the triggering operation of the voice message sent by the first user.
  • In some embodiments, the trigger operation for sending the voice message includes, but is not limited to, clicking the voice send button on the conversation page, clicking an emoticon on the conversation page, releasing the finger from the screen after pressing the voice input area of the conversation page to record the voice, a predetermined gesture operation, and so on.
  • In some embodiments, the target emoticon message includes, but is not limited to, the id corresponding to the emoticon, the URL link corresponding to the emoticon, the character string generated by Base64-encoding the emoticon image, the InputStream byte input stream corresponding to the emoticon image, a specific character string corresponding to the emoticon (for example, the specific character string corresponding to an "arrogance" emoticon is "[arrogance]"), and so on.
  • For example, when the user clicks the voice send button on the conversation page, voice analysis is performed on the entered voice message "voice v1" to obtain the user emotion corresponding to "voice v1", the expression "emoji e1" corresponding to that emotion is obtained by matching and used as the target expression for the voice message "voice v1", and the corresponding target emoticon message "e1" is generated from the target expression "emoji e1".
  • The one-three module 13 is used to generate an atomic conversation message and send the atomic conversation message via a social server to a second user who communicates with the first user on the conversation page, wherein the atomic conversation message includes the voice message and the target emoticon message.
  • the second user may be a social user who has a one-to-one conversation with the first user, or may be multiple social users in a group conversation.
  • In some embodiments, the first user encapsulates the voice message and the emoticon message into one atomic conversation message and sends it to the second user; the voice message and the emoticon message are either both sent successfully or both fail to send, and they are presented in the same message box as one atomic conversation message on the second user's conversation page. This avoids the problem that, in a group conversation, sending the voice message and the emoticon message as two messages may be interrupted by other users' conversation messages and affect the smoothness of the user's expression.
  • For example, the voice message is "voice v1" and the target emoticon message is "e1". An atomic conversation message "voice: 'voice v1', emoticon: 'e1'" is generated and sent to the social server, and the social server sends the atomic conversation message to the second user equipment used by the second user who communicates with the first user on the conversation page.
  • In some embodiments, the one-two module 12 includes a one-two-one module 121 (not shown), a one-two-two module 122 (not shown), and a one-two-three module 123 (not shown).
  • the one-two-one module 121 is used to perform voice analysis on the voice message to determine the emotional feature corresponding to the voice message;
  • the one-two-two module 122 is used to match and obtain the target expression corresponding to the emotional feature according to the emotional feature;
  • the one-two-three module 123 is used to generate the target expression message corresponding to the voice message according to the target expression.
  • Here, the specific implementations of the one-two-one module 121, the one-two-two module 122, and the one-two-three module 123 are the same as or similar to the embodiments of steps S121, S122, and S123 in Fig. 1, so they will not be repeated here, and are included here by reference.
  • the one-two-one module 121 includes a two-one-one module 1211 (not shown) and a two-one-two module 1212 (not shown).
  • the one-two-one-one module 1211 is used to perform voice analysis on the voice message to extract the voice features in the voice information;
  • the one-two-one-two module 1212 is used to determine the emotional feature corresponding to the voice features according to the voice features.
  • Here, the specific implementations of the one-two-one-one module 1211 and the one-two-one-two module 1212 are the same as or similar to the embodiments of steps S1211 and S1212 in Fig. 1, so they will not be repeated here, and are included here by reference.
  • In some embodiments, the one-two-two module 122 is configured to: match the emotional feature against one or more pre-stored emotional features in the emoticon library to obtain the matching value corresponding to each pre-stored emotional feature, wherein the expression library stores a mapping relationship between pre-stored emotional features and corresponding expressions; obtain the pre-stored emotional feature whose matching value is highest and reaches a predetermined matching threshold, and determine the expression corresponding to that pre-stored emotional feature as the target expression.
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 1, so they will not be repeated here, and are included here by reference.
  • the one-two-two module 122 includes a one-two-two-one module 1221 (not shown) and a one-two-two-two module 1222 (not shown).
  • the one-two-two-one module 1221 is used to match and obtain one or more expressions corresponding to the emotional feature according to the emotional feature; the one-two-two-two module 1222 is used to obtain the target expression selected by the first user from the one or more expressions.
  • the specific implementation of the one-two-two-one module 1221 and the one-two-two-two module 1222 are the same as or similar to the embodiment of steps S1221 and S1222 in FIG. 1, so they will not be repeated here, and they are included here by reference.
  • In some embodiments, the one-two-two-one module 1221 is configured to: match the emotional feature against one or more pre-stored emotional features in the emoticon library to obtain the matching value corresponding to each pre-stored emotional feature, wherein the expression library stores the mapping relationship between pre-stored emotional features and corresponding expressions; rank the one or more pre-stored emotional features from high to low by their matching values; and determine the expressions corresponding to a predetermined number of top-ranked pre-stored emotional features as the one or more expressions corresponding to the emotional feature.
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 1, so they will not be repeated here, and are included here by reference.
  • The voice features include, but are not limited to, the semantic feature, the speaking rate feature, and the intonation feature described above.
  • In some embodiments, the one-three module 13 is configured to: submit to the first user a request asking whether to send the target emoticon message to the second user communicating with the first user on the conversation page; if the request is approved by the first user, generate an atomic conversation message and send it to the second user via a social server, wherein the atomic conversation message includes the voice message and the target emoticon message; if the request is rejected by the first user, send only the voice message to the second user via a social server.
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 1, so they will not be repeated here, and are included here by reference.
  • In some embodiments, the device is further configured to: obtain at least one of the personal information of the first user and one or more emoticons historically sent by the first user; wherein the one-two-two module 122 is used to: match and obtain a target expression corresponding to the emotional feature according to the emotional feature, in combination with at least one of the personal information of the first user and the one or more emoticons historically sent by the first user.
  • the related operations are the same as or similar to those in the embodiment shown in FIG. 1, so they will not be repeated here, and they are included here by reference.
  • In some embodiments, the device is further configured to: obtain one or more emoticons historically sent by the first user; wherein the one-two-two module 122 is configured to: match and obtain a target expression corresponding to the emotional feature according to the emotional feature, in combination with the one or more emoticons historically sent by the first user.
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 1, so they will not be repeated here, and are included here by reference.
  • In some embodiments, the one-two-two module 122 is configured to: determine the emotional change trend corresponding to the emotional feature according to the emotional feature, and match, according to the emotional change trend, multiple target expressions corresponding to the emotional change trend and the presentation order information corresponding to the multiple target expressions; wherein the one-two-three module 123 is configured to generate the target emoticon message corresponding to the voice message according to the multiple target expressions and the presentation order information corresponding to the multiple target expressions.
  • FIG. 5 shows a device for presenting session messages according to an embodiment of the present application.
  • the device includes a two-one module 21 and a two-two module 22.
  • The two-one module 21 is configured to receive an atomic conversation message sent by a first user via a social server, where the atomic conversation message includes a voice message of the first user and a target emoticon message corresponding to the voice message; the two-two module 22 is configured to present the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page.
  • In some embodiments, the two-one module 21 is configured to receive an atomic conversation message sent by a first user via a social server, where the atomic conversation message includes a voice message of the first user and a target emoticon message corresponding to the voice message. For example, it receives an atomic conversation message "voice: 'voice v1', emoticon: 'e1'" sent by the first user via the server, where the atomic conversation message includes the voice message "voice v1" and the corresponding target emoticon message "e1".
  • In some embodiments, the two-two module 22 is configured to present the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page. In some embodiments, the corresponding target expression is found through the target emoticon message, and the voice message and the target expression are displayed in the same message box.
  • For example, the target emoticon message is "e1", where "e1" is the id of the target emoticon; this id is used to find the corresponding target emoticon e1 locally or from the server, and the voice message "voice v1" and the target emoticon e1 are displayed in the same message box, where the target emoticon e1 can be displayed at any position in the message box relative to the voice message "voice v1".
  • the target emoticon message is generated on the first user equipment according to the voice message.
  • the relevant target emoticon message is the same as or similar to the embodiment shown in FIG. 2, so it will not be repeated here, and it is included here by reference.
  • In some embodiments, the device is further configured to: detect whether the voice message and the target emoticon message have both been successfully received; wherein the two-two module 22 is configured to: if the voice message and the target emoticon message have both been successfully received, present the atomic conversation message in the conversation page of the first user and the second user, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page; otherwise, ignore the atomic conversation message.
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 2, so they will not be repeated here, and are included here by reference.
  • In some embodiments, the display position of the target emoticon message relative to the voice message in the same message box matches the relative position, within the recording period of the voice message, of the moment at which the target emoticon message was selected.
  • the relevant target emoticon message is the same as or similar to the embodiment shown in FIG. 2, so it will not be repeated here, and it is included here by reference.
  • In some embodiments, the device is further configured to: determine the relative positional relationship of the target emoticon message and the voice message in the same message box according to the relative position, within the recording period of the voice message, of the moment at which the target emoticon message was selected; the two-two module 22 is configured to: present the atomic conversation message in the conversation page of the first user and the second user according to the relative positional relationship, wherein the voice message and the target emoticon message are presented in the same message box in the conversation page, and the display position of the target emoticon message relative to the voice message in the same message box matches the relative positional relationship.
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 2, so they will not be repeated here, and are included here by reference.
  • the device is further configured to: in response to the second user's play-triggering operation on the atomic conversation message, play the atomic conversation message.
  • said playing the atomic conversation message may include: playing the voice message; and presenting the target emoticon message on the conversation page in a second presentation mode, wherein the target emoticon message is presented in the same message box in a first presentation mode before the voice message is played (a combined sketch follows below).
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 2, so they will not be repeated here, and are included here by reference.
  • the second presentation mode is adapted to the current playback content or playback speed of the voice message.
  • the related second presentation mode is the same as or similar to that in the embodiment shown in FIG. 2, so it is not repeated here and is incorporated herein by reference.
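The two presentation modes and the speed adaptation might be wired together as follows; the player and renderer interfaces are invented for this sketch, and the message shape is reused from the first sketch:

```typescript
// Hedged sketch of the play-triggering behavior: while the voice message
// plays, the emoticon switches from a static first presentation mode to an
// animated second mode whose rate follows the playback speed; when playback
// ends, it reverts to the first mode.
interface EmoticonRenderer {
  setMode(mode: "static" | "animated", animationRate?: number): void;
}

interface VoicePlayer {
  playbackRate: number;             // e.g. 1.0 or 2.0
  play(url: string): Promise<void>; // resolves when playback finishes
}

async function playAtomicMessage(
  msg: AtomicConversationMessage,
  player: VoicePlayer,
  renderer: EmoticonRenderer,
): Promise<void> {
  renderer.setMode("animated", player.playbackRate); // second presentation mode
  try {
    await player.play(msg.voice.audioUrl);
  } finally {
    renderer.setMode("static"); // back to the first presentation mode
  }
}
```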
  • the device is further configured to: in response to the second user's text-conversion triggering operation on the voice message, convert the voice message into text information, wherein the display position of the target emoticon message in the text information matches the display position of the target emoticon message relative to the voice message (a transcript sketch follows below).
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 2, so they will not be repeated here, and are included here by reference.
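That positional match could be carried over into the converted text as sketched below, assuming the speech recognizer returns per-word timestamps; the TimedWord shape is invented here:

```typescript
// Insert the emoticon tag into the converted text at the point corresponding
// to the moment it was selected during recording.
interface TimedWord {
  text: string;
  startMs: number; // when the word begins within the recording
}

function transcriptWithEmoticon(
  words: TimedWord[],
  selectedAtMs: number,
  emoticonTag: string,
): string {
  const parts: string[] = [];
  let inserted = false;
  for (const word of words) {
    if (!inserted && word.startMs >= selectedAtMs) {
      parts.push(emoticonTag);
      inserted = true;
    }
    parts.push(word.text);
  }
  if (!inserted) parts.push(emoticonTag); // selection fell after the last word
  return parts.join(" ");
}

// transcriptWithEmoticon(
//   [{ text: "see", startMs: 0 }, { text: "you", startMs: 1200 }, { text: "soon", startMs: 2400 }],
//   2000,
//   "[e1]",
// ) returns "see you [e1] soon".
```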
  • the two-two module 22 is configured to: obtain, according to the target emoticon message, multiple target emoticons matching the voice message and presentation order information corresponding to the multiple target emoticons; and present the atomic conversation message on the conversation page of the first user and the second user, wherein the multiple target emoticons are presented, according to the presentation order information, in the same message box in the conversation page as the voice message (an ordering sketch follows below).
  • the related operations are the same as or similar to those of the embodiment shown in FIG. 2, so they will not be repeated here, and are included here by reference.
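A sketch of that ordering step; the numeric order field stands in for whatever form the presentation order information actually takes:

```typescript
// Return the emoticon ids in the order given by the presentation order
// information, ready to be rendered left to right in the message box.
interface OrderedEmoticon {
  emoticonId: string;
  order: number; // presentation order information
}

function inPresentationOrder(emoticons: OrderedEmoticon[]): string[] {
  return [...emoticons]
    .sort((a, b) => a.order - b.order)
    .map((e) => e.emoticonId);
}

// inPresentationOrder([{ emoticonId: "e2", order: 2 }, { emoticonId: "e1", order: 1 }])
// returns ["e1", "e2"].
```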
  • Figure 6 shows an exemplary system that can be used to implement the various embodiments described in this application.
  • the system 300 can be used as any device in each of the described embodiments.
  • the system 300 may include one or more computer-readable media having instructions (for example, system memory or NVM/storage device 320) and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
  • the system control module 310 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with the system control module 310.
  • the system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315.
  • the memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
  • the system memory 315 may be used to load and store data and/or instructions for the system 300, for example.
  • the system memory 315 may include any suitable volatile memory, such as a suitable DRAM.
  • the system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 310 may include one or more input/output (I/O) controllers to provide an interface to the NVM/storage device 320 and the communication interface(s) 325.
  • NVM/storage device 320 can be used to store data and/or instructions.
  • the NVM/storage device 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • the NVM/storage device 320 may include storage resources that are physically part of the device on which the system 300 is installed, or it may be accessed by the device and not necessarily be a part of the device. For example, the NVM/storage device 320 may be accessed via the communication interface(s) 325 through the network.
  • the communication interface(s) 325 may provide an interface for the system 300 to communicate through one or more networks and/or with any other suitable devices.
  • the system 300 can wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 (e.g., the memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated with the logic of one or more controllers of the system control module 310 on the same die. For one embodiment, at least one of the processor(s) 305 may be integrated with the logic of one or more controllers of the system control module 310 on the same die to form a system on chip (SoC).
  • the system 300 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.).
  • the system 300 may have more or fewer components and/or different architectures.
  • the system 300 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touchscreen displays), non-volatile memory ports, multiple antennas, graphics chips, application-specific integrated circuits (ASICs), and speakers.
  • the present application also provides a computer-readable storage medium that stores computer code, and when the computer code is executed, the method described in any of the preceding items is executed.
  • the present application also provides a computer program product; when the computer program product is executed by a computer device, the method described in any of the preceding items is executed.
  • This application also provides a computer device, which includes:
  • one or more processors;
  • a memory for storing one or more computer programs;
  • wherein, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any one of the preceding items.
  • this application can be implemented in software and/or a combination of software and hardware; for example, it can be implemented by an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device.
  • the software program of the present application may be executed by a processor to realize the steps or functions described above.
  • the software program (including related data structure) of the present application can be stored in a computer-readable recording medium, such as RAM memory, magnetic or optical drive or floppy disk and similar devices.
  • some steps or functions of the present application may be implemented by hardware, for example, as a circuit that cooperates with a processor to execute each step or function.
  • the computer program instructions in the computer-readable medium include but are not limited to source files, executable files, installation package files, etc.
  • the manner in which computer program instructions are executed by the computer includes, but is not limited to: the computer directly executing the instructions; the computer compiling the instructions and then executing the corresponding compiled program; the computer reading and executing the instructions; or the computer reading and installing the instructions and then executing the corresponding installed program.
  • the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
  • Communication media includes media by which communication signals containing, for example, computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another system.
  • Communication media can include conductive transmission media (such as cables and wires (for example, optical fiber, coaxial, etc.)) and wireless (unguided transmission) media that can propagate energy waves, such as sound, electromagnetic, RF, microwave, and infrared.
  • Computer-readable instructions, data structures, program modules, or other data may be embodied as, for example, a modulated data signal in a wireless medium such as a carrier wave or a similar mechanism, for example as part of spread-spectrum technology.
  • The term "modulated data signal" refers to a signal whose one or more characteristics have been altered or set in such a way as to encode information in the signal. The modulation can be an analog, digital, or hybrid modulation technique.
  • a computer-readable storage medium may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); and other media, currently known or developed in the future, that can store computer-readable information/data for use by computer systems.
  • an embodiment according to the present application includes a device including a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.

Abstract

The invention relates to a method and device for sending a conversation message. The method comprises: starting the recording of a voice message in response to a voice-input triggering operation by a first user on a conversation page; determining a target emoticon message corresponding to the voice message in response to the first user triggering a sending operation of the voice message; and generating an atomic conversation message and sending it, via a social server, to a second user communicating with the first user on the conversation page, the atomic conversation message comprising the voice message and the target emoticon message. The present invention enables users to express emotions more precisely and vividly, which improves the efficiency of sending emoticon messages and improves the user experience. It further avoids the problem whereby a voice message and an emoticon message sent as two separate messages in a group conversation can be interrupted by conversation messages from other users, impairing the fluency of the user's expression.
PCT/CN2020/103032 2019-07-23 2020-07-20 Procédé et dispositif d'envoi de message de conversation WO2021013126A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910667026.4A CN110311858B (zh) 2019-07-23 2019-07-23 一种发送会话消息的方法与设备
CN201910667026.4 2019-07-23

Publications (1)

Publication Number Publication Date
WO2021013126A1 true WO2021013126A1 (fr) 2021-01-28

Family

ID=68081704

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103032 WO2021013126A1 (fr) 2019-07-23 2020-07-20 Procédé et dispositif d'envoi de message de conversation

Country Status (2)

Country Link
CN (1) CN110311858B (fr)
WO (1) WO2021013126A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110311858B (zh) * 2019-07-23 2022-06-07 上海盛付通电子支付服务有限公司 一种发送会话消息的方法与设备
CN110943908A (zh) * 2019-11-05 2020-03-31 上海盛付通电子支付服务有限公司 语音消息发送方法、电子设备及介质
CN112235183B (zh) * 2020-08-29 2021-11-12 上海量明科技发展有限公司 通信消息处理方法、设备及即时通信客户端
CN114780190B (zh) * 2022-04-13 2023-12-22 脸萌有限公司 消息处理方法、装置、电子设备及存储介质
CN115460166A (zh) * 2022-09-06 2022-12-09 网易(杭州)网络有限公司 即时语音通信方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102830977A (zh) * 2012-08-21 2012-12-19 上海量明科技发展有限公司 即时通信录制中添加插入型数据的方法、客户端及系统
CN105989165A (zh) * 2015-03-04 2016-10-05 深圳市腾讯计算机系统有限公司 在即时聊天工具中播放表情信息的方法、装置及系统
CN106161215A (zh) * 2016-08-31 2016-11-23 维沃移动通信有限公司 一种信息发送方法及移动终端
CN106888158A (zh) * 2017-02-28 2017-06-23 努比亚技术有限公司 一种即时通信方法和装置
CN107516533A (zh) * 2017-07-10 2017-12-26 阿里巴巴集团控股有限公司 一种会话信息处理方法、装置、电子设备
CN109859776A (zh) * 2017-11-30 2019-06-07 阿里巴巴集团控股有限公司 一种语音编辑方法以及装置
CN110311858A (zh) * 2019-07-23 2019-10-08 上海盛付通电子支付服务有限公司 一种发送会话消息的方法与设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106383648A (zh) * 2015-07-27 2017-02-08 青岛海信电器股份有限公司 一种智能终端语音显示的方法和装置
US20170185581A1 (en) * 2015-12-29 2017-06-29 Machine Zone, Inc. Systems and methods for suggesting emoji
CN106899486B (zh) * 2016-06-22 2020-09-25 阿里巴巴集团控股有限公司 一种消息显示方法及装置
CN106789581A (zh) * 2016-12-23 2017-05-31 广州酷狗计算机科技有限公司 即时通讯方法、装置及系统
CN107040452B (zh) * 2017-02-08 2020-08-04 浙江翼信科技有限公司 一种信息处理方法、装置和计算机可读存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102830977A (zh) * 2012-08-21 2012-12-19 上海量明科技发展有限公司 即时通信录制中添加插入型数据的方法、客户端及系统
CN105989165A (zh) * 2015-03-04 2016-10-05 深圳市腾讯计算机系统有限公司 在即时聊天工具中播放表情信息的方法、装置及系统
CN106161215A (zh) * 2016-08-31 2016-11-23 维沃移动通信有限公司 一种信息发送方法及移动终端
CN106888158A (zh) * 2017-02-28 2017-06-23 努比亚技术有限公司 一种即时通信方法和装置
CN107516533A (zh) * 2017-07-10 2017-12-26 阿里巴巴集团控股有限公司 一种会话信息处理方法、装置、电子设备
CN109859776A (zh) * 2017-11-30 2019-06-07 阿里巴巴集团控股有限公司 一种语音编辑方法以及装置
CN110311858A (zh) * 2019-07-23 2019-10-08 上海盛付通电子支付服务有限公司 一种发送会话消息的方法与设备

Also Published As

Publication number Publication date
CN110311858A (zh) 2019-10-08
CN110311858B (zh) 2022-06-07

Similar Documents

Publication Publication Date Title
WO2021013126A1 (fr) Procédé et dispositif d'envoi de message de conversation
WO2021013125A1 (fr) Procédé et dispositif d'envoi de message de conversation
JP6492069B2 (ja) 環境を認識した対話ポリシーおよび応答生成
US11755296B2 (en) Computer device and method for facilitating an interactive conversational session with a digital conversational character
KR20230169052A (ko) 복수의 지능형 개인 비서 서비스를 위한 관리 계층
JP6467554B2 (ja) メッセージ送信方法、メッセージ処理方法及び端末
CN110234032B (zh) 一种语音技能创建方法及系统
US10973458B2 (en) Daily cognitive monitoring of early signs of hearing loss
KR20080019255A (ko) 상호작용 멀티미디어 프리젠테이션을 위한 상태 기초타이밍
JP2019015951A (ja) 電子機器のウェイクアップ方法、装置、デバイス及びコンピュータ可読記憶媒体
WO2022142619A1 (fr) Procédé et dispositif d'appel audio ou vidéo privé
US8868419B2 (en) Generalizing text content summary from speech content
EP3292480A1 (fr) Techniques de génération automatique de signets pour des fichiers multimédia
JP2022020659A (ja) 通話中の感情を認識し、認識された感情を活用する方法およびシステム
US10901688B2 (en) Natural language command interface for application management
WO2021218535A1 (fr) Procédés de génération et de déclenchement de commande d'interface utilisateur et terminal
WO2021147930A1 (fr) Procédé et dispositif de collage de messages
WO2023246275A1 (fr) Procédé et appareil permettant de lire un message vocal, terminal, et support de stockage
WO2024016901A1 (fr) Procédé et appareil d'invite d'informations à base de paroles, dispositif, support et produit
KR20140111574A (ko) 오디오 명령에 따른 동작을 수행하는 장치 및 방법
JP7331044B2 (ja) 情報処理方法、装置、システム、電子機器、記憶媒体およびコンピュータプログラム
WO2022142618A1 (fr) Procédé et dispositif d'exécution d'instruction au moyen d'un robot de conférence virtuelle
CN115719053A (zh) 一种呈现读物标注信息的方法与设备
CN113590871A (zh) 一种音频分类方法、装置及计算机可读存储介质
CN112667774B (zh) 一种在作品创作过程中提供创作帮助信息的方法与设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20845005

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20845005

Country of ref document: EP

Kind code of ref document: A1