WO2016197767A2 - Method and device for inputting expression, terminal, and computer readable storage medium - Google Patents

Method and device for inputting expression, terminal, and computer readable storage medium Download PDF

Info

Publication number
WO2016197767A2
Authority
WO
WIPO (PCT)
Prior art keywords
user
expression
input
session record
scene
Prior art date
Application number
PCT/CN2016/081324
Other languages
French (fr)
Chinese (zh)
Other versions
WO2016197767A3 (en)
Inventor
陈军 (Chen Jun)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2016197767A2 publication Critical patent/WO2016197767A2/en
Publication of WO2016197767A3 publication Critical patent/WO2016197767A3/en

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/30 - Semantic analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/274 - Converting codes to words; Guess-ahead of partial word inputs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 - Social networking

Definitions

  • the present application relates to, but is not limited to, the field of communications, and in particular, to an expression input method, apparatus, terminal, and computer readable storage medium.
  • the invention provides an expression input method, device, terminal and computer readable storage medium, which solves the problem that the expression input method in the related art is time consuming and laborious, and the user experience is poor.
  • An expression input method includes:
  • the session record is analyzed to obtain the emotional tendency information of the first user.
  • An expression matching the first user's sentiment orientation information in the expression library is placed in the input candidate pool.
  • the real-time chat content input by the first user through the instant messaging application is obtained in real time, and the corresponding candidate expression is obtained from the input candidate pool according to the real-time chat content for the first user to select.
  • analyzing the session record to obtain the emotional tendency information of the first user includes:
  • Semantic analysis is performed on the text data to obtain scene features and/or emotional features of the current session.
  • the sentiment orientation information of the first user is determined according to the scene feature and/or the emotional feature of the current session.
  • performing semantic analysis on the text data to obtain scene features and/or emotional features of the current session includes:
  • Semantic analysis is performed on the chat content input by the first user in the text data to obtain an emotional feature of the first user, wherein different emotional features have different emotional feature values.
  • generating corresponding text data according to the session record includes:
  • when an emoticon exists in the session record, the emoticon in the session record is converted into corresponding tag text;
  • generating corresponding text data according to the session record includes:
  • the voice content is identified to obtain a corresponding text
  • the voice content in the session record is converted into corresponding text, and the corresponding text data of the session record is generated.
  • the method further includes: prior to the step of placing the expressions matching the first user's sentiment orientation information into the input candidate pool, assigning respective scene key values and/or emotional feature values to the expressions in the expression library.
  • the real-time chat content input by the first user through the instant messaging application is obtained in real time, and the corresponding candidate expression is obtained by matching the real-time chat content from the input candidate pool for the first user to select:
  • An expression input device comprising:
  • the obtaining module is configured to obtain a session record of the instant messaging application, and the session record records the chat content of the first user and other users;
  • the analysis module is configured to analyze the session record to obtain the emotional tendency information of the first user
  • a first processing module configured to place an expression in the expression library that matches the sentiment orientation information of the first user into the input candidate pool
  • the second processing module is configured to acquire real-time chat content input by the first user through the instant messaging application in real time, and obtain corresponding candidate expressions for selection by the first user according to the real-time chat content matching from the input candidate pool.
  • the analysis module comprises:
  • Generating a unit configured to generate corresponding text data according to the session record
  • the analyzing unit is configured to perform semantic analysis on the text data to obtain scene features and/or emotional features of the current session;
  • the first processing unit is configured to determine the sentiment orientation information of the first user according to the scene feature and/or the emotional feature of the current session.
  • the analysis unit comprises:
  • the first analysis sub-unit is configured to perform semantic analysis on the conversation subject content of the text data to obtain a scene feature of the current session, wherein different scene features have different scene key values; and/or,
  • the second analysis subunit is configured to perform semantic analysis on the chat content input by the first user in the text data to obtain an emotional feature of the first user, wherein different emotional features have different emotional feature values.
  • the generating unit includes:
  • a first conversion subunit configured to convert an emoticon in the session record into a corresponding label text when an emoticon exists in the session record
  • the first generation subunit is configured to generate text data according to the converted session record.
  • the generating unit further includes:
  • the second conversion subunit is configured to recognize the voice content to obtain corresponding text when voice content exists in the session record;
  • the second generation subunit is configured to convert the voice content in the session record into corresponding text, and generate corresponding text data of the session record.
  • the expression input device further includes:
  • the pre-processing module is configured to: before the first processing module puts the expressions matching the sentiment orientation information of the first user in the expression library into the input candidate pool, assign respective scene key values and/or emotional feature values to the expressions in the expression library.
  • the second processing module includes:
  • a first acquiring unit configured to acquire, in real time, a keyword in the real-time chat content input by the first user through the instant messaging application;
  • a first calculating unit configured to calculate a similarity between the keyword and the label text or the index word of the expression in the input candidate pool
  • a first matching unit configured to match the expression in the input candidate pool with the highest similarity to the keyword for the first user to select
  • a second acquiring unit configured to acquire, in real time, a keyword in the real-time chat content input by the first user through the instant messaging application
  • a second processing unit configured to determine a contextual scene and/or an emotional tendency based on keywords in the live chat content
  • a second calculating unit configured to calculate a similarity between the context scene and/or the sentiment orientation and the label text or the index word of the expression in the input candidate pool
  • the second matching unit is configured to match the expression in the input candidate pool with the highest similarity to the context scene and/or the sentiment tendency for the first user to select.
  • a terminal comprising an expression input device as described above.
  • a computer readable storage medium storing computer executable instructions that, when executed by a processor, implement the above described expression input method.
  • FIG. 1 is a schematic flow chart of an expression input method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of step S102 in FIG. 1;
  • FIG. 3 is a first schematic flow chart of step S104 in FIG. 1;
  • FIG. 4 is a second schematic flow chart of step S104 in FIG. 1;
  • FIG. 5 is a schematic structural diagram of an expression input device according to an embodiment of the present invention.
  • an embodiment of the present invention provides an expression input method, including steps S101 to S104:
  • Step S101 Acquire a session record of the instant messaging application.
  • the session record records the chat content of the first user and other users.
  • the session record may be the chat content between the first user and a single user, or the group chat content between the first user and a plurality of users. It is worth noting that, in order to ensure the timeliness and validity of the session record, the session record within a preset time period preceding the current time should be selected.
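  The preset-time-window selection above can be pictured with a minimal Python sketch. The message-dictionary shape, field names, and 600-second window are illustrative assumptions, not details from the patent.

```python
import time

def recent_session_record(messages, window_seconds=600, now=None):
    """Keep only messages sent within a preset time window before 'now',
    so the analysis reflects the current conversation state."""
    now = time.time() if now is None else now
    return [m for m in messages if now - m["timestamp"] <= window_seconds]

# Toy session record with fabricated timestamps (in seconds).
record = [
    {"user": "A", "text": "How was the beach?", "timestamp": 1000},
    {"user": "B", "text": "Wonderful trip!", "timestamp": 4000},
]
# Only the message sent 300 seconds before 'now' survives the window.
recent = recent_session_record(record, window_seconds=600, now=4300)
```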
  • Step S102 analyzing the session record to obtain the sentiment orientation information of the first user.
  • By analyzing the session record, the first user's sentiment orientation information can be obtained; that is, the sentiment orientation information can be determined more accurately in combination with the actual chat context scene and the first user's subjective views or feelings.
  • Relevant natural language semantic analysis technology enables a machine to roughly understand human language, so semantic analysis of the session record is technically feasible.
  • Step S103 Put an expression in the expression library that matches the emotion tendency information of the first user into the input candidate pool.
  • The expression library stores the system emoticons and all the expressions downloaded or collected by the first user. Placing the expressions matching the first user's sentiment orientation information into the input candidate pool reduces the number of expressions in the pool; even when the user searches for an expression manually, the search range is narrowed and search efficiency improved. It is worth pointing out that the user can still ignore the input candidate pool and find the desired expression in the original expression library.
  • Step S104 Real-time acquisition of the real-time chat content input by the first user through the instant communication application, and matching the real-time chat content from the input candidate pool to obtain the corresponding candidate expression for the first user to select.
  • Since the input candidate pool already provides the expressions most relevant to the first user's emotional tendency, the user can browse and select directly in the input candidate pool, or can select a system-recommended expression by inputting real-time chat content; the real-time chat content includes at least one index word to facilitate expression matching.
  • Matching corresponding candidate expressions according to the input chat content combines the context scene with the user's emotional tendency, so the expression the user currently needs can be found quickly, saving time and effort and improving the user experience.
  • the above embodiment 1 briefly introduces the expression input method of the embodiment of the present invention.
  • the following embodiment will introduce the expression input method based on the above embodiment and the application scenario.
  • step S102 includes steps S201 to S203:
  • Step S201 Generate corresponding text data according to the session record.
  • Semantic analysis is mainly performed on text data, but the session record may contain emoticons or voice messages, so it is necessary to convert the session record entirely into text data in order to perform accurate semantic analysis.
  • an emoticon exists in the session record
  • the emoticon in the session record is converted into a corresponding tag text; and the text data is generated according to the converted session record.
  • the label text refers to a keyword or phrase used to represent the theme feature of the emoticon when the emoticon is stored in the terminal.
  • the voice content exists in the session record, the voice content is identified to obtain the corresponding text; the voice content in the session record is converted into the corresponding text, and the corresponding text data of the session record is generated.
  • After the emoticons and voice content in the session record are converted into text, all of the content is converted to text data.
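  Step S201 above (emoticons to tag text, voice to recognized text) might be sketched as follows. The emoticon-to-tag table and the stubbed speech recognizer are hypothetical; a real terminal would use the tag text stored with each emoticon and an actual recognition engine.

```python
# Hypothetical emoticon-to-tag-text table (assumed, not from the patent).
EMOTICON_TAGS = {":-)": "[smile]", ":-(": "[sad]"}

def record_to_text(messages, recognize_speech=None):
    """Flatten a session record into plain text data for semantic
    analysis: emoticons become tag text, voice messages become text."""
    parts = []
    for m in messages:
        if m.get("voice") is not None:
            # Speech recognition is assumed to exist; stubbed via a callback.
            parts.append(recognize_speech(m["voice"]) if recognize_speech else "")
        else:
            text = m["text"]
            for emoticon, tag in EMOTICON_TAGS.items():
                text = text.replace(emoticon, tag)
            parts.append(text)
    return " ".join(p for p in parts if p)

text = record_to_text(
    [{"text": "great day :-)"}, {"voice": b"\x00", "text": ""}],
    recognize_speech=lambda v: "happy birthday",
)
```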
  • Step S202 Perform semantic analysis on the text data to obtain scene features and/or emotion features of the current session.
  • Step S202 includes:
  • Semantic analysis is performed on the chat content input by the first user in the text data to obtain an emotional feature of the first user, wherein different emotional features have different emotional feature values.
  • Semantic analysis of the session record's text data yields the scene represented by the context. For example, suppose the user and a friend are discussing travel; in general, a travel experience is relaxed and pleasant, so the scene key value of this broad scene can be determined. It should be noted that different scenes correspond to different scene key values. For example, scenes can be roughly divided into positive, negative, and neutral: the key values of positive, neutral, and negative scenes decrease in that order, and positive scenes can be further divided into different levels, which are reflected in the magnitude of the scene key value.
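  A toy illustration of scene key values follows; the topic words and numeric values are fabricated for illustration, since the patent leaves the actual lexicon and scale to a pre-built model stored on the terminal.

```python
# Illustrative scene key values: positive > neutral > negative.
SCENE_KEY_VALUES = {"positive": 3, "neutral": 2, "negative": 1}
# Hypothetical scene lexicon mapping topic words to scene polarity.
SCENE_WORDS = {"travel": "positive", "vacation": "positive",
               "funeral": "negative", "meeting": "neutral"}

def scene_key_value(text):
    """Return the (scene, key value) suggested by the first topic word
    found in the text; default to neutral when nothing matches."""
    for word, scene in SCENE_WORDS.items():
        if word in text.lower():
            return scene, SCENE_KEY_VALUES[scene]
    return "neutral", SCENE_KEY_VALUES["neutral"]
```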
  • Semantic analysis of the chat content input by the first user in the text data reveals the user's subjective feelings.
  • In a conversation, the user usually posts words expressing personal views or attitudes, including emoji, Internet slang, commendatory or derogatory terms, and words of approval or objection.
  • The emotional feature value of a feature representing a positive emotion is higher than that of a feature representing a negative emotion, and positive emotions can also be divided into different levels, which are reflected in the magnitude of the emotional feature value.
  • Step S203 Determine the emotional tendency information of the first user according to the scene feature and/or the emotional feature of the current session.
  • If analyzing the session record yields only scene features, the user's sentiment tendency information can be determined directly from the scene features.
  • If analyzing the session record yields only emotional features, the user's sentiment tendency information can be determined directly from the emotional features.
  • If both the scene features of the current session and the user's emotional features are obtained, the two together determine the user's sentiment tendency information.
  • After the scene feature of the current session is determined and the corresponding scene key value is obtained, the user's emotional text in the context can be analyzed to adjust the current scene key value and obtain accurate sentiment tendency information. For example, an emotional feature representing a positive emotion may increase the scene key value, and an emotional feature representing a negative emotion may reduce it, thereby yielding accurate sentiment tendency information.
  • For example, if semantic analysis identifies a seaside-travel scene, the scene library's convention usually assigns it a pleasant emotional characteristic; the emotional value is then strengthened or weakened according to the subjective attitude expressed in the context. If the user agrees with the view of seaside travel in the context, the emotional value is increased; if the user expresses objection, the emotional value is reduced. The user's emotional tendency for this context is thus finally determined.
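  The strengthen/weaken adjustment described above can be sketched as a simple word-count heuristic. The word lists and unit increments below are assumptions for illustration only; the patent assumes a trained model.

```python
# Hypothetical emotional-word lists; a real system would use a trained model.
POSITIVE_WORDS = {"love", "agree", "wonderful", "great"}
NEGATIVE_WORDS = {"hate", "disagree", "boring", "awful"}

def adjust_key_value(base_value, user_text):
    """Raise the scene key value for each positive emotional feature in
    the user's own messages, and lower it for each negative one."""
    words = set(user_text.lower().split())
    return base_value + len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)

# A positive scene (base value 3) with agreeing, positive words is strengthened.
adjusted = adjust_key_value(3, "I agree the beach was wonderful")
```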
  • The emotional feature values of different scenes and of different modal and emotional words are stored in the terminal through prior modeling, so that the terminal can quickly read them during scene analysis and sentiment analysis.
  • Provided that the context text data does not violate the user's privacy settings, it can also be uploaded to the cloud for big-data analysis.
  • the expression matching the first user's sentiment orientation information in the expression database is placed in the input candidate pool.
  • Before the expressions matching the first user's sentiment orientation information are placed into the input candidate pool, the expression input method further includes: assigning respective scene key values and/or emotional feature values to the expressions in the expression library.
  • Each expression in the expression library is classified in advance, for example into joy, anger, worry, thought, sadness, fear, and shock. Within each category, different expressions can be given the same or different scene key values and/or emotional feature values.
  • The expressions matching the user's sentiment tendency information can then be placed into the input candidate pool. If the user wants to actively find an expression, the user can select among the expressions matching the contextual emotional features in the input candidate pool, without having to search through the entire expression library, which greatly shortens the search time.
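  A minimal sketch of step S103: an expression library whose entries carry a category and a feature value, filtered into the input candidate pool by closeness to the user's sentiment tendency value. All entries, values, and the tolerance are invented for illustration.

```python
# Hypothetical expression library; labels, categories, and values are invented.
EXPRESSION_LIBRARY = [
    {"label": "beaming smile", "category": "joy", "value": 5},
    {"label": "thumbs up", "category": "joy", "value": 4},
    {"label": "crying face", "category": "sadness", "value": 1},
]

def build_candidate_pool(library, tendency_value, tolerance=1):
    """Keep only expressions whose feature value is within 'tolerance'
    of the user's sentiment tendency value."""
    return [e for e in library if abs(e["value"] - tendency_value) <= tolerance]

# For a strongly positive tendency, only the joyful expressions remain.
pool = build_candidate_pool(EXPRESSION_LIBRARY, tendency_value=5)
```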
  • Step S104 shown in FIG. 3 includes steps S301 to S303:
  • Step S301 Acquire keywords in real-time chat content input by the first user through the instant messaging application in real time.
  • the input candidate pool provides the user with the expression most relevant to the current conversation scene feature and the user emotion feature, and the user can match the corresponding expression by inputting the label text or index word of the expression to achieve fast search.
  • the user can also match the corresponding expression by inputting keywords that are related or similar to the expression tag text or index words.
  • Step S302 Calculate the similarity between the keyword and the tag text or the index word of the expression in the input candidate pool.
  • The similarity to the label text or index word can be calculated as the distance between word vectors in a semantic analysis method. Obviously, if the input keyword is exactly identical to the expression's label or index word, the word-vector distance is zero; if the characters are not identical but the semantics are similar, the word-vector distance is still small and the matching degree correspondingly high.
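  The word-vector distance idea might look like the following toy sketch. The two-dimensional vectors are fabricated purely for illustration; a real system would use trained embeddings.

```python
import math

# Fabricated 2-D word vectors for illustration only.
VECTORS = {
    "happy": (0.9, 0.1),
    "joyful": (0.85, 0.15),
    "birthday": (0.2, 0.8),
}

def word_distance(w1, w2):
    """Euclidean distance between word vectors: zero for identical words,
    small for semantically similar ones."""
    return math.dist(VECTORS[w1], VECTORS[w2])
```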
  • Step S303 Match the expression with the highest similarity with the keyword in the input candidate pool for the first user to select.
  • Since expression input cannot rely on simple exact keyword matching alone, it is necessary to calculate the similarity between the keyword input by the user and the label text or index words of all the expressions in the input candidate pool, and to match the expression with the highest similarity to the keyword for the first user to select, so that the most appropriate expression is determined for input.
  • the matching expressions may be one or more.
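  Since the matched expressions may be one or more, step S303 can be read as returning every expression tied for the highest similarity. A sketch (the score dictionary shape is an assumption):

```python
def top_matches(similarity_by_label):
    """Return all expressions sharing the highest similarity score;
    the result may contain one or more expressions."""
    if not similarity_by_label:
        return []
    best = max(similarity_by_label.values())
    return [label for label, s in similarity_by_label.items() if s == best]
```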
  • step S104 further includes steps S401 to S404:
  • Step S401 Acquire keywords in real-time chat content input by the first user through the instant messaging application in real time.
  • the input candidate pool provides the user with the expression most relevant to the current conversation scene feature and the user emotion feature, and the user can match the corresponding expression by inputting the label text or index word of the expression to achieve fast search.
  • the user can also match the corresponding expression by inputting a scene keyword related to or similar to the expression label text or the index word.
  • Step S402 Determine a context scene and/or an emotional tendency according to keywords in the real-time chat content.
  • The keywords in the real-time chat content are first used to determine the context scene and/or the emotional tendency they reflect.
  • Step S403 Calculate the similarity between the context scene and/or the sentiment orientation and the label text or index word of the expression in the input candidate pool.
  • The context scene and/or the user's emotional tendency reflected by the keywords then needs to be compared for similarity with the label text or index words of the expressions in the input candidate pool.
  • The similarity can be calculated as the distance between word vectors in a semantic analysis method. Obviously, if the words representing the scene features and/or the user's emotional features are identical to the expression's label text or index words, the word-vector distance is zero; if the characters are not identical but the semantics are similar, the word-vector distance is usually still small and the matching degree correspondingly high.
  • Step S404 Match the expressions in the input candidate pool with the highest similarity to the context scene and/or the sentiment tendency for the first user to select.
  • For example, suppose the index word of an expression is "Happy Birthday".
  • With simple exact matching, the user must type the complete phrase before the expression appears for the user to select; with similarity matching, related input can surface the expression earlier.
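  The "Happy Birthday" example contrasts exact matching with looser similarity matching. A toy sketch of the two behaviors follows; the word-overlap score and threshold are assumptions standing in for the word-vector similarity described above.

```python
def prefix_match(query, index_word):
    """Exact-style matching: the expression only surfaces once the typed
    query is a prefix of the full index word."""
    return len(query) > 0 and index_word.lower().startswith(query.lower())

def similarity_match(query, index_word, threshold=0.3):
    """Looser matching on shared words, so related input such as
    'birthday' already surfaces the expression."""
    q = set(query.lower().split())
    i = set(index_word.lower().split())
    return bool(i) and len(q & i) / len(i) >= threshold
```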
  • all expressions matching the user's emotional tendency are placed in the input candidate pool to reduce the number of candidate expressions and increase the input speed;
  • Matching corresponding candidate expressions according to the input real-time chat content combines the context scene with the user's emotional tendency, so the expression the user currently needs can be found quickly, saving time and effort and improving the user experience.
  • In addition to instant messaging applications and the real-time chat process, the expression input method provided by the embodiment of the present invention can also be applied to scenarios such as intelligently changing a chat background image or a reading background image, so as to improve the user experience.
  • Embodiment 1 and Embodiment 2 above describe the expression input method of the embodiments of the present invention in combination with different application scenarios.
  • The following embodiment introduces the corresponding device with reference to the accompanying drawings.
  • an expression input device including:
  • the obtaining module 51 is configured to obtain a session record of the instant messaging application, where the chat content of the first user and other users is recorded in the session record;
  • the analyzing module 52 is configured to analyze the session record to obtain the sentiment orientation information of the first user
  • the first processing module 53 is configured to: put an expression in the expression library that matches the sentiment orientation information of the first user into the input candidate pool;
  • the second processing module 54 is configured to acquire real-time chat content input by the first user through the instant messaging application in real time, and obtain a corresponding candidate expression from the input candidate pool according to the real-time chat content for the first user to select.
  • the analysis module includes:
  • Generating a unit configured to generate corresponding text data according to the session record
  • the analyzing unit is configured to perform semantic analysis on the text data to obtain scene features and/or emotional features of the current session;
  • the first processing unit is configured to determine the sentiment orientation information of the first user according to the scene feature and/or the emotional feature of the current session.
  • the analysis unit includes:
  • the first analysis sub-unit is configured to perform semantic analysis on the conversation subject content of the text data to obtain a scene feature of the current session, wherein different scene features have different scene key values; and/or,
  • the second analysis subunit is configured to perform semantic analysis on the chat content input by the first user in the text data to obtain an emotional feature of the first user, wherein different emotional features have different emotional feature values.
  • the generating unit includes:
  • a first conversion subunit configured to convert an emoticon in the session record into a corresponding label text when an emoticon exists in the session record
  • the first generation subunit is configured to generate text data according to the converted session record.
  • the generating unit further includes:
  • the second conversion subunit is configured to: when the voice content exists in the session record, identify the voice content to obtain a corresponding text;
  • the second generation subunit is configured to convert the voice content in the session record into corresponding text, and generate corresponding text data of the session record.
  • the expression input device further includes:
  • the pre-processing module is configured to: before the first processing module puts the expressions matching the sentiment orientation information of the first user in the expression library into the input candidate pool, assign respective scene key values and/or emotional feature values to the expressions in the expression library.
  • the second processing module includes:
  • a first acquiring unit configured to acquire, in real time, a keyword in the real-time chat content input by the first user through the instant messaging application;
  • a first calculating unit configured to calculate a similarity between the keyword and the label text or the index word of the expression in the input candidate pool
  • a first matching unit configured to match the expression in the input candidate pool with the highest similarity to the keyword for the first user to select
  • a second acquiring unit configured to acquire, in real time, a keyword in the real-time chat content input by the first user through the instant messaging application
  • a second processing unit configured to determine a contextual scene and/or an emotional tendency based on keywords in the live chat content
  • a second calculating unit configured to calculate a similarity between the context scene and/or the sentiment orientation and the label text or the index word of the expression in the input candidate pool
  • the second matching unit is configured to match the expression in the input candidate pool with the highest similarity to the context scene and/or the sentiment tendency for the first user to select.
  • the device is a device corresponding to the above-mentioned expression input method, and all the implementation manners in the foregoing method embodiments are applicable to the embodiment of the device, and the same technical effects can be achieved.
  • a terminal comprising the expression input device as described above.
  • The terminal is a smart device capable of installing an instant messaging application, such as a smartphone, a tablet, or a laptop.
  • a computer readable storage medium storing computer executable instructions that, when executed by a processor, implement the expression input method.
  • All or part of the steps of the above embodiments may also be implemented using integrated circuits. These steps may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module.
  • the devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, which may be centralized on a single computing device or distributed over a network of multiple computing devices.
  • When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer readable storage medium.
  • the above mentioned computer readable storage medium may be a read only memory, a magnetic disk or an optical disk or the like.
  • the solution of the embodiment of the present invention determines the sentiment tendency of the current user by analyzing the obtained session record, and puts all the expressions matching the user's sentiment tendency into the input candidate pool, thereby reducing the number of candidate expressions and increasing the input speed; According to the real-time chat content input by the user, the corresponding candidate expressions are matched, and the context scene and the user's emotional tendency are combined to quickly find the expression that the user currently needs, saving time and effort, and improving the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Machine Translation (AREA)
  • Information Transfer Between Computers (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

Disclosed are a method and device for inputting an expression, a terminal, and a computer readable storage medium. The method comprises: obtaining a session record of an instant communication application, chat content of a first user and another user being recorded in the session record; analysing the session record, and obtaining emotional tendency information of the first user; placing expressions within an expression library which match the emotional tendency information of the first user into an input candidate pool; obtaining in real time real-time chat content inputted by the first user via the instant communication application, and according to the real-time chat content, matching and obtaining from within the input candidate pool a corresponding candidate expression for the first user to select.

Description

Expression input method, device, terminal, and computer-readable storage medium

Technical Field

The present application relates to, but is not limited to, the field of communications, and in particular to an expression input method, device, terminal, and computer-readable storage medium.

Background

When a user communicates through a mobile terminal or an Internet device, expressive symbols or pictures, such as character emoticons, Emoji, expression pictures, and animations, are often interspersed to make the interaction more engaging. At present there are two ways to input or send such expressions: the user manually browses the expression libraries of different themes and then selects one to send, or the user types a matching string or index word and the matched expression picture is input and sent.

However, searching one by one through the large number of expressions in a library, or having to type the exact matching keyword, is inconvenient. In a group chat in particular, by the time the desired expression picture is found, several new messages may have arrived, and sending it then may be out of place. In the related art the expression lookup process is time-consuming and laborious, which inconveniences users, and the user experience is poor.

Summary of the Invention

The present invention provides an expression input method, device, terminal, and computer-readable storage medium, solving the related-art problem that expression input is time-consuming and laborious and the user experience is poor.
An expression input method includes:

obtaining a session record of an instant messaging application, the session record recording the chat content of a first user with other users;

analyzing the session record to obtain sentiment tendency information of the first user;

placing the expressions in an expression library that match the first user's sentiment tendency information into an input candidate pool; and

obtaining, in real time, the chat content input by the first user through the instant messaging application, and matching it against the input candidate pool to obtain corresponding candidate expressions for the first user to select.
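The four claimed steps can be sketched end to end as follows. This is a toy illustration, not the claimed implementation: the function names, the word lists used for sentiment analysis, and the sentiment labels on the expressions are all invented assumptions.

```python
# Hypothetical end-to-end sketch of the claimed pipeline:
# S101 obtain session record -> S102 analyze sentiment ->
# S103 build input candidate pool -> S104 match candidates in real time.

EXPRESSION_LIBRARY = [
    {"label": "happy birthday", "sentiment": "positive"},
    {"label": "crying", "sentiment": "negative"},
    {"label": "thumbs up", "sentiment": "positive"},
]

def analyze_sentiment(session_record):
    """Toy stand-in for the semantic analysis of step S102."""
    positive = {"happy", "great", "fun", "birthday"}
    negative = {"sad", "angry", "tired"}
    words = " ".join(session_record).lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score >= 0 else "negative"

def build_candidate_pool(library, tendency):
    """Step S103: keep only expressions matching the user's tendency."""
    return [e for e in library if e["sentiment"] == tendency]

def match_candidates(pool, live_input):
    """Step S104 (simplified): token-in-label match against each entry."""
    return [e["label"] for e in pool
            if any(tok in e["label"] for tok in live_input.lower().split())]

record = ["We had a great time", "happy to see you"]
tendency = analyze_sentiment(record)
pool = build_candidate_pool(EXPRESSION_LIBRARY, tendency)
print(match_candidates(pool, "birthday"))   # → ['happy birthday']
```

Because the pool is filtered by sentiment before real-time matching, the final lookup only scans the expressions already consistent with the conversation's mood, which is the source of the claimed speedup.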
Optionally, analyzing the session record to obtain the sentiment tendency information of the first user includes:

generating corresponding text data from the session record;

performing semantic analysis on the text data to obtain scene features and/or emotional features of the current session; and

determining the sentiment tendency information of the first user according to the scene features and/or emotional features of the current session.

Optionally, performing semantic analysis on the text data to obtain the scene features and/or emotional features of the current session includes:

performing semantic analysis on the conversation topic content of the text data to obtain the scene features of the current session, where different scene features have different scene key values; and/or

performing semantic analysis on the chat content input by the first user in the text data to obtain the emotional features of the first user, where different emotional features have different emotional feature values.

Optionally, generating corresponding text data from the session record includes:

when an expression picture exists in the session record, converting the expression picture in the session record into its corresponding tag text; and

generating the text data from the converted session record.

Optionally, generating corresponding text data from the session record includes:

when voice content exists in the session record, recognizing the voice content to obtain corresponding text; and

converting the voice content in the session record into the corresponding text to generate the text data of the session record.

Optionally, before the step of placing the expressions in the expression library that match the first user's sentiment tendency information into the input candidate pool, the method further includes: assigning respective scene key values and/or emotional feature values to the expressions in the expression library.

Optionally, obtaining in real time the chat content input by the first user through the instant messaging application, and matching it against the input candidate pool to obtain corresponding candidate expressions for the first user to select, includes:

obtaining, in real time, keywords in the chat content input by the first user through the instant messaging application;

calculating the similarity between the keywords and the tag text or index words of the expressions in the input candidate pool; and

matching the expressions in the input candidate pool with the highest similarity to the keywords for the first user to select; or

obtaining, in real time, keywords in the chat content input by the first user through the instant messaging application;

determining a context scene and/or sentiment tendency according to the keywords in the real-time chat content;

calculating the similarity between the context scene and/or sentiment tendency and the tag text or index words of the expressions in the input candidate pool; and

matching the expressions in the input candidate pool with the highest similarity to the context scene for the first user to select.
An expression input device includes:

an obtaining module, configured to obtain a session record of an instant messaging application, the session record recording the chat content of a first user with other users;

an analysis module, configured to analyze the session record to obtain sentiment tendency information of the first user;

a first processing module, configured to place the expressions in an expression library that match the first user's sentiment tendency information into an input candidate pool; and

a second processing module, configured to obtain, in real time, the chat content input by the first user through the instant messaging application, and to match it against the input candidate pool to obtain corresponding candidate expressions for the first user to select.

Optionally, the analysis module includes:

a generating unit, configured to generate corresponding text data from the session record;

an analyzing unit, configured to perform semantic analysis on the text data to obtain scene features and/or emotional features of the current session; and

a first processing unit, configured to determine the sentiment tendency information of the first user according to the scene features and/or emotional features of the current session.

Optionally, the analyzing unit includes:

a first analysis subunit, configured to perform semantic analysis on the conversation topic content of the text data to obtain the scene features of the current session, where different scene features have different scene key values; and/or

a second analysis subunit, configured to perform semantic analysis on the chat content input by the first user in the text data to obtain the emotional features of the first user, where different emotional features have different emotional feature values.

Optionally, the generating unit includes:

a first conversion subunit, configured to convert an expression picture in the session record into its corresponding tag text when an expression picture exists in the session record; and

a first generating subunit, configured to generate the text data from the converted session record.

Optionally, the generating unit further includes:

a second conversion subunit, configured to recognize the voice content to obtain corresponding text when voice content exists in the session record; and

a second generating subunit, configured to convert the voice content in the session record into the corresponding text to generate the text data of the session record.

Optionally, the expression input device further includes:

a preprocessing module, configured to assign respective scene key values and/or emotional feature values to the expressions in the expression library before the first processing module places the expressions matching the first user's sentiment tendency information into the input candidate pool.

Optionally, the second processing module includes:

a first obtaining unit, configured to obtain, in real time, keywords in the chat content input by the first user through the instant messaging application;

a first calculating unit, configured to calculate the similarity between the keywords and the tag text or index words of the expressions in the input candidate pool; and

a first matching unit, configured to match the expressions in the input candidate pool with the highest similarity to the keywords for the first user to select; or

a second obtaining unit, configured to obtain, in real time, keywords in the chat content input by the first user through the instant messaging application;

a second processing unit, configured to determine a context scene and/or sentiment tendency according to the keywords in the real-time chat content;

a second calculating unit, configured to calculate the similarity between the context scene and/or sentiment tendency and the tag text or index words of the expressions in the input candidate pool; and

a second matching unit, configured to match the expressions in the input candidate pool with the highest similarity to the context scene for the first user to select.

A terminal includes the expression input device described above.

A computer-readable storage medium stores computer-executable instructions that, when executed by a processor, implement the expression input method described above.
The beneficial effects of the embodiments of the present invention are as follows:

By analyzing the obtained session record to determine the current user's sentiment tendency, all expressions matching that tendency are placed into an input candidate pool, which reduces the number of candidate expressions and increases input speed. Corresponding candidate expressions are then matched against the chat content the user inputs in real time, combining the context scene with the user's sentiment tendency, so that the expression the user currently needs can be found quickly, saving time and effort and improving the user experience.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of an expression input method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of step S102 in FIG. 1;

FIG. 3 is a first schematic flowchart of step S104 in FIG. 1;

FIG. 4 is a second schematic flowchart of step S104 in FIG. 1;

FIG. 5 is a schematic structural diagram of an expression input device according to an embodiment of the present invention.
Embodiments of the Invention

The embodiments of the present invention are described below with reference to the accompanying drawings. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another in any way.
Embodiment 1

As shown in FIG. 1, an embodiment of the present invention provides an expression input method, including steps S101 to S104:

Step S101: Obtain a session record of an instant messaging application.

The session record records the chat content of a first user with other users. It may be the chat content between the first user and a single user, or the group-chat content between the first user and multiple users. It is worth noting that, to ensure the session record is current and valid, the session record within a preset time period preceding the current moment should be selected.

Step S102: Analyze the session record to obtain sentiment tendency information of the first user.

By performing semantic analysis on the session record, the first user's sentiment tendency information can be obtained; that is, by combining the context scene of the actual chat with the first user's subjective views or feelings, the first user's sentiment tendency information can be determined fairly accurately. With current natural-language semantic-analysis techniques, machines can roughly understand human language, so semantic analysis of the session record is technically feasible.

Step S103: Place the expressions in the expression library that match the first user's sentiment tendency information into the input candidate pool.

The expression library stores the system expressions and all expressions the first user has downloaded or collected. Placing the expressions that match the first user's sentiment tendency information into the input candidate pool reduces the number of expressions in the pool, so that even when the user searches manually, the search range is narrowed and search efficiency improves. It is worth noting that the user may ignore the input candidate pool and look up the desired expression in the original expression library.

Step S104: Obtain, in real time, the chat content input by the first user through the instant messaging application, and match it against the input candidate pool to obtain corresponding candidate expressions for the first user to select.

The input candidate pool already provides the expressions most relevant to the first user's sentiment tendency. The user may browse and select directly in the pool, or may select the expressions recommended by the system by typing chat content in real time; in the latter case the chat content includes at least one index word to facilitate expression matching.

By analyzing the obtained session record to determine the current user's sentiment tendency, all expressions matching that tendency are placed into the input candidate pool, reducing the number of candidate expressions and increasing input speed. Corresponding candidate expressions are then matched against the chat content the user inputs in real time, combining the context scene with the user's sentiment tendency, so the expression the user currently needs can be found quickly, saving time and effort and improving the user experience.
Embodiment 2

Embodiment 1 above briefly introduced the expression input method of the embodiments of the present invention. Building on Embodiment 1, this embodiment introduces the method in combination with application scenarios.

As shown in FIG. 2, step S102 includes steps S201 to S203:

Step S201: Generate corresponding text data from the session record.

Semantic analysis is mainly performed on text data, but the session record may contain expression pictures, voice messages, and the like, so the session record must be converted into pure text before accurate semantic analysis can be performed. When an expression picture exists in the session record, it is converted into its corresponding tag text, and the text data is generated from the converted session record. Here, the tag text is the keyword or phrase stored with an expression picture on the terminal to represent the picture's thematic features. When voice content exists in the session record, the voice content is recognized to obtain the corresponding text; the voice content in the session record is converted into that text, and the text data of the session record is generated. After both the expression pictures and the voice content in the session record are converted into text, the entire content of the session record has been converted into text data.
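The normalization just described can be sketched as follows. The message schema and the stubbed speech recognizer are assumptions for illustration; the patent does not specify a data format or an ASR engine.

```python
# Illustrative normalization of a mixed session record (step S201):
# emoticon entries are replaced by their stored tag text, and voice
# entries by a (stubbed) speech-recognition result, yielding plain text.

def recognize_speech(audio_ref):
    # Stand-in for a real speech recognizer; mapping is invented.
    return {"voice_001": "see you at the beach"}.get(audio_ref, "")

def to_text(message):
    if message["type"] == "emoticon":
        return message["tag_text"]              # tag text stored with the picture
    if message["type"] == "voice":
        return recognize_speech(message["audio"])
    return message["text"]

def session_to_text(session_record):
    return " ".join(to_text(m) for m in session_record)

record = [
    {"type": "text", "text": "Any plans for the holiday?"},
    {"type": "voice", "audio": "voice_001"},
    {"type": "emoticon", "tag_text": "sunglasses"},
]
print(session_to_text(record))
# → "Any plans for the holiday? see you at the beach sunglasses"
```

Once every message type collapses to text, the downstream semantic analysis of step S202 can treat the whole record uniformly.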
Step S202: Perform semantic analysis on the text data to obtain scene features and/or emotional features of the current session.

The main role of expressions in a chat is to liven up the atmosphere and express the user's subjective feelings, so a suitable expression depends on the scene and on the user's own feelings. To ensure the recommended expression is the one the user needs, semantic analysis is performed on the text data of the session record to obtain the scene features and/or emotional features of the current session. Step S202 includes:

performing semantic analysis on the conversation topic content of the text data to obtain the scene features of the current session, where different scene features have different scene key values; and/or,

performing semantic analysis on the chat content input by the first user in the text data to obtain the emotional features of the first user, where different emotional features have different emotional feature values.

Semantic analysis of the text data of the session record yields the context expressed by the conversation. For example, if the user and friends are chatting about travel, travel generally feels relaxed and pleasant, so the scene key value of the overall scene can be determined. It should be noted that different scenes correspond to different scene key values. For instance, scenes may be roughly classified as positive, negative, or neutral, with positive scenes having higher key values and negative scenes lower ones, so that positive, neutral, and negative scene key values decrease in that order. Positive scenes may themselves be divided into different levels, reflected in the magnitude of the scene key value.

Semantic analysis of the text data of the session record also reveals the user's subjective feelings. During a chat, users typically post text expressing their own views or opinions, including emoticons, Internet slang, commendatory or derogatory words, words of agreement or disagreement, and words directly expressing admiration and happiness or depression and sadness. From these the user's emotional features are obtained, where different emotional features have different emotional feature values. For example, the emotional feature value of a feature representing a positive emotion is higher than that of a feature representing a negative emotion, and positive emotions may themselves be divided into different levels, reflected in the magnitude of the emotional feature value.
Step S203: Determine the sentiment tendency information of the first user according to the scene features and/or emotional features of the current session.

When analysis of the session record yields only scene features, the user's sentiment tendency information can be determined directly from the scene features. When it yields only emotional features, the information can be determined directly from the emotional features. When analysis yields both the scene features of the current session and the user's emotional features, the two are combined to determine the user's sentiment tendency information. After semantic analysis of the session record establishes the context scene, the scene features of the current session are determined and the corresponding scene key value is obtained; the user's emotional text in the context is then analyzed to adjust the current scene key value, yielding accurate sentiment tendency information. For example, emotional features representing positive emotions raise the scene key value, while those representing negative emotions lower it. Suppose the user and friends are discussing a seaside holiday: semantic analysis identifies the seaside-travel scene, which by the conventions of the scene library is assigned a pleasant emotional baseline, and the subjective attitudes expressed in the context then strengthen or weaken the emotional value. If the user agrees with the views on the seaside trip expressed in the context, the emotional value is increased; if the user expresses disagreement, it is decreased; and the user's sentiment tendency expressed by the context is finally determined.

It is worth noting that the emotional feature values of different scenes and of different modal and emotional words are modeled in advance and stored on the terminal, so the terminal can quickly perform scene analysis and sentiment analysis. Provided the user's privacy settings are not violated, the contextual text data may also be uploaded to the cloud for big-data analysis.

After accurate sentiment tendency information is obtained, the expressions in the expression library that match the first user's sentiment tendency information are placed into the input candidate pool. Before this, the expression input method further includes: assigning respective scene key values and/or emotional feature values to the expressions in the expression library. Each expression in the library is classified in advance, for example into seven categories such as joy, anger, worry, contemplation, sadness, fear, and surprise; within each category, different expressions may be assigned the same or different scene key values and/or emotional feature values. Once accurate sentiment tendency information is obtained, the expressions matching it are placed into the input candidate pool, so that if the user wants to find an expression actively, an expression fitting the contextual emotional features can be selected quickly from the pool without searching through all the expression libraries one by one, greatly shortening the search time.
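The scoring just described, a scene baseline adjusted by the user's emotional features and then used to filter the library into the candidate pool, can be sketched numerically. All key values, the 0 to 10 scale, and the tolerance are invented for illustration; the patent does not fix a scale.

```python
# Hypothetical numeric sketch: the scene key value sets a baseline,
# emotional feature values raise or lower it (agreement raises,
# opposition lowers), and expressions whose assigned value lies near
# the result form the input candidate pool.

SCENE_KEY = {"travel": 7, "work": 5, "funeral": 1}          # 0..10 scale
EMOTION_VALUE = {"agree": +1, "happy": +2, "oppose": -2, "sad": -3}

def sentiment_tendency(scene, emotions):
    score = SCENE_KEY.get(scene, 5)                # neutral default
    score += sum(EMOTION_VALUE.get(e, 0) for e in emotions)
    return min(max(score, 0), 10)                  # clamp to the scale

def candidate_pool(library, tendency, tolerance=2):
    return [name for name, value in library.items()
            if abs(value - tendency) <= tolerance]

library = {"cheering": 9, "smile": 7, "neutral face": 5, "crying": 1}
t = sentiment_tendency("travel", ["happy", "agree"])   # 7 + 2 + 1 -> 10
print(candidate_pool(library, t))   # → ['cheering']
```

With opposing remarks ("oppose", "sad") the same travel scene would score lower, and the pool would shift toward the neutral and negative expressions instead.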
In addition to searching for a desired expression actively, the user may also select an expression recommended by the system by typing chat content in real time; here the chat content includes at least one index word to facilitate expression matching. As shown in FIG. 3, step S104 includes steps S301 to S303:

Step S301: Obtain, in real time, keywords in the chat content input by the first user through the instant messaging application.

The input candidate pool provides the user with the expressions most relevant to the features of the current conversation scene and the user's emotional features. The user can match an expression by typing its tag text or index word, enabling a fast lookup, and can also match an expression by typing keywords related or similar to its tag text or index word.

Step S302: Calculate the similarity between the keywords and the tag text or index words of the expressions in the input candidate pool.

Because the user may not remember an expression's tag text or index word exactly and may type a similar but inaccurate keyword, once the typed keyword is obtained, its similarity to the tag text or index words of the expressions in the input candidate pool must be calculated. The similarity can be computed as the distance between word vectors in a semantic-analysis method: obviously, if the typed keyword is identical to the expression tag or index word, the distance between the word vectors is zero; if the characters are not identical but the semantics are close, the distance between the word vectors is also small and the matching degree is correspondingly high.

Step S303: Match the expressions in the input candidate pool with the highest similarity to the keywords for the first user to select.

To handle the case where the user types a similar but non-matching keyword that simple keyword matching cannot resolve, the similarity between the typed keyword and the tag text or index words of all expressions in the input candidate pool is calculated, and the expressions with the highest similarity are matched for the first user to select, so that a suitable expression is preferred for input. One or more matching expressions may be obtained.
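The patent computes similarity as a distance between word vectors. As a self-contained stand-in that needs no trained embedding, the sketch below ranks candidate-pool entries by character-level similarity using Python's `difflib`; this captures the "close but not exact keyword" case, though a real deployment would use semantic word vectors as the text describes.

```python
# Rank pool expressions against the typed keyword and return the top
# matches. difflib's ratio() is 1.0 for identical strings and decreases
# as the strings diverge, loosely mirroring "small word-vector distance
# means high matching degree".

from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def best_matches(keyword, pool, top_n=1):
    ranked = sorted(pool, key=lambda label: similarity(keyword, label),
                    reverse=True)
    return ranked[:top_n]

pool = ["happy birthday", "happy new year", "good night"]
print(best_matches("birthday", pool))   # → ['happy birthday']
```

Setting `top_n` above 1 returns several candidates, matching the note that one or more expressions may be offered for selection.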
或者,如图4所示,步骤S104还包括步骤S401~S404:Alternatively, as shown in FIG. 4, step S104 further includes steps S401 to S404:
步骤S401:实时获取第一用户通过即时通信应用输入的实时聊天内容中的关键词。Step S401: Acquire keywords in real-time chat content input by the first user through the instant messaging application in real time.
输入候选池中为用户提供了与当前对话场景特征和用户情感特征最相关的表情,用户可通过输入表情的标签文本或索引词来匹配相应的表情,以实现快速查找。用户还可以通过输入与表情标签文本或索引词相关或相近的场景关键词的方式来匹配相应的表情。The input candidate pool provides the user with the expression most relevant to the current conversation scene feature and the user emotion feature, and the user can match the corresponding expression by inputting the label text or index word of the expression to achieve fast search. The user can also match the corresponding expression by inputting a scene keyword related to or similar to the expression label text or the index word.
步骤S402:根据实时聊天内容中的关键词确定上下文场景和/或情感倾向。Step S402: Determine a context scene and/or an emotional tendency according to keywords in the real-time chat content.
由于同一关键词可能对应的上下文场景不同,因此可通过上下文语义分 析先确定关键词所反映的上下文场景和/或用户的情感倾向。Since the same keyword may have different context scenarios, it can be separated by context semantics. The context is determined first and the user's emotional tendency is reflected.
步骤S403:计算上下文场景和/或情感倾向与输入候选池中的表情的标签文本或索引词的相似度。Step S403: Calculate the similarity between the context scene and/or the sentiment orientation and the label text or index word of the expression in the input candidate pool.
Because the user may not correctly remember an expression's label text or index word, and may instead enter other literally unrelated keywords, after the keyword entered by the user is obtained, the similarity between the context scene and/or the user's emotional tendency reflected by that keyword and the label text or index words of the expressions in the input candidate pool needs to be calculated. The similarity can be represented by the distance between word vectors in a semantic analysis method: if the word representing the scene feature and/or the user's emotional feature is identical to the expression's label text or index word, the word-vector distance is zero; if the characters are not identical but the semantics are close, the distance between the word vectors is typically also small, and the degree of match is correspondingly higher.
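The word-vector distance described here can be sketched as a cosine distance over toy embeddings. The three-dimensional vectors below are illustrative stand-ins for trained word vectors, not values from the patent:

```python
import math

def cosine_distance(u, v):
    """0.0 for identical directions; larger values mean less similar words."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Toy 3-d embeddings standing in for trained word vectors (illustrative values).
vectors = {
    "happy":  [0.9, 0.1, 0.0],
    "joyful": [0.8, 0.2, 0.1],  # semantically close to "happy"
    "sad":    [0.0, 0.1, 0.9],
}

# Identical words give (near-)zero distance; semantically close words
# give a smaller distance than unrelated words.
print(cosine_distance(vectors["happy"], vectors["happy"]) < 1e-9)   # -> True
print(cosine_distance(vectors["happy"], vectors["joyful"])
      < cosine_distance(vectors["happy"], vectors["sad"]))          # -> True
```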
Step S404: match the expression in the input candidate pool with the highest similarity to the context scene and/or emotional tendency for the first user to select.
For example, suppose the input candidate pool contains an expression whose index word is "Happy Birthday". Under a strict matching rule, the user would have to type those exact words before the expression could appear for confirmation. By instead calculating, in real time, the similarity between the words being typed and the index word, the emoticon can be matched as soon as the user types "birthday" or a partial phrase such as "happy birth"; alternatively, if the already-acquired context scene indicates that the conversation is about a "birthday", then, combining that scene with the contextual emotional tendency, the expression conveying "Happy Birthday" can be recommended to the user as soon as the user types "happy".
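The "Happy Birthday" example can be sketched as follows; the pool entries and the rule that the context topic breaks ties among several matches are assumptions for illustration:

```python
def recommend(typed: str, pool: dict, context_topic: str = ""):
    """Match the partially typed word against index words; when the context
    scene is known, use it to pick among several matching expressions."""
    hits = [expr for expr, index_word in pool.items() if typed in index_word]
    if context_topic:
        topical = [expr for expr in hits if context_topic in pool[expr]]
        if topical:
            return topical
    return hits

# Hypothetical pool: expression id -> index word.
pool = {"cake.gif": "happy birthday", "nyd.gif": "happy new year"}

print(recommend("birthday", pool))                         # -> ['cake.gif']
print(recommend("happy", pool))                            # -> ['cake.gif', 'nyd.gif']
print(recommend("happy", pool, context_topic="birthday"))  # -> ['cake.gif']
```

Typing "birthday" alone already finds the expression, and once the context scene says the conversation is about a birthday, typing only "happy" narrows the two "happy" candidates down to the birthday one.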
In summary, the obtained session record is analyzed to determine the current user's emotional tendency, and all expressions matching that tendency are placed into the input candidate pool, reducing the number of candidate expressions and increasing input speed. Corresponding candidate expressions are then matched according to the real-time chat content entered by the user, combining the context scene with the user's emotional tendency, so that the expression the user currently needs can be found quickly, saving time and effort and improving the user experience.
In addition, besides being applied during real-time chat, the expression input method provided by the embodiments of the present invention can also be applied to scenarios such as intelligently changing a chat background image or a reading background image, so as to improve the user experience.
Embodiment 3
Embodiment 1 and Embodiment 2 above describe the expression input method of the embodiments of the present invention in detail with reference to different application scenarios. The corresponding device is introduced below with reference to the accompanying drawings.
As shown in FIG. 5, another aspect of the embodiments of the present invention further provides an expression input device, including:
an obtaining module 51, configured to obtain a session record of an instant messaging application, the session record recording chat content between a first user and other users;
an analysis module 52, configured to analyze the session record to obtain emotional tendency information of the first user;
a first processing module 53, configured to place expressions in an expression library that match the emotional tendency information of the first user into an input candidate pool; and
a second processing module 54, configured to acquire, in real time, real-time chat content entered by the first user through the instant messaging application, and match corresponding candidate expressions from the input candidate pool according to the real-time chat content for the first user to select.
The analysis module includes:
a generating unit, configured to generate corresponding text data according to the session record;
an analyzing unit, configured to perform semantic analysis on the text data to obtain scene features and/or emotional features of the current session; and
a first processing unit, configured to determine the emotional tendency information of the first user according to the scene features and/or emotional features of the current session.
The analyzing unit includes:
a first analysis subunit, configured to perform semantic analysis on the conversation topic content of the text data to obtain the scene features of the current session, where different scene features have different scene tone values; and/or
a second analysis subunit, configured to perform semantic analysis on the chat content entered by the first user in the text data to obtain the emotional features of the first user, where different emotional features have different emotional feature values.
The generating unit includes:
a first conversion subunit, configured to, when an emoticon exists in the session record, convert the emoticon in the session record into corresponding label text; and
a first generation subunit, configured to generate the text data according to the converted session record.
The generating unit further includes:
a second conversion subunit, configured to, when voice content exists in the session record, recognize the voice content to obtain corresponding text; and
a second generation subunit, configured to convert the voice content in the session record into the corresponding text and generate the text data corresponding to the session record.
The expression input device further includes:
a preprocessing module, configured to assign respective scene tone values and/or emotional feature values to the expressions in the expression library before the first processing module places the expressions in the expression library that match the emotional tendency information of the first user into the input candidate pool.
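One way the pre-assigned scene tone values and emotional feature values might then be used to build the input candidate pool can be sketched as below; the numeric scales and the tolerance are hypothetical, since the patent assigns values but does not fix concrete ones:

```python
# Hypothetical value scales; the patent does not fix concrete values.
SCENE_TONE = {"celebration": 1.0, "work": 0.0, "condolence": -1.0}
EMOTION_VALUE = {"happy": 1.0, "neutral": 0.0, "sad": -1.0}

def orientation(scene: str, emotion: str) -> tuple:
    """Combine the session's scene feature and the user's emotional feature
    into one orientation used to filter the expression library."""
    return SCENE_TONE.get(scene, 0.0), EMOTION_VALUE.get(emotion, 0.0)

def build_candidate_pool(library: dict, scene: str, emotion: str, tolerance: float = 0.5):
    """Keep only expressions whose pre-assigned tone/emotion values lie close
    to the current orientation (the first processing module's step)."""
    tone, emo = orientation(scene, emotion)
    return [
        name for name, (t, e) in library.items()
        if abs(t - tone) <= tolerance and abs(e - emo) <= tolerance
    ]

# Expression library: name -> (scene tone value, emotional feature value).
library = {"party.gif": (1.0, 1.0), "cry.gif": (-1.0, -1.0), "ok.gif": (0.0, 0.0)}
print(build_candidate_pool(library, "celebration", "happy"))  # -> ['party.gif']
```

Filtering on the pre-assigned values is what shrinks the candidate set before any keyword matching happens, which is the stated source of the input-speed gain.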
The second processing module includes:
a first acquiring unit, configured to acquire, in real time, keywords in the real-time chat content entered by the first user through the instant messaging application;
a first calculating unit, configured to calculate the similarity between the keywords and the label text or index words of the expressions in the input candidate pool; and
a first matching unit, configured to match the expression in the input candidate pool with the highest similarity to the keywords for the first user to select; or
a second acquiring unit, configured to acquire, in real time, keywords in the real-time chat content entered by the first user through the instant messaging application;
a second processing unit, configured to determine a context scene and/or an emotional tendency according to the keywords in the real-time chat content;
a second calculating unit, configured to calculate the similarity between the context scene and/or emotional tendency and the label text or index words of the expressions in the input candidate pool; and
a second matching unit, configured to match the expression in the input candidate pool with the highest similarity to the context scene for the first user to select.
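The cooperation of modules 51 to 54 can be sketched as a small pipeline class; the `analyze` and `match` callables are stand-ins for the analysis and matching logic described above, supplied by the host application, and are not APIs defined by the patent:

```python
class ExpressionInputDevice:
    """Minimal sketch of modules 51-54 wired together."""

    def __init__(self, library, analyze, match):
        self.library = library   # expression id -> emotional-tendency tag
        self.analyze = analyze   # session record text -> tendency tag (module 52)
        self.match = match       # (keyword, pool) -> candidate list   (module 54)
        self.pool = {}

    def on_session(self, session_record: str):
        """Modules 51-53: obtain the record, analyze it, fill the pool."""
        tendency = self.analyze(session_record)
        self.pool = {e: t for e, t in self.library.items() if t == tendency}

    def on_keystroke(self, keyword: str):
        """Module 54: match candidates from the pool for the user to pick."""
        return self.match(keyword, self.pool)

library = {"smile.png": "positive", "cry.png": "negative"}
device = ExpressionInputDevice(
    library,
    analyze=lambda record: "positive" if "great" in record else "negative",
    match=lambda kw, pool: [e for e in pool if kw in e],
)
device.on_session("that is great news")
print(device.on_keystroke("smile"))  # -> ['smile.png']
```

Keeping the analysis and matching steps as injected callables mirrors the module decomposition in FIG. 5: each unit can be replaced independently without touching the pipeline.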
It should be noted that this device corresponds to the expression input method described above; all implementations in the foregoing method embodiments are applicable to the embodiments of this device and can achieve the same technical effects.
According to still another aspect of the embodiments of the present invention, a terminal is further provided, including the expression input device described above. The terminal is a smart device capable of installing an instant messaging application, such as a smartphone, a tablet computer, or a laptop computer.
A computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the expression input method described above.
A person of ordinary skill in the art will understand that all or some of the steps of the above embodiments can be implemented by a computer program flow. The computer program can be stored in a computer-readable storage medium and executed on a corresponding hardware platform (such as a system, apparatus, or device); when executed, it includes one of the steps of the method embodiments or a combination thereof.
Optionally, all or some of the steps of the above embodiments can also be implemented using integrated circuits. These steps can be fabricated as individual integrated circuit modules, or multiple modules or steps among them can be fabricated as a single integrated circuit module.
The devices/functional modules/functional units in the above embodiments can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed over a network formed by multiple computing devices.
When the devices/functional modules/functional units in the above embodiments are implemented in the form of software functional modules and sold or used as stand-alone products, they can be stored in a computer-readable storage medium. The computer-readable storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Industrial applicability
The solution of the embodiments of the present invention analyzes the obtained session record to determine the current user's emotional tendency, and places all expressions matching that tendency into the input candidate pool, reducing the number of candidate expressions and increasing input speed. Corresponding candidate expressions are matched according to the real-time chat content entered by the user, combining the context scene with the user's emotional tendency, so that the expression the user currently needs can be found quickly, saving time and effort and improving the user experience.

Claims (16)

  1. An expression input method, comprising:
    obtaining a session record of an instant messaging application, the session record recording chat content between a first user and other users;
    analyzing the session record to obtain emotional tendency information of the first user;
    placing expressions in an expression library that match the emotional tendency information of the first user into an input candidate pool; and
    acquiring, in real time, real-time chat content entered by the first user through the instant messaging application, and matching corresponding candidate expressions from the input candidate pool according to the real-time chat content for the first user to select.
  2. The expression input method according to claim 1, wherein analyzing the session record to obtain the emotional tendency information of the first user comprises:
    generating corresponding text data according to the session record;
    performing semantic analysis on the text data to obtain scene features and/or emotional features of a current session; and
    determining the emotional tendency information of the first user according to the scene features and/or emotional features of the current session.
  3. The expression input method according to claim 2, wherein performing semantic analysis on the text data to obtain the scene features and/or emotional features of the current session comprises:
    performing semantic analysis on conversation topic content of the text data to obtain the scene features of the current session, wherein different scene features have different scene tone values; and/or
    performing semantic analysis on chat content entered by the first user in the text data to obtain the emotional features of the first user, wherein different emotional features have different emotional feature values.
  4. The expression input method according to claim 2, wherein generating the corresponding text data according to the session record comprises:
    when an emoticon exists in the session record, converting the emoticon in the session record into corresponding label text; and
    generating the text data according to the converted session record.
  5. The expression input method according to claim 4, wherein generating the corresponding text data according to the session record further comprises:
    when voice content exists in the session record, recognizing the voice content to obtain corresponding text; and
    converting the voice content in the session record into the corresponding text, and generating the text data corresponding to the session record.
  6. The expression input method according to claim 1, further comprising: before placing the expressions in the expression library that match the emotional tendency information of the first user into the input candidate pool, assigning respective scene tone values and/or emotional feature values to the expressions in the expression library.
  7. The expression input method according to claim 1, wherein acquiring, in real time, the real-time chat content entered by the first user through the instant messaging application, and matching the corresponding candidate expressions from the input candidate pool according to the real-time chat content for the first user to select comprises:
    acquiring, in real time, keywords in the real-time chat content entered by the first user through the instant messaging application;
    calculating a similarity between the keywords and label text or index words of the expressions in the input candidate pool; and
    matching an expression in the input candidate pool with a highest similarity to the keywords for the first user to select; or
    acquiring, in real time, keywords in the real-time chat content entered by the first user through the instant messaging application;
    determining a context scene and/or an emotional tendency according to the keywords in the real-time chat content;
    calculating a similarity between the context scene and/or emotional tendency and the label text or index words of the expressions in the input candidate pool; and
    matching an expression in the input candidate pool with a highest similarity to the context scene for the first user to select.
  8. An expression input device, comprising:
    an obtaining module, configured to obtain a session record of an instant messaging application, the session record recording chat content between a first user and other users;
    an analysis module, configured to analyze the session record to obtain emotional tendency information of the first user;
    a first processing module, configured to place expressions in an expression library that match the emotional tendency information of the first user into an input candidate pool; and
    a second processing module, configured to acquire, in real time, real-time chat content entered by the first user through the instant messaging application, and match corresponding candidate expressions from the input candidate pool according to the real-time chat content for the first user to select.
  9. The expression input device according to claim 8, wherein the analysis module comprises:
    a generating unit, configured to generate corresponding text data according to the session record;
    an analyzing unit, configured to perform semantic analysis on the text data to obtain scene features and/or emotional features of a current session; and
    a first processing unit, configured to determine the emotional tendency information of the first user according to the scene features and/or emotional features of the current session.
  10. The expression input device according to claim 9, wherein the analyzing unit comprises:
    a first analysis subunit, configured to perform semantic analysis on conversation topic content of the text data to obtain the scene features of the current session, wherein different scene features have different scene tone values; and
    a second analysis subunit, configured to perform semantic analysis on chat content entered by the first user in the text data to obtain the emotional features of the first user, wherein different emotional features have different emotional feature values.
  11. The expression input device according to claim 9, wherein the generating unit comprises:
    a first conversion subunit, configured to, when an emoticon exists in the session record, convert the emoticon in the session record into corresponding label text; and
    a first generation subunit, configured to generate the text data according to the converted session record.
  12. The expression input device according to claim 9, wherein the generating unit further comprises:
    a second conversion subunit, configured to, when voice content exists in the session record, recognize the voice content to obtain corresponding text; and
    a second generation subunit, configured to convert the voice content in the session record into the corresponding text, and generate the text data corresponding to the session record.
  13. The expression input device according to claim 8, further comprising:
    a preprocessing module, configured to assign respective scene tone values and/or emotional feature values to the expressions in the expression library before the first processing module places the expressions in the expression library that match the emotional tendency information of the first user into the input candidate pool.
  14. The expression input device according to claim 8, wherein the second processing module comprises:
    a first acquiring unit, configured to acquire, in real time, keywords in the real-time chat content entered by the first user through the instant messaging application;
    a first calculating unit, configured to calculate a similarity between the keywords and label text or index words of the expressions in the input candidate pool;
    a first matching unit, configured to match an expression in the input candidate pool with a highest similarity to the keywords for the first user to select;
    a second acquiring unit, configured to acquire, in real time, keywords in the real-time chat content entered by the first user through the instant messaging application;
    a second processing unit, configured to determine a context scene and/or an emotional tendency according to the keywords in the real-time chat content;
    a second calculating unit, configured to calculate a similarity between the context scene and/or emotional tendency and the label text or index words of the expressions in the input candidate pool; and
    a second matching unit, configured to match an expression in the input candidate pool with a highest similarity to the context scene for the first user to select.
  15. A terminal, comprising the expression input device according to any one of claims 8 to 14.
  16. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the expression input method according to any one of claims 1 to 7.
PCT/CN2016/081324 2016-02-16 2016-05-06 Method and device for inputting expression, terminal, and computer readable storage medium WO2016197767A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610087658.XA CN107423277A (en) 2016-02-16 2016-02-16 A kind of expression input method, device and terminal
CN201610087658.X 2016-02-16

Publications (2)

Publication Number Publication Date
WO2016197767A2 true WO2016197767A2 (en) 2016-12-15
WO2016197767A3 WO2016197767A3 (en) 2017-02-02

Family

ID=57502902

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/081324 WO2016197767A2 (en) 2016-02-16 2016-05-06 Method and device for inputting expression, terminal, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN107423277A (en)
WO (1) WO2016197767A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741423A (en) * 2018-12-28 2019-05-10 北京奇艺世纪科技有限公司 Expression packet generation method and system
CN109814730A (en) * 2017-11-20 2019-05-28 北京搜狗科技发展有限公司 Input method and device, the device for input
CN110895558A (en) * 2018-08-23 2020-03-20 北京搜狗科技发展有限公司 Dialog reply method and related device
CN111190493A (en) * 2018-11-15 2020-05-22 中兴通讯股份有限公司 Expression input method, device, equipment and storage medium
CN111835621A (en) * 2020-07-10 2020-10-27 腾讯科技(深圳)有限公司 Session message processing method and device, computer equipment and readable storage medium
CN112784011A (en) * 2021-01-04 2021-05-11 南威软件股份有限公司 Emotional problem processing method, device and medium based on CNN and LSTM
CN115378890A (en) * 2022-08-12 2022-11-22 腾讯科技(武汉)有限公司 Information input method, information input device, storage medium and computer equipment

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958505B (en) * 2018-05-24 2023-05-05 维沃移动通信有限公司 Method and terminal for displaying candidate information
CN109088811A (en) * 2018-06-25 2018-12-25 维沃移动通信有限公司 A kind of method for sending information and mobile terminal
CN108809817A (en) * 2018-07-06 2018-11-13 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle device equipment, Cloud Server and the communication means of vehicle-mounted instant chat
CN109408658A (en) * 2018-08-23 2019-03-01 平安科技(深圳)有限公司 Expression picture reminding method, device, computer equipment and storage medium
CN109918675A (en) * 2019-03-15 2019-06-21 福建工程学院 A kind of the network expression picture automatic generation method and device of context-aware
CN109977409A (en) * 2019-03-28 2019-07-05 北京科技大学 A kind of intelligent expression recommended method and system based on user's chat habit
CN110162191A (en) * 2019-04-03 2019-08-23 腾讯科技(深圳)有限公司 A kind of expression recommended method, device and storage medium
CN110163121B (en) * 2019-04-30 2023-09-05 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN110232116B (en) * 2019-05-31 2021-07-27 腾讯科技(深圳)有限公司 Method and device for adding expressions in reply sentence
CN110633361B (en) * 2019-09-26 2023-05-02 联想(北京)有限公司 Input control method and device and intelligent session server
CN110968204A (en) * 2019-09-29 2020-04-07 上海萌家网络科技有限公司 Input method and system thereof
CN110674330B (en) * 2019-09-30 2024-01-09 北京达佳互联信息技术有限公司 Expression management method and device, electronic equipment and storage medium
CN110717109B (en) * 2019-09-30 2024-03-15 北京达佳互联信息技术有限公司 Method, device, electronic equipment and storage medium for recommending data
CN113051427A (en) * 2019-12-10 2021-06-29 华为技术有限公司 Expression making method and device
CN111291184B (en) * 2020-01-20 2023-07-18 百度在线网络技术(北京)有限公司 Expression recommendation method, device, equipment and storage medium
CN111491123A (en) * 2020-04-17 2020-08-04 维沃移动通信有限公司 Video background processing method and device and electronic equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251853A (en) * 2008-02-20 2008-08-27 魔极科技(北京)有限公司 System and method for digging user attribute based on user interactive records
JP5688230B2 (en) * 2010-03-24 2015-03-25 任天堂株式会社 INPUT PROGRAM, INPUT DEVICE, SYSTEM, AND INPUT METHOD
CN102262624A (en) * 2011-08-08 2011-11-30 中国科学院自动化研究所 System and method for realizing cross-language communication based on multi-mode assistance
US9195645B2 (en) * 2012-07-30 2015-11-24 Microsoft Technology Licensing, Llc Generating string predictions using contexts
CN103488796B (en) * 2013-10-12 2018-12-07 惠州Tcl移动通信有限公司 Based on context the method and mobile terminal inputted
CN104394057B (en) * 2013-11-04 2017-11-17 贵阳朗玛信息技术股份有限公司 Expression recommends method and device
CN104951428B (en) * 2014-03-26 2019-04-16 阿里巴巴集团控股有限公司 User's intension recognizing method and device
CN104063427A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
CN104076944B (en) * 2014-06-06 2017-03-01 北京搜狗科技发展有限公司 A kind of method and apparatus of chatting facial expression input
CN104133855B (en) * 2014-07-11 2017-12-19 中安消技术有限公司 A kind of method and device of input method intelligent association
CN104298429B (en) * 2014-09-25 2018-05-04 北京搜狗科技发展有限公司 A kind of information displaying method and input method system based on input
CN107104881B (en) * 2015-05-29 2021-09-14 北京搜狗科技发展有限公司 Information processing method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109814730A (en) * 2017-11-20 2019-05-28 北京搜狗科技发展有限公司 Input method and device, the device for input
CN109814730B (en) * 2017-11-20 2023-09-12 北京搜狗科技发展有限公司 Input method and device and input device
CN110895558A (en) * 2018-08-23 2020-03-20 北京搜狗科技发展有限公司 Dialog reply method and related device
CN110895558B (en) * 2018-08-23 2024-01-30 北京搜狗科技发展有限公司 Dialogue reply method and related device
CN111190493A (en) * 2018-11-15 2020-05-22 中兴通讯股份有限公司 Expression input method, device, equipment and storage medium
CN109741423A (en) * 2018-12-28 2019-05-10 北京奇艺世纪科技有限公司 Expression packet generation method and system
CN111835621A (en) * 2020-07-10 2020-10-27 腾讯科技(深圳)有限公司 Session message processing method and device, computer equipment and readable storage medium
CN112784011A (en) * 2021-01-04 2021-05-11 南威软件股份有限公司 Emotional problem processing method, device and medium based on CNN and LSTM
CN112784011B (en) * 2021-01-04 2023-06-30 南威软件股份有限公司 Emotion problem processing method, device and medium based on CNN and LSTM
CN115378890A (en) * 2022-08-12 2022-11-22 腾讯科技(武汉)有限公司 Information input method, information input device, storage medium and computer equipment
CN115378890B (en) * 2022-08-12 2023-08-18 腾讯科技(武汉)有限公司 Information input method, device, storage medium and computer equipment

Also Published As

Publication number Publication date
WO2016197767A3 (en) 2017-02-02
CN107423277A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
WO2016197767A2 (en) Method and device for inputting expression, terminal, and computer readable storage medium
JP7127106B2 (en) Question answering process, language model training method, apparatus, equipment and storage medium
US11704501B2 (en) Providing a response in a session
CN106776544B (en) Character relation recognition method and device and word segmentation method
US9740677B2 (en) Methods and systems for analyzing communication situation based on dialogue act information
CN108304439B (en) Semantic model optimization method and device, intelligent device and storage medium
US9792903B2 (en) Methods and systems for determining ranking of dialogue sticker based on situation and preference information
US10831796B2 (en) Tone optimization for digital content
US9792279B2 (en) Methods and systems for analyzing communication situation based on emotion information
US9626152B2 (en) Methods and systems for recommending responsive sticker
WO2019084810A1 (en) Information processing method and terminal, and computer storage medium
US9792909B2 (en) Methods and systems for recommending dialogue sticker based on similar situation detection
CN116127020A (en) Method for training generated large language model and searching method based on model
CN112818109B (en) Intelligent reply method, medium, device and computing equipment for mail
CN116012481B (en) Image generation processing method and device, electronic equipment and storage medium
US11943181B2 (en) Personality reply for digital content
CN110895656B (en) Text similarity calculation method and device, electronic equipment and storage medium
CN110175323A (en) Method and device for generating message abstract
CN111639162A (en) Information interaction method and device, electronic equipment and storage medium
WO2019085625A1 (en) Emotion picture recommendation method and apparatus
CN112182255A (en) Method and apparatus for storing media files and for retrieving media files
CN114064943A (en) Conference management method, conference management device, storage medium and electronic equipment
CN113111658A (en) Method, device, equipment and storage medium for checking information
WO2020199590A1 (en) Mood detection analysis method and related device
CN115378890B (en) Information input method, device, storage medium and computer equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16806665

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16806665

Country of ref document: EP

Kind code of ref document: A2