CN111291184A - Expression recommendation method, device, equipment and storage medium - Google Patents


Publication number
CN111291184A
CN111291184A (this publication) · CN202010067850.9A (application) · CN111291184B (granted publication)
Authority
CN
China
Prior art keywords
emotion
expression
user
text content
retrieval result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010067850.9A
Other languages
Chinese (zh)
Other versions
CN111291184B (en)
Inventor
向静
徐相龙
甘小楚
高菲
李国洪
李慧琴
李世操
麻雪云
李红涛
吕居美
杨佳乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010067850.9A priority Critical patent/CN111291184B/en
Publication of CN111291184A publication Critical patent/CN111291184A/en
Application granted granted Critical
Publication of CN111291184B publication Critical patent/CN111291184B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 Character input methods
    • G06F 3/0237 Character input methods using prediction or retrieval techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Abstract

The method includes: after text content input by a user is acquired, analyzing the text content to obtain the emotion expressed by it and an emotion score; when the emotion score is greater than a preset score threshold, acquiring at least one expression corresponding to the emotion; and recommending the at least one expression to the user. The emotion the user expresses is judged from an emotional understanding of the text, and only after the emotion reaches a certain intensity is expression association performed on the basis of that emotion. This reinforces the emotion of the user's input instead of directly recommending the text content back, enriches the expression content, and improves the recommendation effect.

Description

Expression recommendation method, device, equipment and storage medium
Technical Field
The present application relates to the field of computers, and in particular, to a method, an apparatus, a device, and a storage medium for recommending an expression in the field of data recommendation.
Background
With the development of Internet technology, in current text input scenarios (e.g., feed comments and chat conversations) more and more users are willing to use expressions (also called emoticons or memes) to enrich what they communicate. In particular, with the emergence of social applications (APPs), users who communicate with one another through a social APP also exchange expressions, which creates a demand for recommending expressions according to the content a user inputs.
If the system recommends a suitable expression while the user is typing, the effort the user spends searching for a picture can be reduced.
However, existing approaches perform keyword association on the characters the user inputs, so the recommended expression picture merely represents content related to those characters. As a result, the user's desire to use the recommended expressions is low and the expression recommendation effect is poor.
Disclosure of Invention
The embodiments of the present application provide an expression recommendation method, apparatus, device, and storage medium, to solve the prior-art problem that keyword association on the characters input by a user yields expression pictures representing content related to those characters, so that the user's desire to use the expressions is low and the expression recommendation effect is poor.
In a first aspect, the present application provides a method for recommending an expression, including:
acquiring text content input by a user;
analyzing and processing the text content to obtain the emotion and emotion score expressed by the text content;
when the emotion score is larger than a preset score threshold value, acquiring at least one expression corresponding to the emotion;
recommending the at least one expression to the user.
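As a rough illustration only (not part of the claims), the four steps above can be sketched as a minimal pipeline. `classify_emotion`, `retrieve_expressions`, and the threshold value are hypothetical stand-ins for the trained emotion classification model, expression database, and preset score threshold described in the embodiments below:

```python
# Minimal sketch of the four claimed steps. classify_emotion and
# retrieve_expressions are hypothetical stand-ins for the trained emotion
# classification model and the expression database described in the embodiments.

SCORE_THRESHOLD = 0.7  # illustrative value for the "preset score threshold"

def classify_emotion(text: str) -> tuple[str, float]:
    """Toy stand-in: return (emotion, emotion score) for the text."""
    if "really nice" in text or "great" in text:
        return "happy", 0.9
    return "neutral", 0.2

def retrieve_expressions(emotion: str) -> list[str]:
    """Toy stand-in for retrieval from an expression database."""
    database = {"happy": ["smile.png", "thumbs_up.png"], "sad": ["cry.png"]}
    return database.get(emotion, [])

def recommend(text: str) -> list[str]:
    emotion, score = classify_emotion(text)   # step 2: analyze the text
    if score <= SCORE_THRESHOLD:              # step 3: gate on the emotion score
        return []                             # emotion too weak: recommend nothing
    return retrieve_expressions(emotion)      # steps 3-4: fetch and recommend

print(recommend("the weather is really nice today~"))
print(recommend("see attached file"))
```

Note how the gate in `recommend` means weak-emotion input produces no recommendation at all, which is the behavior the disclosure contrasts with always-on keyword association.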
In a possible embodiment, the obtaining at least one expression corresponding to the emotion includes:
generalizing the emotion to obtain at least one emotion word representing the emotion;
and performing expression retrieval by adopting the at least one emotion word to obtain the at least one expression.
In one possible embodiment, the at least one emotion word comprises: synonyms representing the emotion, as well as hypernyms and hyponyms of the emotion word.
In a possible implementation manner, the performing expression retrieval by using the at least one emotion word to obtain the at least one expression includes:
retrieving a retrieval result from an expression database according to the at least one emotion word, wherein the retrieval result comprises a plurality of expressions;
processing and scoring the expressions in the retrieval result by adopting a semantic matching model, and acquiring the at least one expression with the score larger than a preset value in the retrieval result; the semantic matching model is obtained based on deep learning training and can be used for scoring the similarity degree between the semantics expressed in the expression and the at least one emotion word.
In a possible implementation manner, the performing expression retrieval by using the at least one emotion word to obtain the at least one expression includes:
according to the at least one emotion word, retrieving a first retrieval result from an expression database, wherein the first retrieval result comprises a plurality of expressions;
processing and scoring the expressions in the first retrieval result by adopting a semantic matching model, and acquiring at least one first expression with the score larger than a first preset value in the first retrieval result; the semantic matching model is obtained based on deep learning training and can be used for scoring the similarity degree between the semantics expressed in the expression and the at least one emotion word;
searching from an expression database by adopting a picture similarity technology according to the first expression to obtain a second search result, wherein the second search result comprises a plurality of expressions;
analyzing the expressions in the second retrieval result by adopting an emotion understanding technology to obtain the emotion represented by each expression, and obtaining at least one second expression of which the similarity between the emotion represented in the second retrieval result and the emotion represented by the text content is greater than a second preset value;
and obtaining the at least one expression recommended to the user according to the at least one first expression and the at least one second expression.
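For illustration only, the two-stage recall above can be sketched with toy data. Every dictionary and threshold here is invented; they stand in for the semantic matching model, picture-similarity retrieval, and emotion understanding technology named in the text:

```python
# Toy sketch of the two-stage recall. All dictionaries and thresholds are
# invented placeholders for the semantic matching model, picture-similarity
# retrieval, and emotion understanding technology.

# Stage 1: keep expressions whose semantic-match score exceeds the first preset value.
semantic_scores = {"smile.png": 0.88, "grin.png": 0.80, "meh.png": 0.40}
FIRST_PRESET = 0.6
first_expressions = [e for e, s in semantic_scores.items() if s > FIRST_PRESET]

# Stage 2: expand each first expression to visually similar ones, then keep a
# candidate only if its understood emotion is close enough to the text's emotion.
visually_similar = {"smile.png": ["beam.png"], "grin.png": ["smirk.png"]}
emotion_similarity = {"beam.png": 0.9, "smirk.png": 0.3}  # similarity to the text's emotion
SECOND_PRESET = 0.5
second_expressions = [
    cand
    for first in first_expressions
    for cand in visually_similar.get(first, [])
    if emotion_similarity.get(cand, 0.0) > SECOND_PRESET
]

# Final candidate set: union of both stages, first-stage results first.
recommended = first_expressions + [e for e in second_expressions if e not in first_expressions]
print(recommended)
```

The second stage widens recall beyond expressions that were explicitly tagged with the emotion word, while the emotion-similarity filter keeps the expanded set on-emotion.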
In a possible implementation manner, the analyzing the text content to obtain an emotion expressed by the text content and an emotion score includes:
inputting the text content into an emotion classification model for emotion classification processing to obtain the emotion expressed by the text content and the emotion score;
the emotion classification model is a model which is obtained based on deep learning training and can be used for carrying out emotion classification on characters to obtain emotions and corresponding emotion scores.
In a possible implementation manner, the acquiring text content input by a user includes:
receiving the text content input by the user and sent by the user terminal through the application client;
correspondingly, the recommending the at least one expression to the user includes:
and sending the at least one expression to the user terminal through the application client for displaying.
In a second aspect, the present application provides an expression recommendation apparatus, including:
the acquisition module is used for acquiring text content input by a user;
the processing module is used for analyzing and processing the text content to obtain the emotion and emotion score expressed by the text content;
the processing module is further used for acquiring at least one expression corresponding to the emotion when the emotion score is larger than a preset score threshold;
and the recommending module is used for recommending the at least one expression to the user.
In a possible implementation, the processing module is specifically configured to:
generalizing the emotion to obtain at least one emotion word representing the emotion;
and performing expression retrieval by adopting the at least one emotion word to obtain the at least one expression.
Optionally, the at least one emotion word comprises: synonyms representing the emotion, as well as hypernyms and hyponyms of the emotion word.
In a possible embodiment, the processing module is specifically configured to:
retrieving a retrieval result from an expression database according to the at least one emotion word, wherein the retrieval result comprises a plurality of expressions;
processing and scoring the expressions in the retrieval result by adopting a semantic matching model, and acquiring the at least one expression with the score larger than a preset value in the retrieval result; the semantic matching model is obtained based on deep learning training and can be used for scoring the similarity degree between the semantics expressed in the expression and the at least one emotion word.
In a possible embodiment, the processing module is specifically configured to:
according to the at least one emotion word, retrieving a first retrieval result from an expression database, wherein the first retrieval result comprises a plurality of expressions;
processing and scoring the expressions in the first retrieval result by adopting a semantic matching model, and acquiring at least one first expression with the score larger than a first preset value in the first retrieval result; the semantic matching model is obtained based on deep learning training and can be used for scoring the similarity degree between the semantics expressed in the expression and the at least one emotion word;
searching from an expression database by adopting a picture similarity technology according to the first expression to obtain a second search result, wherein the second search result comprises a plurality of expressions;
analyzing the expressions in the second retrieval result by adopting an emotion understanding technology to obtain the emotion represented by each expression, and obtaining at least one second expression of which the similarity between the emotion represented in the second retrieval result and the emotion represented by the text content is greater than a second preset value;
and obtaining the at least one expression recommended to the user according to the at least one first expression and the at least one second expression.
In a possible implementation, the processing module is further specifically configured to:
inputting the text content into an emotion classification model for emotion classification processing to obtain the emotion expressed by the text content and the emotion score;
the emotion classification model is a model which is obtained based on deep learning training and can be used for carrying out emotion classification on characters to obtain emotions and corresponding emotion scores.
In a possible implementation manner, the obtaining module is specifically configured to:
receiving the text content input by the user and sent by the user terminal through the application client;
correspondingly, the recommendation module is specifically configured to:
and sending the at least one expression to the user terminal through the application client for displaying.
In a third aspect, the present application provides an electronic device, comprising:
at least one processor, a memory, and a communication interface for communicating with a user terminal;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of recommending expressions provided by any one of the first aspects.
In a fourth aspect, the present application also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for recommending an expression provided in any one of the first aspect.
In a fifth aspect, the present application further provides a data processing method, including:
acquiring the emotion of a user expressed by text content based on the text content input by the user;
and recommending at least one expression to the user according to the emotion of the user.
One of the above embodiments has the following advantage or benefit: after the text content input by the user is acquired, the text content is analyzed to obtain the emotion expressed by it and an emotion score; when the emotion score is greater than a preset score threshold, at least one expression corresponding to the emotion is acquired and recommended to the user. The emotion the user expresses is judged from an emotional understanding of the text, and only after the emotion reaches a certain intensity is expression association performed on the basis of that emotion. This reinforces the emotion of the user's input instead of directly recommending the text content back, enriches the expression content, and improves the recommendation effect.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is an application scenario of a method for recommending expressions provided in the present application;
fig. 2 is another application scenario of the expression recommendation method provided in the present application;
fig. 3 is a flowchart of a first embodiment of an expression recommendation method provided in the present application;
fig. 4 is a flowchart of a second embodiment of an expression recommendation method provided in the present application;
fig. 5 is a schematic structural diagram of a first expression recommendation device according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing an expression recommendation method according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The expression recommendation method currently used in industry performs simple keyword association on the characters the user inputs, without understanding them; the recommendation effect is poor and users have little desire to use the results, so there is considerable room to optimize expression recommendation. Specifically, existing solutions have the following problems:
1) Some user needs are not met. In a scenario where text is matched with a picture, users do not merely want their input converted into a picture unchanged. Research and analysis show that, in comment input scenarios, 60% of users want to use an expression to reinforce the emotion of the text they typed (for example, after entering "the weather is really nice today~" the user attaches a "happy" expression picture to set off their mood). Keyword-based expression association on the input text can hardly achieve this.
2) Information is repeated. The characters the user inputs are the same as the characters on the expression the system recommends: if the input is "the weather is really nice today~", the text on the expression is likely also "the weather is really nice today~". When picture and text are sent together, the information conveyed by the expression is highly redundant and does not enrich what the user expresses.
3) Lack of surprise. Because recommendation is based directly on the text the user inputs, the user has a stable expectation of the result; there are no unexpected recommendations, which makes the product mediocre.
4) The recommendation frequency is high and disturbs the user. Recommendation is not conditionally triggered: any input content produces a recommended expression, so expressions are displayed frequently, disturbing the user and degrading the experience.
In summary, in existing schemes that recommend expressions by association on the characters the user inputs, the recommended expressions are likewise content related to those characters, so the user's desire to use them is low and the expression recommendation effect is poor.
In view of these problems in the prior art, the present application provides an expression recommendation method. Since current approaches only recommend according to the characters themselves, without analyzing the meaning the text specifically expresses or the user's emotion, this scheme proposes a new idea: the text content input by the user is analyzed to obtain the user's emotion, and expression recommendation is performed according to that emotion once it reaches a certain intensity. Specifically, the emotion the user expresses is obtained from the text content, and one or more expressions are then recommended to the user according to that emotion. The retrieval terms used in the recommendation process are synonyms, near-synonyms, and similar words representing the emotion, obtained by generalization. This avoids the problem that expressions retrieved directly with keywords from the text content merely repeat the characters, and improves the expression recommendation effect.
Fig. 1 shows an application scenario of the expression recommendation method provided in the present application. As shown in fig. 1, the method can be implemented on a user terminal: an expression database is stored in advance on the user terminal device, which has data processing capability; after the user inputs text content, the user terminal performs the analysis itself and recommends expressions according to the scheme of the present application.
Fig. 2 shows another application scenario of the expression recommendation method provided in the present application. As shown in fig. 2, the method is applied between a user terminal and a server: the user inputs text content through the user terminal, and the server receives the text content, analyzes it, recalls relevant expressions, and recommends them to the user through the user terminal.
In both scenarios, the user terminal may be an intelligent terminal such as a mobile phone, a notebook computer, or a personal computer (PC) on which an application (APP) for inputting text content can be installed. The server is a device that can perform data analysis and either stores an expression database itself or retrieves from other storage devices, for example a cloud server, an input method server, or an application's server; the present application is not limited in this respect.
The following describes a method for recommending an expression provided by the present application with a specific embodiment.
Fig. 3 is a flowchart of a first embodiment of the expression recommendation method provided in the present application. As shown in fig. 3, the method specifically includes the following steps:
S101: Acquire the text content input by the user.
Expression recommendation is generally applied while a user communicates with another user through text. The entity executing expression recommendation first needs to acquire the text content the user inputs. The user may input the text content by directly editing characters, or by voice, in which case the system converts the speech to text after acquiring it; the scheme is not limited in this respect. In a specific implementation of the scheme, the user may type in a pre-installed APP or a self-installed social APP, using the system input method or any other input method that can be downloaded and installed; the text in the input content is ultimately obtained. The present application does not limit this.
In the first scenario (fig. 1), the text content input by the user can be acquired directly; in the second scenario (fig. 2), the server acquires the text content through an application client such as an application program or an input method, i.e., the user terminal sends the text content to the server through an interface provided by the application client.
S102: and analyzing and processing the text content to obtain the emotion and emotion score expressed by the text content.
In this step, the text content needs to be analyzed, and the emotion expressed by the text content input by the user and the degree of the emotion are analyzed, for example: "today weather is really good to" means that the emotion of the user is more positive and happy, and "today weather still to" means that the emotion of the user is general and the state is more common.
The condition expressed in the text content and the emotion score can be obtained through a semantic understanding mode, and the text content can be analyzed through a special model for emotion classification by a user.
In a specific implementation of the scheme, the obtained text content may be input into an emotion classification model for emotion classification processing, and the model may directly input an emotion expressed in the text content and a score of the emotion.
The emotion classification model is a model which can perform emotion classification processing on characters to obtain emotions and corresponding emotion scores, and can be obtained based on deep learning training in general. Specifically, a server or equipment for model training constructs a massive query-inventory _ tag corpus by grabbing and cleaning; the corpus is combined with deep learning models such as textcnn, bert and the like to generate an emotion classification model with a plurality of classifications, for example, an emotion classification model capable of 25 emotion classifications is obtained, and the emotions include: happy, angry, sad … …, etc. In the scheme application, only the text content (namely query) input by the user needs to be fed into the emotion classification model, and the emotion classification and emotion score of the text content can be output by the model.
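As a greatly simplified illustration of the model's interface (text in, emotion class and score out), a keyword lexicon can stand in for the trained classifier; the classes, words, and weights below are all invented:

```python
# Toy keyword-lexicon stand-in for the trained emotion classification model.
# A real system would use a deep model (e.g., TextCNN or BERT) trained on a
# large query-to-emotion corpus; the classes, words, and weights here are invented.
LEXICON = {
    "happy": {"nice": 0.8, "great": 0.9, "yay": 0.95},
    "sad":   {"awful": 0.85, "heartbroken": 0.9},
    "angry": {"furious": 0.95, "annoying": 0.7},
}

def classify(query: str) -> tuple[str, float]:
    """Return (emotion class, emotion score); the score reflects intensity."""
    best = ("neutral", 0.0)
    for emotion, words in LEXICON.items():
        for word, weight in words.items():
            if word in query and weight > best[1]:
                best = (emotion, weight)
    return best

print(classify("the weather is really nice today~"))
```

The interface is the important part: downstream steps only need the (class, score) pair, so a trained model can replace the lexicon without changing the rest of the pipeline.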
S103: and when the emotion score is larger than a preset score threshold value, acquiring at least one expression corresponding to the emotion.
In this step, whether expression recommendation is performed or not is determined according to the degree of emotion, and when the degree of emotion expressed by the characters input by the user is smaller than a certain value, an expression recommendation process is not performed, that is, after the emotion of the user is acquired, expression recommendation is performed only when the emotion meets a certain condition, and whether the user needs to use an expression is further determined.
In a specific implementation manner, when the emotion score is greater than the preset score threshold, the emotion of the user is considered to have reached a certain degree, and at this time, the emotion recommendation process may be further performed by means of expression. The scheme is different from the prior art in that expression retrieval is not performed according to text content, but at least one expression capable of expressing the emotion is acquired according to the obtained emotion (namely emotion classification) for recommendation.
Specifically, when the emotion is happy and happy, all expressions indicating the happy emotion may be retrieved from the expression database, and then sorted according to a certain rule, and one or more of them ranked in the top may be selected as the expressions to be recommended to the user. The ranking rule may be that the similarity between the expression and the emotion is from large to small, the frequency of the expression being used is from high to low, or the creation time of the expression is from near to far, and the like.
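The sorting rules just described can be composed into a single multi-level sort key; the expression names and metadata values below are invented for illustration:

```python
from datetime import date

# Candidate expressions with the metadata used by the sorting rules above;
# names and values are invented for illustration.
expressions = [
    {"name": "grin.png",  "similarity": 0.92, "uses": 120, "created": date(2019, 6, 1)},
    {"name": "smile.png", "similarity": 0.92, "uses": 300, "created": date(2019, 1, 1)},
    {"name": "laugh.png", "similarity": 0.75, "uses": 500, "created": date(2019, 9, 1)},
]

# Sort by similarity (high to low), breaking ties by usage frequency (high to
# low), then by creation time (recent to old).
ranked = sorted(
    expressions,
    key=lambda e: (-e["similarity"], -e["uses"], -e["created"].toordinal()),
)
top_k = [e["name"] for e in ranked[:2]]
print(top_k)
```

Negating each component keeps Python's ascending sort while giving every rule a descending direction; the tuple order encodes the priority of the rules.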
S104: at least one expression is recommended to the user.
In this step, the obtained at least one expression is sent to the user terminal or directly displayed on the user terminal to recommend the expression.
In the expression recommendation method provided by this embodiment, after text content input by a user is acquired, the text content is analyzed to obtain emotion and emotion score expressed by the text content, when the emotion score is greater than a preset score threshold, at least one expression corresponding to the emotion is acquired, and then the at least one expression is recommended to the user. The emotion expressed by the user can be judged based on the emotion understanding in the characters, after the emotion reaches a certain degree, expression association is carried out based on the emotion, the emotion expression input by the user is enhanced, direct text content recommendation is avoided, expression content is enriched, and expression recommendation effect is improved.
Fig. 4 is a flowchart of a second embodiment of an expression recommendation method provided in the present application, and as shown in fig. 4, the expression recommendation method provided in this embodiment specifically includes the following steps:
s201: and receiving text content input by a user and sent by the user terminal through the application client.
In the scheme, the execution main body is a server, and the server acquires the text content input by the user through an application client installed on the user terminal. On the user terminal side, after acquiring text content input by a user, the user terminal sends the text content to a server through an application client, where the application client may be a client of a social software application, a client of an input method application, or the like.
S202: and inputting the text content into an emotion classification model for emotion classification processing to obtain the emotion and emotion score expressed by the text content.
In the step, the server acquires an emotion classification model in advance, inputs the received text content into the emotion classification model, and outputs corresponding emotion classification and emotion score.
For example: the user inputs the text of 'the weather is really good to' today, namely the text content is 'the weather is really good to' today, and the emotion is 'forward' > > > 'happy'.
Further, in the implementation of the scheme, the degree of emotion needs to be judged, that is, the judgment step can be triggered, and only the emotional input content is subjected to associative triggering.
Specifically, the emotion classification model not only outputs emotion classification, but also outputs emotion corresponding scores, namely emotion scores, which are representations of emotion intensity. The threshold is determined through a number of experiments and emotions exceeding the threshold are considered as emotion seed words that can be triggered by association. For example: and (3) comparing the emotion score of the character ' today ' is really good-good ' input by the user with the set emotion score threshold value, judging that the system is suitable for expression recommendation by the system, and performing a subsequent processing process, otherwise, not performing the expression recommendation.
S203: and generalizing the emotion to obtain at least one emotion word representing the emotion.
In this step, the generalized at least one emotion word includes: synonyms representing the mood, and hypernyms and hyponyms.
In order to meet the demand for recall diversification, synonyms semantically similar to the emotion, together with hypernyms and hyponyms closely related to the emotion word, are generalized from each emotion (also called an emotion word or emotion seed word) through machine learning techniques such as word2vec and CRF, and a new search query, i.e., the at least one emotion word, is constructed.
For example: the user inputs the text 'the weather is really good today', the emotion word is 'happy', and emotion words with similar semantics, such as 'nice', 'joyful', and 'excited', are generalized from 'happy'.
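The generalization step can be sketched as a nearest-neighbor search in an embedding space. The toy 3-dimensional vectors and vocabulary below are assumptions for illustration only; a real system would use word2vec embeddings trained on a large corpus (with a CRF or similar model supporting word segmentation and labeling).

```python
import math

# Toy embedding table standing in for trained word2vec vectors.
EMBEDDINGS = {
    "happy":   [0.90, 0.10, 0.00],
    "joyful":  [0.85, 0.15, 0.05],
    "excited": [0.80, 0.20, 0.10],
    "pleased": [0.88, 0.12, 0.02],
    "sad":     [-0.90, 0.10, 0.00],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def generalize(seed: str, topn: int = 3):
    """Expand an emotion seed word into its topn most similar words."""
    seed_vec = EMBEDDINGS[seed]
    scored = [(w, cosine(seed_vec, EMBEDDINGS[w]))
              for w in EMBEDDINGS if w != seed]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [w for w, _ in scored[:topn]]

print(generalize("happy"))  # ['pleased', 'joyful', 'excited']
```

The returned words form the new search query used for expression retrieval in step S204.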
S204: perform expression retrieval using the at least one emotion word to obtain at least one expression.
In this step, expression retrieval is performed on an expression database or the network according to the one or more emotion words obtained by generalization, a plurality of expressions matching the emotion words are obtained, and then at least one expression to be recommended to the user is selected.
Specifically, this step includes at least the following implementations:
In a first implementation, all expressions that represent happiness can be retrieved from the expression database, then sorted according to a certain rule, and one or more of the top-ranked expressions are selected as the expressions to be recommended to the user. The ranking rule may be similarity between the expression and the emotion from large to small, frequency of use of the expression from high to low, creation time of the expression from recent to old, or the like.
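The ranking rules above can be sketched as a multi-key sort. The candidate records and field names below are illustrative assumptions, not the schema of any actual expression database.

```python
# Hypothetical candidate expressions retrieved for the emotion "happy".
candidates = [
    {"id": "e1", "similarity": 0.91, "use_count": 120, "created": 2019},
    {"id": "e2", "similarity": 0.97, "use_count": 40,  "created": 2020},
    {"id": "e3", "similarity": 0.97, "use_count": 300, "created": 2018},
]

def top_expressions(items, k=2):
    """Rank by similarity desc, then usage frequency desc, then newest first."""
    ranked = sorted(
        items,
        key=lambda e: (e["similarity"], e["use_count"], e["created"]),
        reverse=True,
    )
    return [e["id"] for e in ranked[:k]]

print(top_expressions(candidates))  # ['e3', 'e2']
```

Any of the three criteria may also be used alone, per the paragraph above.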
In a second implementation, a retrieval result is retrieved from the expression database according to the at least one emotion word, the retrieval result including a plurality of expressions; a semantic matching model is then used to process and score the expressions in the retrieval result, and the at least one expression whose score is greater than a preset value is obtained from the retrieval result. The semantic matching model is a model capable of scoring the degree of similarity between the semantics expressed by an expression and the at least one emotion word, and can generally be obtained based on deep learning training.
In this implementation, a new search query composed of the generalized synonyms and hypernyms/hyponyms (i.e., the at least one emotion word) initiates the retrieval, the retrieval result is recalled through a classical information retrieval pipeline (inverted index + BM25), the recalled result is then ranked using a deep-learning-based semantic matching model, and the one or more expressions with the highest semantic similarity are selected as the expressions recommended to the user.
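The BM25 recall stage of the pipeline above can be sketched as follows. The tag data are illustrative assumptions, and the deep-learning rerank stage is omitted; this is a non-limiting sketch of the classical recall step only.

```python
import math
from collections import Counter

# Hypothetical expression database: expression id -> descriptive tags.
DOCS = {
    "grin":  ["happy", "joyful", "smile"],
    "party": ["excited", "happy", "celebrate"],
    "cry":   ["sad", "tears"],
}
K1, B = 1.5, 0.75  # standard BM25 parameters

def bm25_score(query_terms, doc_terms, avgdl, n_docs, df):
    """Okapi BM25 score of one document against the query."""
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue
        idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
        denom = tf[t] + K1 * (1 - B + B * len(doc_terms) / avgdl)
        score += idf * tf[t] * (K1 + 1) / denom
    return score

def recall(query_terms, topn=2):
    """Recall the topn expressions matching the generalized emotion words."""
    df = Counter(t for terms in DOCS.values() for t in set(terms))
    avgdl = sum(len(t) for t in DOCS.values()) / len(DOCS)
    scored = [(doc, bm25_score(query_terms, terms, avgdl, len(DOCS), df))
              for doc, terms in DOCS.items()]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [doc for doc, s in scored[:topn] if s > 0]

print(recall(["happy", "joyful", "excited"]))  # ['grin', 'party']
```

In the scheme described here, the recalled set would then be reranked by the semantic matching model before the final selection.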
In a third implementation, a first retrieval result is retrieved from the expression database according to the at least one emotion word, the first retrieval result including a plurality of expressions; a semantic matching model is used to process and score the expressions in the first retrieval result, and at least one first expression whose score is greater than a first preset value is obtained from the first retrieval result. The semantic matching model is obtained based on deep learning training and is capable of scoring the degree of similarity between the semantics expressed by an expression and the at least one emotion word.
Then, according to the first expression, a picture similarity technique is used to retrieve a second retrieval result from the expression database, the second retrieval result including a plurality of expressions; an emotion understanding technique is used to analyze the expressions in the second retrieval result to obtain the emotion represented by each expression, and at least one second expression is obtained whose represented emotion has a similarity to the emotion expressed by the text content greater than a second preset value.
Finally, at least one expression to be recommended to the user is obtained according to the at least one first expression and the at least one second expression.
In this implementation, a new search query composed of the generalized synonyms and hypernyms/hyponyms initiates the retrieval, the retrieval result is recalled through a classical information retrieval pipeline (inverted index + BM25), the recalled result is then score-ranked using a deep-learning-based semantic matching model, and the one or more expressions with the highest semantic similarity are selected as the first part of the returned result, i.e., the at least one first expression.
Then, based on the first expressions, a secondary retrieval is initiated in the expression database or on the network using a picture similarity technique to retrieve pictures similar to the first expressions, i.e., other candidate expressions. An emotion understanding technique is used to recognize the emotion of the text surrounding each similar picture and to judge how well the emotion of the picture coincides with the emotion of the text content input by the user; the one or more expressions with the highest emotional coincidence are taken as the second part of the returned result, i.e., the at least one second expression. Finally, the combined result of the first part and the second part is returned to the user; that is, the at least one first expression and the at least one second expression are pooled to obtain the at least one expression finally recommended to the user.
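The secondary retrieval can be sketched as follows, assuming each gallery picture already carries an image feature vector and a recognized emotion label. The vectors, labels, and the 0.9 similarity threshold are illustrative assumptions standing in for the picture similarity and emotion understanding techniques named above.

```python
import math

# Hypothetical gallery: expression id -> (image feature vector, recognized emotion).
GALLERY = {
    "grin2": ([0.90, 0.10], "happy"),
    "smirk": ([0.85, 0.20], "happy"),
    "frown": ([0.10, 0.95], "sad"),
}

def cos(u, v):
    """Cosine similarity between two image feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def second_stage(first_vec, text_emotion, sim_threshold=0.9):
    """Keep pictures visually close to a first expression whose recognized
    emotion matches the emotion expressed by the user's text."""
    out = []
    for pid, (vec, emotion) in GALLERY.items():
        if cos(first_vec, vec) > sim_threshold and emotion == text_emotion:
            out.append(pid)
    return out

print(second_stage([0.92, 0.12], "happy"))  # ['grin2', 'smirk']
```

Pictures that are visually similar but emotionally mismatched are filtered out, which is the coincidence-degree judgment described above.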
As for how the final recommended expressions are selected from the first expressions and the second expressions, a part of each of the two sets may be selected, or one or more expressions with the highest emotion similarity may be reselected from the whole pooled set; this scheme is not limited in this respect.
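Pooling the two result sets can be sketched as a de-duplicating merge followed by a top-k selection on emotion similarity. The scores and the choice of k below are illustrative assumptions; per the paragraph above, other selection policies are equally possible.

```python
def merge_results(first, second, k=3):
    """Merge two lists of (expression_id, emotion_similarity) pairs,
    keeping the highest score per expression, and return the top k ids."""
    best = {}
    for eid, score in first + second:
        if eid not in best or score > best[eid]:
            best[eid] = score
    ranked = sorted(best.items(), key=lambda x: x[1], reverse=True)
    return [eid for eid, _ in ranked[:k]]

# Hypothetical first and second expression sets with similarity scores.
first = [("grin", 0.95), ("party", 0.90)]
second = [("grin2", 0.93), ("grin", 0.88)]
print(merge_results(first, second))  # ['grin', 'grin2', 'party']
```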
For example: emoticons related to the emotion words 'happy', 'nice', 'joyful', and 'excited' are recalled, then sorted as a whole and recommended to the user.
S205: send the at least one expression to the user terminal through the application client for display.
In this embodiment, since the user uses a user terminal, after the server obtains the expressions to be recommended, they need to be returned to the user terminal for display through the client of an application program, where the application program may be a social software application, an input method application, or the like.
According to the expression recommendation method provided by the embodiments of the present application, the emotion of the user is taken into account when recommending expressions: the emotion can be accurately judged based on an emotional understanding of the input text content, and expressions are then matched to that emotion, so that the emotional expressiveness of the user's input is enhanced.
The recommendation method can also enrich information expression. Specifically, because the association is based on emotion, the recommended expression content is related to the original input text only in emotion, the difference between the recommended content and the original text content is larger, and the situation where the expression merely repeats the text does not occur, thereby enriching the information content when the user sends pictures and text. In addition, a trigger control link is added in this scheme: by adjusting the emotion threshold used for triggering, accurate recommendation of expression pictures can be achieved, excessive disturbance to the user is avoided, and the recommendation frequency is controlled.
In addition, the expression recommendation method provided by the present application offers an element of surprise: the user no longer has a fixed expectation of the recommendation result, and through accurate judgment of the user's emotion, unexpected expression results are more easily recommended, increasing the delight of the product during input.
Fig. 5 is a schematic structural diagram of a first expression recommendation device provided in the present application. As shown in Fig. 5, the expression recommendation device may be integrated in or implemented by an electronic device, which may be a server, a cloud server, a computer, a mobile phone, or the like, without limitation. The expression recommendation device 10 includes:
the acquiring module 11 is used for acquiring text content input by a user;
the processing module 12 is configured to analyze and process the text content to obtain an emotion and an emotion score expressed by the text content;
the processing module 12 is further configured to obtain at least one expression corresponding to the emotion when the emotion score is greater than a preset score threshold;
and the recommending module 13 is configured to recommend the at least one expression to the user.
In a specific implementation manner, the processing module 12 is specifically configured to:
generalizing the emotion to obtain at least one emotion word representing the emotion;
and performing expression retrieval by adopting the at least one emotion word to obtain the at least one expression.
Optionally, the at least one emotion word comprises: synonyms representing the emotion, as well as hypernyms and hyponyms thereof.
In a specific implementation, the processing module 12 is specifically configured to:
retrieving a retrieval result from an expression database according to the at least one emotion word, wherein the retrieval result comprises a plurality of expressions;
process and score the expressions in the retrieval result using a semantic matching model, and acquire the at least one expression whose score is greater than a preset value in the retrieval result; wherein the semantic matching model is a model capable of scoring the degree of similarity between the semantics expressed in the expression and the at least one emotion word.
Optionally, the processing module 12 is specifically configured to:
according to the at least one emotion word, retrieving a first retrieval result from an expression database, wherein the first retrieval result comprises a plurality of expressions;
process and score the expressions in the first retrieval result using a semantic matching model, and acquire at least one first expression whose score is greater than a first preset value in the first retrieval result; wherein the semantic matching model is a model capable of scoring the degree of similarity between the semantics expressed in an expression and the at least one emotion word;
searching from an expression database by adopting a picture similarity technology according to the first expression to obtain a second search result, wherein the second search result comprises a plurality of expressions;
analyzing the expressions in the second retrieval result by adopting an emotion understanding technology to obtain the emotion represented by each expression, and obtaining at least one second expression of which the similarity between the emotion represented in the second retrieval result and the emotion represented by the text content is greater than a second preset value;
and obtaining the at least one expression recommended to the user according to the at least one first expression and the at least one second expression.
Optionally, the processing module 12 is further specifically configured to:
inputting the text content into an emotion classification model for emotion classification processing to obtain the emotion expressed by the text content and the emotion score;
the emotion classification model is a model which is obtained based on deep learning training and can be used for carrying out emotion classification on characters to obtain emotions and corresponding emotion scores.
Optionally, the obtaining module 11 is specifically configured to:
receiving the text content input by the user and sent by the user terminal through the application client;
correspondingly, the recommending module 13 is specifically configured to:
and sending the at least one expression to the user terminal through the application client for displaying.
The expression recommendation devices provided by the above embodiments are used to implement the technical scheme of any one of the foregoing method embodiments; the implementation principle and technical effect are similar and are not described here again.
It should be noted that the division of the modules of the apparatus provided in the above embodiments is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the processing module may be a processing element separately set up, or may be implemented by being integrated in a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and a function of the processing module may be called and executed by a processing element of the apparatus. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
Further, the application provides an electronic device, which can be either a user device or a server.
Fig. 6 is a block diagram of an electronic device for implementing an expression recommendation method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 101, a memory 102, and interfaces for connecting the various components, including high-speed and low-speed interfaces, as well as a communication interface 103 for communicating with other electronic devices. The various components are interconnected using different buses 104 and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 101 is taken as an example.
Memory 102 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the recommendation method for the expression corresponding to any execution subject provided by the application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the methods provided herein.
The memory 102 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data such as an emoticon database. Further, the memory 102 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 102 may optionally include memory located remotely from the processor 101, which may be connected to the data processing electronic device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Further, the electronic device may further include: an input device and an output device. The processor 101, the memory 102, the input device, and the output device may be connected by a bus or other means, and the bus connection is exemplified in fig. 6.
The input device may receive input numeric or character information and generate key signal inputs related to user settings and function control of the data processing electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input device. The output devices may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Further, the present application also provides a non-transitory computer readable storage medium storing computer instructions, which are executed by a processor to implement the technical solution provided by any of the foregoing method embodiments.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (17)

1. A method for recommending an expression, comprising:
acquiring text content input by a user;
analyzing and processing the text content to obtain the emotion and emotion score expressed by the text content;
when the emotion score is larger than a preset score threshold value, acquiring at least one expression corresponding to the emotion;
recommending the at least one expression to the user.
2. The method of claim 1, wherein the obtaining at least one expression corresponding to the emotion comprises:
generalizing the emotion to obtain at least one emotion word representing the emotion;
and performing expression retrieval by adopting the at least one emotion word to obtain the at least one expression.
3. The method of claim 2, wherein the at least one emotion word comprises: synonyms representing the emotion, as well as hypernyms and hyponyms thereof.
4. The method of claim 2, wherein the performing expression retrieval using the at least one emotion word to obtain the at least one expression includes:
retrieving a retrieval result from an expression database according to the at least one emotion word, wherein the retrieval result comprises a plurality of expressions;
processing and scoring the expressions in the retrieval result using a semantic matching model, and acquiring the at least one expression whose score is greater than a preset value in the retrieval result; wherein the semantic matching model is a model capable of scoring a degree of similarity between semantics expressed in the expression and the at least one emotion word.
5. The method of claim 2, wherein the performing expression retrieval using the at least one emotion word to obtain the at least one expression includes:
according to the at least one emotion word, retrieving a first retrieval result from an expression database, wherein the first retrieval result comprises a plurality of expressions;
processing and scoring the expressions in the first retrieval result using a semantic matching model, and acquiring at least one first expression whose score is greater than a first preset value in the first retrieval result; wherein the semantic matching model is a model capable of scoring a degree of similarity between semantics expressed in an expression and the at least one emotion word;
searching from an expression database by adopting a picture similarity technology according to the first expression to obtain a second search result, wherein the second search result comprises a plurality of expressions;
analyzing the expressions in the second retrieval result by adopting an emotion understanding technology to obtain the emotion represented by each expression, and obtaining at least one second expression of which the similarity between the emotion represented in the second retrieval result and the emotion represented by the text content is greater than a second preset value;
and obtaining the at least one expression recommended to the user according to the at least one first expression and the at least one second expression.
6. The method according to any one of claims 1 to 5, wherein the analyzing the text content to obtain the emotion expressed by the text content and an emotion score comprises:
inputting the text content into an emotion classification model for emotion classification processing to obtain the emotion expressed by the text content and the emotion score;
the emotion classification model is a model which is obtained based on deep learning training and can be used for carrying out emotion classification on characters to obtain emotions and corresponding emotion scores.
7. The method according to any one of claims 1 to 5, wherein the acquiring text content input by a user comprises:
receiving the text content input by the user and sent by the user terminal through the application client;
correspondingly, the recommending the at least one expression to the user includes:
and sending the at least one expression to the user terminal through the application client for displaying.
8. An expression recommendation device, comprising:
the acquisition module is used for acquiring text content input by a user;
the processing module is used for analyzing and processing the text content to obtain the emotion and emotion score expressed by the text content;
the processing module is further used for acquiring at least one expression corresponding to the emotion when the emotion score is larger than a preset score threshold;
and the recommending module is used for recommending the at least one expression to the user.
9. The apparatus of claim 8, wherein the processing module is specifically configured to:
generalizing the emotion to obtain at least one emotion word representing the emotion;
and performing expression retrieval by adopting the at least one emotion word to obtain the at least one expression.
10. The apparatus of claim 9, wherein the at least one emotion word comprises: synonyms representing the emotion, as well as hypernyms and hyponyms thereof.
11. The apparatus of claim 9, wherein the processing module is specifically configured to:
retrieving a retrieval result from an expression database according to the at least one emotion word, wherein the retrieval result comprises a plurality of expressions;
process and score the expressions in the retrieval result using a semantic matching model, and acquire the at least one expression whose score is greater than a preset value in the retrieval result; wherein the semantic matching model is a model capable of scoring a degree of similarity between semantics expressed in the expression and the at least one emotion word.
12. The apparatus of claim 9, wherein the processing module is specifically configured to:
according to the at least one emotion word, retrieving a first retrieval result from an expression database, wherein the first retrieval result comprises a plurality of expressions;
process and score the expressions in the first retrieval result using a semantic matching model, and acquire at least one first expression whose score is greater than a first preset value in the first retrieval result; wherein the semantic matching model is a model capable of scoring a degree of similarity between semantics expressed in an expression and the at least one emotion word;
searching from an expression database by adopting a picture similarity technology according to the first expression to obtain a second search result, wherein the second search result comprises a plurality of expressions;
analyzing the expressions in the second retrieval result by adopting an emotion understanding technology to obtain the emotion represented by each expression, and obtaining at least one second expression of which the similarity between the emotion represented in the second retrieval result and the emotion represented by the text content is greater than a second preset value;
and obtaining the at least one expression recommended to the user according to the at least one first expression and the at least one second expression.
13. The apparatus according to any one of claims 8 to 12, wherein the processing module is further specifically configured to:
inputting the text content into an emotion classification model for emotion classification processing to obtain the emotion expressed by the text content and the emotion score;
the emotion classification model is a model which is obtained based on deep learning training and can be used for carrying out emotion classification on characters to obtain emotions and corresponding emotion scores.
14. The apparatus according to any one of claims 8 to 12, wherein the obtaining module is specifically configured to:
receiving the text content input by the user and sent by the user terminal through the application client;
correspondingly, the recommendation module is specifically configured to:
and sending the at least one expression to the user terminal through the application client for displaying.
15. An electronic device, comprising:
at least one processor, a memory, and a communication interface for communicating with a user terminal;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of recommending expressions of any one of claims 1 to 7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for recommending an expression according to any one of claims 1 to 7.
17. A method for processing data, comprising:
acquiring the emotion of a user expressed by text content based on the text content input by the user;
and recommending at least one expression to the user according to the emotion of the user.
CN202010067850.9A 2020-01-20 2020-01-20 Expression recommendation method, device, equipment and storage medium Active CN111291184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010067850.9A CN111291184B (en) 2020-01-20 2020-01-20 Expression recommendation method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111291184A true CN111291184A (en) 2020-06-16
CN111291184B CN111291184B (en) 2023-07-18

Family

ID=71026543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010067850.9A Active CN111291184B (en) 2020-01-20 2020-01-20 Expression recommendation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111291184B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160042359A1 (en) * 2014-08-11 2016-02-11 24/7 Customer, Inc. Methods and apparatuses for modeling customer interaction experiences
US9547763B1 (en) * 2015-03-31 2017-01-17 EMC IP Holding Company LLC Authentication using facial recognition
US20170052946A1 (en) * 2014-06-06 2017-02-23 Siyu Gu Semantic understanding based emoji input method and device
CN107423277A (en) * 2016-02-16 2017-12-01 中兴通讯股份有限公司 A kind of expression input method, device and terminal
CN108038102A (en) * 2017-12-08 2018-05-15 北京小米移动软件有限公司 Recommendation method, apparatus, terminal and the storage medium of facial expression image
CN108549681A (en) * 2018-04-03 2018-09-18 Oppo广东移动通信有限公司 Data processing method and device, electronic equipment, computer readable storage medium
US20190114689A1 (en) * 2017-10-13 2019-04-18 Midea Group Co., Ltd. Method and system for providing personalized on-location information exchange
CN110297928A (en) * 2019-07-02 2019-10-01 百度在线网络技术(北京)有限公司 Recommended method, device, equipment and the storage medium of expression picture
CN110321477A (en) * 2019-05-24 2019-10-11 平安科技(深圳)有限公司 Information recommendation method, device, terminal and storage medium
US20190325201A1 (en) * 2018-04-19 2019-10-24 Microsoft Technology Licensing, Llc Automated emotion detection and keyboard service
CN110580899A (en) * 2019-10-12 2019-12-17 上海上湖信息技术有限公司 Voice recognition method and device, storage medium and computing equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ju Chunhua et al., "A comprehensive music recommendation method incorporating user emotion factors", Journal of the China Society for Scientific and Technical Information (《情报学报》), no. 06, 24 June 2017 (2017-06-24) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148133A (en) * 2020-09-10 2020-12-29 北京百度网讯科技有限公司 Method, device, equipment and computer storage medium for determining recommended expression
CN112148133B (en) * 2020-09-10 2024-01-23 北京百度网讯科技有限公司 Method, device, equipment and computer storage medium for determining recommended expression
CN113010784A (en) * 2021-03-17 2021-06-22 北京十一贝科技有限公司 Method, apparatus, electronic device, and medium for generating prediction information
CN113010784B (en) * 2021-03-17 2024-02-06 北京十一贝科技有限公司 Method, apparatus, electronic device and medium for generating prediction information

Also Published As

Publication number Publication date
CN111291184B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN111221984B (en) Multi-mode content processing method, device, equipment and storage medium
US11714816B2 (en) Information search method and apparatus, device and storage medium
CN112507715B (en) Method, device, equipment and storage medium for determining association relation between entities
US11200269B2 (en) Method and system for highlighting answer phrases
CN111625635A (en) Question-answer processing method, language model training method, device, equipment and storage medium
EP3876114A2 (en) Method for recommending a search term, method for training a target model, apparatus for recommending a search term, apparatus for training a target model, electronic device and program product
CN111967262A (en) Method and device for determining entity tag
CN112115299A (en) Video searching method and device, recommendation method, electronic device and storage medium
US20210200813A1 (en) Human-machine interaction method, electronic device, and storage medium
CN111104514A (en) Method and device for training document label model
CN111949814A (en) Searching method, searching device, electronic equipment and storage medium
CN111651578B (en) Man-machine conversation method, device and equipment
CN111737559A (en) Resource sorting method, method for training sorting model and corresponding device
CN112507091A (en) Method, device, equipment and storage medium for retrieving information
CN112487242A (en) Method and device for identifying video, electronic equipment and readable storage medium
CN111563198B (en) Material recall method, device, equipment and storage medium
CN111310058B (en) Information theme recommendation method, device, terminal and storage medium
CN111177462B (en) Video distribution timeliness determination method and device
CN111291184B (en) Expression recommendation method, device, equipment and storage medium
CN111967599A (en) Method and device for training model, electronic equipment and readable storage medium
CN111984775A (en) Question and answer quality determination method, device, equipment and storage medium
CN113495942A (en) Method and device for pushing information
CN111523019B (en) Method, apparatus, device and storage medium for outputting information
CN112650919A (en) Entity information analysis method, apparatus, device and storage medium
CN111666417A (en) Method and device for generating synonyms, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant