CN111190493A - Expression input method, device, equipment and storage medium

Expression input method, device, equipment and storage medium

Info

Publication number
CN111190493A
Authority
CN
China
Prior art keywords
expression
user
information
session
session context
Prior art date
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Application number
CN201811356978.6A
Other languages
Chinese (zh)
Inventor
陈秋益 (Chen Qiuyi)
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date: 2018-11-15
Filing date: 2018-11-15
Publication date: 2020-05-22
Application filed by ZTE Corp
Priority to CN201811356978.6A
Priority to PCT/CN2019/117846 (published as WO2020098669A1)
Publication of CN111190493A

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G06F3/0237 Character input methods using prediction or retrieval techniques
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an expression input method, apparatus, device and storage medium, relating to the technical field of mobile terminals. The method comprises the following steps: during a user session, acquiring session context information of the session; determining, according to the session context information, a recommended expression matched with the session context information from an expression inference model; and displaying the recommended expression so that the user can select the recommended expression and send it to the user session.

Description

Expression input method, device, equipment and storage medium
Technical Field
The present invention relates to the field of mobile terminal technologies, and in particular, to a method, an apparatus, a device, and a storage medium for inputting an expression.
Background
At present, with the continuous development of communication technology and the widespread use of intelligent terminals, instant messaging software has become a basic tool for people's daily communication, and its functions and interaction modes keep being enriched. Besides text, the interaction modes include pictures, voice, video and the like. A special mode of communication is the expression: a picture used in place of written or spoken language when people communicate through instant messaging software, which can add pleasure to the communication process and enhance expressiveness.
In the prior art, when a user sends an expression through instant messaging software, the user needs to manually search for a suitable expression in a specific menu, which is cumbersome and time-consuming and degrades the user experience. Therefore, the existing way of sending expressions needs to be improved.
Disclosure of Invention
The technical problem solved by the solution provided in the embodiments of the present invention is that a suitable expression cannot be found and input simply and effectively during a user session.
The expression input method provided by an embodiment of the present invention is applied to a terminal and comprises the following steps:
during a user session, acquiring session context information of the user session;
according to the session context information, determining a recommended expression matched with the session context information from an expression inference model;
and displaying the recommended expression so that the user selects the recommended expression and then sends the recommended expression to the user session.
The expression input apparatus provided by an embodiment of the present invention comprises:
an acquisition module, configured to acquire session context information during a user session;
an inference module, configured to determine, according to the session context information, a recommended expression matched with the session context information from an expression inference model;
and a display module, configured to display the recommended expression so that the user can select the recommended expression and send it to the user session.
The expression input device provided by an embodiment of the present invention comprises: a processor, and a memory coupled to the processor; the memory stores an expression input program executable on the processor, and the expression input program, when executed by the processor, implements the steps of the expression input method provided by the embodiments of the present invention.
The computer storage medium provided by an embodiment of the present invention stores an expression input program which, when executed by a processor, implements the steps of the expression input method provided by the embodiments of the present invention.
According to the solution provided by the embodiments of the present invention, expressions can be automatically recommended to the user, from the inference model and according to the session context information, environment information and physiological information of the session, for the user to select, eliminating the cumbersome steps of searching for an expression; furthermore, the inference model can be continuously improved and optimized through self-learning, so that expressions are recommended more accurately and the user experience is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flowchart of an expression input method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an expression input apparatus according to an embodiment of the present invention;
FIG. 3 is a block diagram of an expression input system according to an embodiment of the present invention;
FIG. 4 is a flowchart of an expression recommendation method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a model learning method according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, and it should be understood that the preferred embodiments described below are only for the purpose of illustrating and explaining the present invention, and are not to be construed as limiting the present invention.
Fig. 1 is a flowchart of an expression input method according to an embodiment of the present invention. As shown in fig. 1, the method is applied to a terminal and includes:
step S101: during a user session, acquiring session context information of the user session;
step S102: according to the session context information, determining a recommended expression matched with the session context information from an expression inference model;
step S103: and displaying the recommended expression so that the user selects the recommended expression and then sends the recommended expression to the user session.
In one embodiment, the recommended expression may be displayed in any one of the following ways: the determined recommended expression may be displayed in a pop-up window, or in the dialog box, or the like. When there are multiple recommended expressions, they are arranged and displayed in descending order of their degree of match with the session context information.
In one embodiment, the session context information includes at least one of: user session context information, current environment information, user physiological information, and the like. The session context information refers to any part or all of the user session context information, current environment information, user physiological information, etc. gathered while the user chats with friends or within a social communication group.
The user session context information may include one or more of the following: session text information, session scene information, session object information, and sent-expression information. Session text information refers to the chat content between the user and friends or within a social communication group, for example a discussion of a birthday party or travel arrangements. Session scene information refers to the scene in which the session takes place, for example a one-to-one chat with a friend or a group chat with several friends. Session object information refers to whether the other side of the session is a single object or multiple objects, and how close the relationships between the session objects are (friend or non-friend, etc.). Sent-expression information refers to the record of expressions sent between the session objects; each record contains at least the following fields: expression ID, transmission object and scene, and transmission time (a sketch of such a record follows).
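For illustration only, a minimal sketch of such a sent-expression record as a data structure; the field names are assumptions, since the embodiment only specifies the three fields themselves:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SentExpressionRecord:
    """One sent-expression history entry; field names are illustrative."""
    expression_id: str       # expression ID
    target_and_scene: str    # transmission object and scene, e.g. "friend:123/group chat"
    sent_time: datetime      # transmission time

record = SentExpressionRecord("smile_01", "friend:123/group A", datetime(2018, 11, 15, 9, 30))
```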
The current environment information may include one or more of the following: geographic location, date, weather, temperature, application usage of the terminal, and the like. The current geographic location, date, weather and temperature can be acquired from other applications on the terminal or from connected external devices; the application usage of the terminal (games, video, music, shopping, etc.) is obtained by detecting which applications have been started on the terminal.
The user physiological information may include one or more of the following: the user's facial expression, heartbeat, blood pressure, temperature, and the like. The user's facial expression can be captured through the terminal camera; the heartbeat, blood pressure, temperature, etc. can be detected through an optional connected external wearable device.
In an embodiment, determining, according to the session context information, a recommended expression matched with the session context information from an expression inference model includes: respectively computing, from the session context information, a user session context feature vector S, a current environment feature vector H and a user physiological feature vector L; computing an expression recommendation vector T according to the feature vectors S, H and L; and determining, according to the expression recommendation vector T, a recommended expression matched with T from the expression inference model.
In an embodiment, computing the expression recommendation vector T according to the user session context feature vector S, the current environment feature vector H and the user physiological feature vector L includes: T = S*W1 + H*W2 + L*W3, where W1 is the weight of the user session context feature vector S, W2 is the weight of the current environment feature vector H, W3 is the weight of the user physiological feature vector L, and W1 + W2 + W3 = 1.
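As a minimal sketch of this weighted combination (assuming the three feature vectors share one dimensionality; all concrete values below are illustrative, not part of the embodiment):

```python
import numpy as np

def recommendation_vector(S: np.ndarray, H: np.ndarray, L: np.ndarray,
                          W1: float, W2: float, W3: float) -> np.ndarray:
    """Compute T = S*W1 + H*W2 + L*W3, with W1 + W2 + W3 = 1."""
    assert abs(W1 + W2 + W3 - 1.0) < 1e-9, "weights must sum to 1"
    return S * W1 + H * W2 + L * W3

S = np.array([0.2, 0.7, 0.1])
H = np.array([0.5, 0.1, 0.4])
L = np.array([0.3, 0.3, 0.4])
T = recommendation_vector(S, H, L, W1=0.5, W2=0.3, W3=0.2)
```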
In one embodiment, an expression recommendation model containing session context information and the expressions selected by users is built by continuously recording the session context information of many users together with the expressions they chose in those contexts. The model comprises a session context information table and a recommended expression table. The session context information table includes: user session context information, current environment information, and user physiological information. The recommended expression table comprises a number of expression packages, both dynamic and static; an expression package may be created by the user or acquired from the network side (an assumed sketch of these tables follows).
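Purely as an assumed sketch of the two tables (the embodiment names the tables but does not fix their columns):

```python
from dataclasses import dataclass

@dataclass
class SessionContextRow:
    """Row of the session context information table; columns assumed."""
    user_session_context: str  # user session context information
    environment: str           # current environment information
    physiology: str            # user physiological information

@dataclass
class RecommendedExpressionRow:
    """Row of the recommended expression table; columns assumed."""
    expression_id: str
    is_dynamic: bool           # dynamic vs. static expression package
    source: str                # "self-created" or "network side"
```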
In an embodiment, the method may further comprise a step of updating the expression inference model, which specifically includes: storing, in a historical database, the session context information together with the recommended or non-recommended expression the user selected after the recommended expressions were determined; when it is detected that the expression inference model needs to be updated, training the expression inference model with the historical database to generate a new expression inference model; and storing the new expression inference model while deleting the original one.
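The update step can be outlined as follows; the toy trainer is an assumption standing in for whatever training procedure an implementation actually uses:

```python
from collections import Counter

history_db = []  # (session_context_key, chosen_expression_id) pairs

def record_choice(context_key: str, expression_id: str) -> None:
    """Store the context and the expression the user actually sent,
    whether it was a recommended or a non-recommended expression."""
    history_db.append((context_key, expression_id))

def train_inference_model(history):
    """Toy trainer (assumed): map each context key to the expression
    most frequently chosen under that context."""
    per_context = {}
    for ctx, expr in history:
        per_context.setdefault(ctx, Counter())[expr] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in per_context.items()}

def update_model(_old_model):
    """Train a new model from the historical database; the caller stores
    the new model and deletes the old one."""
    return train_inference_model(history_db)
```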
Fig. 2 is a schematic diagram of an expression input apparatus according to an embodiment of the present invention. As shown in fig. 2, the apparatus includes: an acquisition module 201, an inference module 202 and a display module 203.
The acquisition module 201 is configured to acquire session context information during a user session; the inference module 202 is configured to determine, according to the session context information, a recommended expression matched with the session context information from an expression inference model; the display module 203 is configured to display the recommended expression so that the user can select it and send it to the user session.
In one embodiment, the session context information includes at least one of: user session context information, current environment information, user physiological information, and the like. The user session context information includes at least one of: session text information, session scene information, session object information and sent-expression information; the current environment information includes at least one of: geographic location, date, weather, temperature and application usage of the terminal; the user physiological information includes at least one of: the user's facial expression, heartbeat, blood pressure and temperature. In one embodiment, the inference module 202 comprises: a first calculation unit, configured to respectively compute, from the session context information, a user session context feature vector S, a current environment feature vector H and a user physiological feature vector L; a second calculation unit, configured to compute an expression recommendation vector T according to the feature vectors S, H and L; and an inference unit, configured to determine, according to the expression recommendation vector T, a recommended expression matched with T from an expression inference model.
The expression input device provided by an embodiment of the present invention comprises: a processor, and a memory coupled to the processor; the memory stores an expression input program executable on the processor, and the expression input program, when executed by the processor, implements the steps of the expression input method provided by the embodiments of the present invention.
The computer storage medium provided by an embodiment of the present invention stores an expression input program which, when executed by a processor, implements the steps of the expression input method provided by the embodiments of the present invention.
The embodiments of the present invention rely on an intelligent terminal device and instant messaging software; the intelligent terminal is battery-powered and can support power detection, information storage, geographic position detection, a communication module, a photographing module, and (optionally) connectable external wearable devices.
The embodiments of the invention are based on the following inputs:
1. user session context information (text information, contact information, session group information, etc.);
2. current user physiological information (facial expression, heartbeat, blood pressure, etc.);
3. current environment information (geographic location, weather, date, temperature, application usage records, etc.);
4. the user's history of sent expressions.
This diversity of input information helps recommend expressions to the user more accurately. The embodiments also include a self-learning function: the sent-expression records (both current and all historical data) are synthesized to continuously improve and optimize the model, matching the user's individual needs more accurately.
Fig. 3 is a block diagram of an expression input system according to an embodiment of the present invention. As shown in fig. 3, the system comprises session context detection, environment information detection, physiological information detection, an inference model, a learning model, and historical data records.
First, session context detection and recording for the instant messaging software obtains the following from the instant messaging application:
1. Session information: the chat content between the user and friends or within a social communication group;
2. Session scene and session object: the scene of each session, for example the member composition of a group and each member's relationship with the user (whether they are friends, the frequency of private sessions with the user, and so on). Suppose the relationship value between the user and a group friend n is Gn, where Gn ranges from 0 to 1: if n is not a friend, Gn = 0; if n is a friend, Gn depends on the frequency of private chats and of social-circle interaction, and the more frequent these are, the closer Gn is to 1 (see the sketch after this list);
3. Sent-expression record: a record of the user sending an expression in any session, containing at least the following fields: expression ID, transmission object and scene, and transmission time.
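As referenced in item 2 above, a possible (assumed) way to compute the relationship value Gn; the saturation scale is an assumption, since the embodiment only requires Gn in [0, 1] with more frequent interaction pushing Gn toward 1:

```python
def relationship_value(is_friend: bool, private_chats: int,
                       circle_interactions: int, saturation: int = 100) -> float:
    """Gn: 0 for a non-friend; for a friend, grows toward 1 with the
    frequency of private chats and social-circle interactions."""
    if not is_friend:
        return 0.0
    activity = private_chats + circle_interactions
    return min(activity / saturation, 1.0)

Gn = relationship_value(True, private_chats=30, circle_interactions=50)  # 0.8
```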
A concrete implementation example explains how the session context information is used for expression recommendation. In a friend group A, the model analyzes the chat content between the user and the other friends in the group, from which it can estimate the user's emotional tendency feature vector S1, the group's overall emotional atmosphere feature vector S2, the relationship vector S3 between the user and each member of the group (a normalized vector derived from G, for example the average of the member relationship values), and the normalized emotional tendency vector S4 of the expressions the user has sent in this group. From these four feature vectors, the session context information feature vector S for expression recommendation can be computed as S = S1*Ws1 + S2*Ws2 + S3*Ws3 + S4*Ws4, where Wsn is the weight of each feature vector. S is finally combined with the other features below to compute the expression recommendation vector T.
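A sketch of this weighted combination for S; the same pattern applies to the vectors H and L computed below (all vector values and weights here are illustrative):

```python
import numpy as np

def combine_features(vectors: list, weights: list) -> np.ndarray:
    """Weighted sum of feature vectors, e.g. S = S1*Ws1 + S2*Ws2 + S3*Ws3 + S4*Ws4."""
    return sum(v * w for v, w in zip(vectors, weights))

# S1: user's emotional tendency; S2: group atmosphere;
# S3: user-to-member relationship vector; S4: sent-expression tendency
S1, S2 = np.array([0.6, 0.2]), np.array([0.4, 0.4])
S3, S4 = np.array([0.8, 0.1]), np.array([0.5, 0.3])
S = combine_features([S1, S2, S3, S4], [0.4, 0.2, 0.2, 0.2])
```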
Second, environment information detection: acquiring the geographic location H1, the date (including holiday information) H2, the weather H3, the temperature H4 (optional), the terminal application usage (whether applications such as games H5, video H6, music H7 and shopping H8 are in use), and so on. From these feature vectors, the environment information feature vector for expression recommendation can be computed as H = H1*Wh1 + H2*Wh2 + H3*Wh3 + H4*Wh4 + ... + Hn*Whn, where Whn is the weight of each feature vector. H is finally combined with the other features to compute the expression recommendation vector T.
Third, physiological information detection: capturing the user's current facial expression L1 through the camera, and detecting the user's heartbeat L2, blood pressure L3, temperature L4, etc. through an optional connected external wearable device. From these feature vectors, the physiological information feature vector for expression recommendation can be computed as L = L1*Wl1 + L2*Wl2 + L3*Wl3 + L4*Wl4 + ... + Ln*Wln, where Wln is the weight of each feature vector. L is finally combined with the other features to compute the expression recommendation vector T.
Fourth, model inference: an expression recommendation vector T is computed internally from the information provided by session context detection, environment information detection and physiological information detection; the expressions the user may want to send are associated with the vector T and ranked by priority. T is computed as follows:
T = S*W1 + H*W2 + L*W3
where W1 is the weight of the session context information feature vector, W2 is the weight of the environment information feature vector, W3 is the weight of the physiological information feature vector, and W1 + W2 + W3 = 1.
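One assumed way to realize "associated with the vector T and ranked by priority" is to score each candidate expression's feature vector against T, for example by cosine similarity (the matching metric is an assumption, not fixed by the embodiment):

```python
import numpy as np

def rank_expressions(T: np.ndarray, candidates: dict) -> list:
    """Return expression IDs ordered by cosine similarity to T, best first."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return sorted(candidates, key=lambda eid: cos(T, candidates[eid]), reverse=True)

candidates = {"smile": np.array([0.9, 0.1]), "cry": np.array([0.1, 0.9])}
ranking = rank_expressions(np.array([0.8, 0.2]), candidates)  # ["smile", "cry"]
```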
Fifth, expression recommendation: presenting the recommended expressions to the user in priority order for selection;
Sixth, historical data recording: recording the data of session context detection, environment information detection and physiological information detection for use by the learning model;
Seventh, model learning: at regular intervals, training the model on the historical data records to learn which expressions the user chose under various combinations of data, and generating a new inference model after training to replace the original one. The learning model is mainly used to learn the weight values W. When a user uses the expression recommendation function for the first time, the recommendation model is initialized with a generic set of weight values. If the user does not adopt a recommended expression and instead sends a non-recommended one, the learning model learns from this behavior; after a certain period of accumulation, the model is retrained on the user's behavior, and a new model with updated W values replaces the old recommendation model. Through continuous learning and updating, the model gradually approaches the user's behavior and the recommendation accuracy improves.
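The embodiment does not fix a training algorithm for the weights W; purely as an assumed sketch, each retraining could nudge the weights toward the components that best explained the expressions the user actually sent, keeping W1 + W2 + W3 = 1:

```python
import numpy as np

def update_weights(weights: np.ndarray, component_scores: np.ndarray,
                   lr: float = 0.05) -> np.ndarray:
    """Toy update (assumed): move the weights toward a normalized target
    built from how well each of S, H, L matched the sent expression."""
    target = component_scores / component_scores.sum()
    w = (1 - lr) * weights + lr * target
    return w / w.sum()

W = np.array([0.5, 0.3, 0.2])        # generic initial W1, W2, W3
scores = np.array([0.2, 0.6, 0.2])   # per-component match of the sent expression
W = update_weights(W, scores)
```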
Fig. 4 is a flowchart of an expression recommendation method according to an embodiment of the present invention. As shown in fig. 4, the method includes:
S401: acquiring the session context information, environment information and physiological information of the instant messaging session through the capabilities provided by the device;
S402: the inference model inferring, from this information, the expressions the user may want to send;
S403: the expression recommendation interface presenting the recommended expressions for the user to select.
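Steps S401 to S403 can be read as the following orchestration; every function name here is a placeholder, not an interface defined by the embodiment:

```python
def recommend_expressions(get_context, get_environment, get_physiology,
                          infer, present):
    """S401: gather inputs; S402: infer candidates; S403: present them."""
    context = get_context()          # session context of the IM software
    environment = get_environment()  # geographic location, date, weather, ...
    physiology = get_physiology()    # facial expression, heartbeat, ...
    candidates = infer(context, environment, physiology)  # inference model
    present(candidates)              # expression recommendation interface

# Usage with stub providers:
recommend_expressions(lambda: "group A chat", lambda: "rainy evening",
                      lambda: "smiling",
                      lambda c, e, p: ["smile", "thumbs_up"], print)
```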
Fig. 5 is a flowchart of a model learning method according to an embodiment of the present invention. As shown in fig. 5, the method includes:
S501: recording session context information, environment information and physiological information;
S502: training the learning model on the historical data records to generate a new inference model;
S503: replacing the original inference model with the new inference model.
According to the solution provided by the embodiments of the present invention, expressions can be automatically recommended to the user, from the inference model and according to the session context information, environment information and physiological information of the session, for the user to select, eliminating the cumbersome steps of searching for an expression; furthermore, the inference model can be continuously improved and optimized through self-learning, so that expressions are recommended more accurately and the user experience is improved.
Although the present invention has been described in detail hereinabove, the present invention is not limited thereto, and various modifications can be made by those skilled in the art in light of the principle of the present invention. Thus, modifications made in accordance with the principles of the present invention should be understood to fall within the scope of the present invention.

Claims (10)

1. An expression input method, applied to a terminal, comprising the following steps:
during a user session, acquiring session context information of the user session;
according to the session context information, determining a recommended expression matched with the session context information from an expression inference model;
and displaying the recommended expression so that the user selects the recommended expression and then sends the recommended expression to the user session.
2. The method according to claim 1, wherein the session context information comprises at least one of: user session context information, current environment information, and user physiological information;
wherein the user session context information comprises at least one of: session text information, session scene information, session object information, and sent-expression information; the current environment information comprises at least one of: geographic location, date, weather, temperature, and application usage of the terminal; the user physiological information comprises at least one of: user facial expression, user heartbeat, user blood pressure, and user temperature.
3. The method according to claim 1, wherein determining, according to the session context information, a recommended expression matched with the session context information from an expression inference model comprises:
respectively computing, from the session context information, a user session context feature vector S, a current environment feature vector H and a user physiological feature vector L;
computing an expression recommendation vector T according to the user session context feature vector S, the current environment feature vector H and the user physiological feature vector L;
and determining, according to the expression recommendation vector T, a recommended expression matched with the expression recommendation vector T from the expression inference model.
4. The method according to claim 3, wherein computing the expression recommendation vector T according to the user session context feature vector S, the current environment feature vector H and the user physiological feature vector L comprises:
T = S*W1 + H*W2 + L*W3;
wherein W1 is the weight of the user session context feature vector S, W2 is the weight of the current environment feature vector H, W3 is the weight of the user physiological feature vector L, and W1 + W2 + W3 = 1.
5. The method according to any one of claims 1 to 4, further comprising a step of updating the expression inference model, which specifically comprises:
storing, in a historical database, the session context information and the recommended or non-recommended expression selected by the user after the recommended expression is determined;
when it is detected that the expression inference model needs to be updated, training the expression inference model with the historical database to generate a new expression inference model;
and storing the new expression inference model and deleting the original expression inference model.
6. An expression input apparatus, comprising:
an acquisition module, configured to acquire session context information during a user session;
an inference module, configured to determine, according to the session context information, a recommended expression matched with the session context information from an expression inference model;
and a display module, configured to display the recommended expression so that the user can select the recommended expression and send it to the user session.
7. The apparatus according to claim 6, wherein the session context information comprises at least one of: user session context information, current environment information, and user physiological information;
wherein the user session context information comprises at least one of: session text information, session scene information, session object information, and sent-expression information; the current environment information comprises at least one of: geographic location, date, weather, temperature, and application usage of the terminal; the user physiological information comprises at least one of: user facial expression, user heartbeat, user blood pressure, and user temperature.
8. The apparatus according to claim 6, wherein the inference module comprises:
a first calculation unit, configured to respectively compute, from the session context information, a user session context feature vector S, a current environment feature vector H and a user physiological feature vector L;
a second calculation unit, configured to compute an expression recommendation vector T according to the user session context feature vector S, the current environment feature vector H and the user physiological feature vector L;
and an inference unit, configured to determine, according to the expression recommendation vector T, a recommended expression matched with the expression recommendation vector T from an expression inference model.
9. An expression input device, comprising: a processor, and a memory coupled to the processor; wherein the memory stores an expression input program executable on the processor, and the expression input program, when executed by the processor, implements the steps of the expression input method according to any one of claims 1 to 5.
10. A computer storage medium, wherein the storage medium stores an expression input program which, when executed by a processor, implements the steps of the expression input method according to any one of claims 1 to 5.
CN201811356978.6A (filed 2018-11-15): Expression input method, device, equipment and storage medium. Status: Pending. Published as CN111190493A.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811356978.6A CN111190493A (en) 2018-11-15 2018-11-15 Expression input method, device, equipment and storage medium
PCT/CN2019/117846 WO2020098669A1 (en) 2018-11-15 2019-11-13 Expression input method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811356978.6A CN111190493A (en) 2018-11-15 2018-11-15 Expression input method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111190493A 2020-05-22

Family

ID=70707043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811356978.6A Pending CN111190493A (en) 2018-11-15 2018-11-15 Expression input method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111190493A (en)
WO (1) WO2020098669A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511713A (en) * 2022-04-20 2022-05-17 威海经济技术开发区天智创新技术研究院 Image-based prediction method and device and server

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111726843B (en) * 2020-05-29 2023-11-03 新华三技术有限公司成都分公司 Method for establishing session, equipment and storage medium


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050234727A1 (en) * 2001-07-03 2005-10-20 Leo Chiu Method and apparatus for adapting a voice extensible markup language-enabled voice system for natural speech recognition and system response
CN106789543A (en) * 2015-11-20 2017-05-31 腾讯科技(深圳)有限公司 The method and apparatus that facial expression image sends are realized in session
WO2016197767A2 (en) * 2016-02-16 2016-12-15 中兴通讯股份有限公司 Method and device for inputting expression, terminal, and computer readable storage medium
CN107423277A (en) * 2016-02-16 2017-12-01 中兴通讯股份有限公司 A kind of expression input method, device and terminal
CN106293120A (en) * 2016-07-29 2017-01-04 维沃移动通信有限公司 Expression input method and mobile terminal
CN107145270A (en) * 2017-04-25 2017-09-08 北京小米移动软件有限公司 Emotion icons sort method and device
CN107634901A (en) * 2017-09-19 2018-01-26 广东小天才科技有限公司 Method for pushing, pusher and the terminal device of session expression
CN107729320A (en) * 2017-10-19 2018-02-23 西北大学 A kind of emoticon based on Time-Series analysis user conversation emotion trend recommends method
CN107784114A (en) * 2017-11-09 2018-03-09 广东欧珀移动通信有限公司 Recommendation method, apparatus, terminal and the storage medium of facial expression image


Also Published As

Publication number Publication date
WO2020098669A1 (en) 2020-05-22


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication
Application publication date: 2020-05-22