CN108255316B - Method for dynamically adjusting emoticons, electronic device and computer-readable storage medium - Google Patents


Info

Publication number
CN108255316B
CN108255316B (application CN201810064237.4A)
Authority
CN
China
Prior art keywords
emoticons
conversation
context
session
description information
Prior art date
Legal status
Active
Application number
CN201810064237.4A
Other languages
Chinese (zh)
Other versions
CN108255316A (en)
Inventor
张烨
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority claimed from CN201810064237.4A
Publication of CN108255316A
Application granted
Publication of CN108255316B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method for dynamically adjusting emoticons, an electronic device, and a computer-readable storage medium. The method includes the following steps: analyzing the description information of the current session in an instant messaging client to determine the context of the current session; when an emoticon input operation is triggered, selecting a target emoticon set corresponding to the context of the current session from multiple groups of alternative emoticon sets; and outputting the emoticons in the target set in preference to the emoticons in the non-target sets. The method, the electronic device, and the computer-readable storage medium simplify emoticon input, improve input convenience, and make emoticon adjustment more intelligent.

Description

Method for dynamically adjusting emoticons, electronic device and computer-readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method for dynamically adjusting an emoticon, an electronic device, and a computer-readable storage medium.
Background
With the wide adoption of smart mobile terminals such as smartphones and tablet computers, instant messaging applications have become ubiquitous on such devices. While using an instant messaging application, users often send emoticons to express their mood at the moment. However, emoticon libraries are usually large, and emoticons are generally ordered by time, so suitable emoticons end up scattered across many pages. A user often has to page through the library several times to find an appropriate emoticon, which makes the operation cumbersome and not very intelligent.
Disclosure of Invention
The embodiments of the present application provide a method for dynamically adjusting emoticons, an electronic device, and a computer-readable storage medium, which simplify emoticon input, improve input convenience, and make emoticon adjustment more intelligent.
A first aspect of the embodiments of the present application provides a method for dynamically adjusting emoticons, applied to an electronic device. The method includes: analyzing the description information of the current session in an instant messaging client to determine the context of the current session; when an emoticon input operation is triggered, selecting a target emoticon set corresponding to the context of the current session from multiple groups of alternative emoticon sets; and outputting the emoticons in the target set in preference to the emoticons in the non-target sets.
A second aspect of the embodiments of the present application provides an electronic device, including: a context module, configured to analyze the description information of the current session in an instant messaging client and determine the context of the current session; an obtaining module, configured to select a target emoticon set corresponding to the context of the current session from multiple groups of alternative emoticon sets when an emoticon input operation is triggered; and an output module, configured to output the emoticons in the target set in preference to the emoticons in the non-target sets.
A third aspect of the embodiments of the present application provides an electronic apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the method for dynamically adjusting emoticons provided in the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method for dynamically adjusting emoticons provided in the first aspect of the embodiments of the present application.
In these embodiments, while a user converses through an instant messaging client, the description information of the session is acquired and analyzed, and the context of the current session is determined from the analysis result. Appropriate emoticons are then matched to that context and output in preference to other emoticons, so that the output emoticons better fit the user's actual needs. This reduces the number of pages the user has to turn to find an emoticon, simplifies emoticon input, improves input convenience, and makes emoticon adjustment more intelligent.
Drawings
Fig. 1 is an application environment diagram of a method for dynamically adjusting an emoticon according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a method for dynamically adjusting an emoticon according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of a method for dynamically adjusting an emoticon according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to another embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, features, and advantages of the present application clearer and easier to understand, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Please refer to fig. 1, which is an application environment diagram of a method for dynamically adjusting emoticons according to an embodiment of the present application. As shown in fig. 1, the electronic device 100 can exchange data with the electronic device 200 over a wireless or wired network. When the user of the electronic device 100 is in an instant messaging session with the user of the electronic device 200 and triggers an emoticon input operation, the order of the alternative emoticons can be automatically adjusted and output by the method for dynamically adjusting emoticons provided in the embodiments described below.
Please refer to fig. 2, which is a schematic flow chart of an implementation of a method for dynamically adjusting emoticons according to an embodiment of the present application. The method is applicable to the electronic device in fig. 1, where the electronic device supports instant messaging sessions and may be an intelligent mobile terminal capable of mobile data processing, such as a smartphone, a tablet computer, or a portable computer, or another non-mobile data-processing electronic terminal. As shown in fig. 2, the method mainly includes the following steps:
201. Analyze the description information of the current session in the instant messaging client to determine the context of the current session;
the electronic terminal runs an instant messaging client program, such as WeChat, QQ and the like. When a user of an electronic terminal is in an instant messaging session with an opposite-end user, description information of the current session is acquired and analyzed through Artificial Intelligence (AI) to determine the context of the current session. The description information of the session may include, but is not limited to: content information of the session and auxiliary description information of the session. The auxiliary description information of the session includes, for example: the identity of the parties to the session, the time the session occurred, etc. Context refers to a language context, such as: more serious work sessions between the top and bottom levels, conversations between close friends, conversations between parents, and the like.
202. When an emoticon input operation is triggered, select a target emoticon set corresponding to the context of the current session from multiple groups of alternative emoticon sets;
Triggering an emoticon input operation means that the operation about to occur is emoticon input. Trigger conditions include: clicking or pressing the message input box, clicking or pressing the emoticon input shortcut icon, moving the cursor or a finger to the display area of the input box or the shortcut icon, pressing an emoticon input shortcut key, and so on.
The electronic device is preset with correspondences between multiple groups of alternative emoticon sets and different contexts. An event listener is also preset in the electronic device to monitor events occurring in the interactive interface of the instant messaging client, such as button click events and cursor movement events. When the listener detects an event that triggers an emoticon input operation, the target emoticon set corresponding to the context of the current session is obtained from the preset alternative emoticon sets. Each emoticon set contains at least one emoticon.
Optionally, the correspondences may instead be stored on a server, and the electronic device obtains the target emoticon set corresponding to the context of the current session through the server.
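The preset correspondence and event-listener behavior described above can be sketched as follows (event names, contexts, and set contents are all illustrative assumptions, not from the patent):

```python
# Hypothetical sketch of step 202: a preset mapping from contexts to
# alternative emoticon sets, consulted when an input-trigger event fires.
ALTERNATIVE_SETS = {
    "work": ["🙂", "👍", "🤝"],
    "close_friends": ["😂", "🤣", "😜"],
    "family": ["❤️", "🤗", "🎂"],
}

# Events taken to signal an upcoming emoticon input (illustrative names).
TRIGGER_EVENTS = {"input_box_clicked", "emoji_icon_clicked",
                  "cursor_over_input_box", "emoji_shortcut_pressed"}

def on_event(event, current_context):
    """Return the target emoticon set only for events that trigger an
    emoticon input operation; all other events are ignored."""
    if event not in TRIGGER_EVENTS:
        return None
    return ALTERNATIVE_SETS.get(current_context, [])
```

In the server-side variant, `ALTERNATIVE_SETS.get(...)` would be replaced by a request to the server holding the correspondences.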
203. Output the emoticons in the target emoticon set in preference to the emoticons in the non-target sets.
Specifically, the output priority of the emoticons in the target set is raised. When emoticons are output, those in the emoticon library are output in descending order of priority, so that the emoticons in the target set appear before those in the non-target sets.
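A minimal sketch of this priority reordering, under the assumption that the library is a flat list and raising priority simply moves the target-set emoticons to the front (names are illustrative):

```python
# Hypothetical sketch of step 203: output emoticons in descending
# priority, target-set emoticons first.
def order_library(library, target_set):
    """Return the library reordered so target-set emoticons come first,
    preserving the library's relative order within each group."""
    target_members = set(target_set)
    target = [e for e in library if e in target_members]
    others = [e for e in library if e not in target_members]
    return target + others
```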
Optionally, the description information of the current session is acquired and analyzed in real time, or at preset intervals, during the session, and the context of the current session is dynamically updated according to the analysis result. The emoticon output thus follows the development of the conversation, making the result more intelligent.
With the method for dynamically adjusting emoticons provided in this embodiment, the description information of the session is acquired and analyzed while the user converses through the instant messaging client, the context of the current session is determined from the analysis result, and appropriate emoticons are then matched to that context and output in preference to other emoticons. The output emoticons therefore better fit the user's actual needs, the number of pages the user has to turn to find an emoticon is reduced, emoticon input is simplified, input convenience is improved, and emoticon adjustment becomes more intelligent.
Please refer to fig. 3, which is a schematic flow chart of an implementation of a method for dynamically adjusting emoticons according to another embodiment of the present application. The method is applicable to the electronic device in fig. 1, where the electronic device supports instant messaging sessions and may be an intelligent mobile terminal capable of mobile data processing, such as a smartphone, a tablet computer, or a portable computer, or another non-mobile data-processing electronic terminal. As shown in fig. 3, the method mainly includes the following steps:
301. Analyze the description information of historical sessions in the instant messaging client to obtain a plurality of different alternative contexts;
the electronic terminal runs an instant messaging client program, such as WeChat, QQ and the like. Periodically or whenever a new session is over, the description information for the past session (or chat) is obtained and analyzed by the AI to derive a plurality of different alternative contexts. The description information of the session may include, but is not limited to: content information of the session and auxiliary description information of the session. The auxiliary description information of the session includes, for example: the identity of the parties to the session, the time the session occurred, etc. The identity of the party to the session may be determined from the tags of the session objects, such as: friends, dad, boss, XXX manager, and so on. Context refers to a language context, such as: more serious work sessions between the top and bottom levels, conversations between close friends, conversations between parents, and the like.
Optionally, the description information of each historical session is analyzed to obtain that session's scene features, the participants' mood features, and the conversation's atmosphere features. These features are then classified, with the session as the unit, to obtain a plurality of different alternative contexts. In practice, the description information can be fed into a preset learning model that extracts and analyzes its keywords, yielding the scene, mood, and atmosphere features of each historical session; the features are then grouped by similarity, producing a plurality of alternative contexts together with each context's scene, mood, and atmosphere features. Scenes include, for example: work, family gatherings, and dates. Moods include, for example: whether the parties are happy, angry, or frustrated. Conversation atmospheres include, for example: serious, heated (both parties send messages very frequently), ambiguous, and so on.
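The patent leaves the learning model and the similarity grouping unspecified. The sketch below assumes, purely for illustration, that each historical session has already been reduced to a (scene, mood, atmosphere) feature tuple and that identical tuples form one alternative context:

```python
# Hypothetical sketch of step 301: grouping historical sessions into
# alternative contexts by their extracted (scene, mood, atmosphere)
# features. Identical tuples are treated as the same context; a real
# system would use a learned similarity measure instead.
def cluster_sessions(session_features):
    """Map each distinct feature tuple to the list of session ids that
    produced it; each distinct tuple is one alternative context."""
    contexts = {}
    for session_id, features in session_features.items():
        contexts.setdefault(features, []).append(session_id)
    return contexts
```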
302. In response to a setting operation by the user, establish correspondences between emoticons in the emoticon library and the alternative context the setting operation points to, obtaining multiple groups of alternative emoticon sets;
specifically, the description information of the multiple candidate contexts is output on a preset setting interface, and in response to the setting operation of the user on the setting interface, the expression in the expression library and the context pointed by the setting operation in the multiple candidate contexts are related to each other, so that multiple groups of candidate expression sets corresponding to the multiple candidate contexts are obtained.
Optionally, in another embodiment, after the description information of the historical sessions has been analyzed into a plurality of different alternative contexts, the emoticons that appeared in each alternative context can be extracted. An alternative emoticon set is then generated for each alternative context, containing the emoticons that appeared in it.
Further, after the alternative emoticon sets corresponding to the alternative contexts are generated, the emoticons in each set are sorted by their degree of match with the corresponding context: the higher the matching degree, the earlier the emoticon is ranked. In practice, the matching degree can be represented by the frequency of occurrence; the more often an emoticon appears in the same context, the higher its matching degree.
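Taking occurrence frequency as the matching-degree proxy, the within-set ranking can be sketched as follows (the usage history passed in is an illustrative assumption):

```python
from collections import Counter

# Hypothetical sketch of the ranking step: emoticons that occurred more
# often in a given context rank earlier within its alternative set.
def rank_set(emoticons_used_in_context):
    """Count how often each emoticon appeared in the context and return
    them sorted by descending frequency (the frequency stands in for the
    matching degree)."""
    counts = Counter(emoticons_used_in_context)
    return [emoticon for emoticon, _ in counts.most_common()]
```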
303. Analyze the description information of the current session in the instant messaging client to determine the context of the current session;
Specifically, the description information of the current session is analyzed to obtain the session's scene features, the participants' mood features, and the conversation's atmosphere features. These are matched against the preset scene, mood, and atmosphere features of the different alternative contexts, and the context with the highest matching degree is taken as the context of the current session.
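A hypothetical sketch of this highest-match selection, assuming each alternative context is summarized by a (scene, mood, atmosphere) tuple and the matching degree is a simple field-by-field comparison (the scoring rule is an assumption, not from the patent):

```python
# Hypothetical sketch of step 303: match the current session's features
# against each preset alternative context and pick the best match.
def match_context(current_features, candidate_contexts):
    """Score each candidate by how many of its (scene, mood, atmosphere)
    fields equal the current session's; return the best candidate name."""
    def score(name):
        return sum(a == b for a, b in
                   zip(current_features, candidate_contexts[name]))
    return max(candidate_contexts, key=score)
```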
304. When an emoticon input operation is triggered, select a target emoticon set corresponding to the context of the current session from multiple groups of alternative emoticon sets;
Triggering an emoticon input operation means that the operation about to occur is emoticon input. Trigger conditions include: clicking or pressing the message input box, clicking or pressing the emoticon input shortcut icon, moving the cursor or a finger to the display area of the input box or the shortcut icon, pressing an emoticon input shortcut key, and so on.
The electronic device is preset with correspondences between multiple alternative emoticon sets and different contexts. An event listener is also preset in the electronic device to monitor events occurring in the interactive interface of the instant messaging client, such as button click events and cursor movement events. When the listener detects an event that triggers an emoticon input operation, the target emoticon set corresponding to the context of the current session is obtained from the preset alternative emoticon sets according to the correspondences between the multiple alternative contexts and the multiple groups of alternative emoticon sets. Each emoticon set contains at least one emoticon.
Optionally, the correspondences may instead be stored on a server, and the electronic device obtains the target emoticon set corresponding to the context of the current session through the server.
305. Output the emoticons in the target emoticon set in preference to the emoticons in the non-target sets.
Specifically, the output priority of the emoticons in the target set is raised. When emoticons are output, those in the emoticon library are output in descending order of priority, so that the emoticons in the target set appear before those in the non-target sets.
Optionally, the description information of the current session is acquired and analyzed in real time, or at preset intervals, during the session, and the context of the current session is dynamically updated according to the analysis result. The emoticon output thus follows the development of the conversation, making the result more intelligent.
It can be understood that, once the alternative emoticon sets have been generated and the emoticons in each set sorted by their match with the corresponding alternative context, the emoticons in the target set are output in that ranked order, ahead of the emoticons in the non-target sets.
With the method for dynamically adjusting emoticons provided in this embodiment, the description information of the session is acquired and analyzed while the user converses through the instant messaging client, the context of the current session is determined from the analysis result, and appropriate emoticons are then matched to that context and output in preference to other emoticons. The output emoticons therefore better fit the user's actual needs, the number of pages the user has to turn to find an emoticon is reduced, emoticon input is simplified, input convenience is improved, and emoticon adjustment becomes more intelligent.
Please refer to fig. 4, which is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device can be used for implementing the method for dynamically adjusting the emoticons provided by the embodiment shown in fig. 2. As shown in fig. 4, the electronic device mainly includes:
a context module 401, an acquisition module 402 and an output module 403;
a context module 401, configured to analyze description information of a current session in an instant messaging client, and determine a context of the current session;
an obtaining module 402, configured to select a target emoticon set corresponding to the context of the current session from multiple groups of alternative emoticon sets when an emoticon input operation is triggered;
and an output module 403, configured to output the emoticons in the target emoticon set in preference to the emoticons in the non-target sets.
It should be noted that the division of functional modules in the electronic device of fig. 4 is only an example. In practice, the above functions may be assigned to different functional modules as needed, for example to match the configuration of the corresponding hardware or to ease software implementation; that is, the internal structure of the electronic device is divided into different functional modules that together complete all or part of the functions described above. A functional module in this embodiment may be implemented by corresponding hardware, or by hardware executing corresponding software. The same principle applies to the various embodiments provided in this specification and is not repeated below.
For a specific process of each function module in the electronic device provided in this embodiment to implement each function, please refer to the specific content described in the embodiment shown in fig. 2, which is not described herein again.
With the electronic device provided in this embodiment, the description information of the session is acquired and analyzed while the user converses through the instant messaging client, the context of the current session is determined from the analysis result, and appropriate emoticons are then matched to that context and output in preference to other emoticons. The output emoticons therefore better fit the user's actual needs, the number of pages the user has to turn to find an emoticon is reduced, emoticon input is simplified, input convenience is improved, and emoticon adjustment becomes more intelligent.
Please refer to fig. 5, which is a schematic structural diagram of an electronic device according to another embodiment of the present application. The electronic device can be used for implementing the method for dynamically adjusting the emoticons provided by the embodiments shown in fig. 2 and 3. Unlike the electronic device shown in fig. 4, in the present embodiment,
Further, the context module 401 is further configured to analyze the description information of historical sessions to obtain a plurality of different alternative contexts;
the apparatus, still further includes:
an extracting module 501, configured to extract the emoticons appearing in each alternative context;
a generating module 502, configured to generate an alternative emoticon set corresponding to each alternative context, where each set contains the emoticons that appeared in the corresponding alternative context.
The context module 401 further includes:
an analysis submodule 4011, configured to analyze the description information of each historical session to obtain its scene features, the participants' mood features, and the conversation's atmosphere features;
and a classifying submodule 4012, configured to classify the obtained scene, mood, and atmosphere features to obtain a plurality of different alternative contexts.
Further, the apparatus further comprises:
a sorting module 503, configured to sort the emoticons in each alternative emoticon set by their degree of match with the corresponding alternative context, where a higher matching degree ranks earlier.
Further, the output module 403 is further configured to output the emoticons in the target emoticon set, in ranked order, in preference to the emoticons in the non-target sets.
Further, the context module 401 is further configured to analyze the description information of the historical session to obtain a plurality of different alternative contexts.
The apparatus may further comprise:
an establishing module 504, configured to, in response to a setting operation by the user, establish correspondences between emoticons in the emoticon library and the alternative context the setting operation points to, obtaining multiple groups of alternative emoticon sets.
Further, the analysis sub-module 4011 is configured to analyze the description information of the current session to obtain the session's scene features, the participants' mood features, and the conversation's atmosphere features.
Further, the apparatus further comprises:
a determining sub-module 4013, configured to match the obtained scene, mood, and atmosphere features of the current session against the preset features of the different alternative contexts, and to determine the context with the highest matching degree as the context of the current session.
For a specific process of each function module in the electronic device provided in this embodiment to implement each function, please refer to the specific contents described in the embodiments shown in fig. 2 to fig. 4, which is not described herein again.
With the electronic device provided in this embodiment, the description information of the session is acquired and analyzed while the user converses through the instant messaging client, the context of the current session is determined from the analysis result, and appropriate emoticons are then matched to that context and output in preference to other emoticons. The output emoticons therefore better fit the user's actual needs, the number of pages the user has to turn to find an emoticon is reduced, emoticon input is simplified, input convenience is improved, and emoticon adjustment becomes more intelligent.
Referring to fig. 6, fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
The electronic apparatus described in this embodiment includes:
a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602. When the processor 602 executes the computer program, it implements the method for dynamically adjusting emoticons described in the embodiments of fig. 2 and fig. 3.
Further, the electronic device further includes:
at least one input device 603 and at least one output device 604.
The memory 601, the processor 602, the input device 603, and the output device 604 are connected by a bus 605.
The input device 603 may be a camera, a touch panel, a physical button, a mouse, or the like. The output device 604 may be embodied as a display screen.
The memory 601 may be a high-speed random access memory (RAM) or a non-volatile memory, such as disk storage. The memory 601 is used to store a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a computer-readable storage medium, which may be provided in the electronic device of the foregoing embodiments and may be the memory in the embodiment shown in fig. 6. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the method for dynamically adjusting emoticons described in the embodiments of fig. 2 and fig. 3. Further, the computer-readable storage medium may be any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed electronic device and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
The integrated module, if implemented as a software functional module and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present application that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in a software product. The software product is stored in a readable storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
It should be noted that, for simplicity, the above method embodiments are described as a series of combinations of acts, but those skilled in the art will understand that the present application is not limited by the described order of acts, since some steps may, according to the present application, be performed in other orders or simultaneously. Those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The method, electronic device, and computer-readable storage medium for dynamically adjusting emoticons have been described above. Those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the embodiments of the present application; in summary, the content of this specification should not be construed as limiting the present application.

Claims (13)

1. A method for dynamically adjusting emoticons, applied to an electronic device, characterized in that the method comprises:
analyzing the description information of the current session in the instant messaging client, and determining the context of the current session, wherein the description information of the session comprises the content information of the session and the auxiliary description information of the session, and the auxiliary description information of the session comprises the identity of a session party and the time when the session occurs;
when an emoticon input operation is triggered, selecting a target emoticon set corresponding to the context of the current conversation from multiple groups of candidate emoticon sets;
and outputting the emoticons in the target emoticon set in preference to the emoticons in the other non-target emoticon sets.
2. The method of dynamically adjusting emoticons according to claim 1, wherein, before analyzing the description information of the current session in the instant messaging client and determining the context of the current session, the method further comprises:
analyzing the description information of historical conversations to obtain a plurality of different candidate contexts;
extracting the emoticons appearing in each of the candidate contexts respectively;
and generating a candidate emoticon set corresponding to each of the candidate contexts, wherein each candidate emoticon set comprises the emoticons appearing in the corresponding candidate context.
3. The method of dynamically adjusting emoticons according to claim 2, wherein analyzing the description information of historical conversations to obtain a plurality of different candidate contexts comprises:
analyzing the description information of each historical conversation to obtain the scene characteristics of each historical conversation, the mood characteristics of the conversation participants, and the atmosphere characteristics of the conversation;
and classifying the obtained scene characteristics, mood characteristics, and atmosphere characteristics to obtain a plurality of different candidate contexts.
4. The method of dynamically adjusting emoticons according to claim 2, wherein, after generating the candidate emoticon set corresponding to each candidate context, the method further comprises:
sorting the emoticons in each candidate emoticon set according to the matching degree between the emoticons and the corresponding candidate context, wherein the higher the matching degree, the higher the emoticon is ranked;
then, outputting the emoticons in the target emoticon set in preference to the emoticons in the other non-target emoticon sets comprises:
outputting the emoticons in the target emoticon set, in ranking order, in preference to the emoticons in the other non-target emoticon sets.
5. The method of dynamically adjusting emoticons according to claim 1, wherein, before analyzing the description information of the current session in the instant messaging client and determining the context of the current session, the method further comprises:
analyzing the description information of historical conversations to obtain a plurality of different candidate contexts;
and, in response to a setting operation of the user, establishing a correspondence between an emoticon in the emoticon library and the context, among the multiple candidate contexts, to which the setting operation points, to obtain multiple groups of candidate emoticon sets.
6. The method of dynamically adjusting emoticons according to claim 1, wherein analyzing the description information of the current session in the instant messaging client to determine the context of the current session comprises:
analyzing the description information of the current conversation to obtain the scene characteristics of the current conversation, the mood characteristics of the conversation participants, and the atmosphere characteristics of the conversation;
and matching the obtained scene characteristics, mood characteristics, and atmosphere characteristics of the current conversation against the preset scene characteristics, mood characteristics, and atmosphere characteristics of different candidate contexts, and determining the context with the highest matching degree as the context of the current conversation.
7. An electronic device, comprising:
a context module, configured to analyze the description information of the current session in the instant messaging client and determine the context of the current session, wherein the description information of the session comprises the content information of the session and the auxiliary description information of the session, and the auxiliary description information of the session comprises the identity of a session party and the time when the session occurs;
an obtaining module, configured to select, when an emoticon input operation is triggered, a target emoticon set corresponding to the context of the current conversation from multiple groups of candidate emoticon sets;
and an output module, configured to output the emoticons in the target emoticon set in preference to the emoticons in the other non-target emoticon sets.
8. The electronic device of claim 7,
wherein the context module is further configured to analyze the description information of historical conversations to obtain a plurality of different candidate contexts;
the apparatus further comprises:
an extraction module, configured to extract the emoticons appearing in each of the candidate contexts respectively;
and a generating module, configured to generate a candidate emoticon set corresponding to each candidate context, wherein each candidate emoticon set comprises the emoticons appearing in the corresponding candidate context.
9. The electronic device of claim 8, wherein the context module further comprises:
an analysis sub-module, configured to analyze the description information of each historical conversation to obtain the scene characteristics of each historical conversation, the mood characteristics of the conversation participants, and the atmosphere characteristics of the conversation;
and a classification sub-module, configured to classify the obtained scene characteristics, mood characteristics, and atmosphere characteristics to obtain a plurality of different candidate contexts;
the apparatus further comprises:
a sorting module, configured to sort the emoticons in each candidate emoticon set according to the matching degree between the emoticons and the corresponding candidate context, wherein the higher the matching degree, the higher the emoticon is ranked;
and the output module is further configured to output the emoticons in the target emoticon set, in ranking order, in preference to the emoticons in the other non-target emoticon sets.
10. The electronic device of claim 7, wherein the context module is further configured to analyze the description information of historical conversations to obtain a plurality of different candidate contexts;
the apparatus further comprises:
an establishing module, configured to, in response to a setting operation of the user, establish a correspondence between an emoticon in the emoticon library and the context, among the multiple candidate contexts, to which the setting operation points, to obtain multiple groups of candidate emoticon sets.
11. The electronic device of claim 9,
wherein the analysis sub-module is further configured to analyze the description information of the current conversation to obtain the scene characteristics of the current conversation, the mood characteristics of the conversation participants, and the atmosphere characteristics of the conversation;
the apparatus further comprises:
a determining sub-module, configured to match the obtained scene, mood, and atmosphere characteristics of the current conversation against the preset scene, mood, and atmosphere characteristics of different candidate contexts, and to determine the context with the highest matching degree as the context of the current conversation.
12. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of dynamically adjusting emoticons according to any one of claims 1 to 6 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of dynamically adjusting emoticons according to any one of claims 1 to 6.
CN201810064237.4A 2018-01-23 2018-01-23 Method for dynamically adjusting emoticons, electronic device and computer-readable storage medium Active CN108255316B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810064237.4A CN108255316B (en) 2018-01-23 2018-01-23 Method for dynamically adjusting emoticons, electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810064237.4A CN108255316B (en) 2018-01-23 2018-01-23 Method for dynamically adjusting emoticons, electronic device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108255316A CN108255316A (en) 2018-07-06
CN108255316B true CN108255316B (en) 2021-09-10

Family

ID=62742441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810064237.4A Active CN108255316B (en) 2018-01-23 2018-01-23 Method for dynamically adjusting emoticons, electronic device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN108255316B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110858099B (en) * 2018-08-20 2024-04-12 北京搜狗科技发展有限公司 Candidate word generation method and device
CN109120866B (en) * 2018-09-27 2020-04-03 腾讯科技(深圳)有限公司 Dynamic expression generation method and device, computer readable storage medium and computer equipment
CN109842546B (en) * 2018-12-25 2021-09-28 创新先进技术有限公司 Conversation expression processing method and device
CN110471589A (en) * 2019-07-29 2019-11-19 维沃移动通信有限公司 Information display method and terminal device
CN110609723B (en) * 2019-08-21 2021-08-24 维沃移动通信有限公司 Display control method and terminal equipment
CN110674330B (en) * 2019-09-30 2024-01-09 北京达佳互联信息技术有限公司 Expression management method and device, electronic equipment and storage medium
CN110717109B (en) * 2019-09-30 2024-03-15 北京达佳互联信息技术有限公司 Method, device, electronic equipment and storage medium for recommending data
CN110971424B (en) * 2019-11-29 2021-10-29 广州市百果园信息技术有限公司 Message processing method, device and system, computer equipment and storage medium
CN114693827A (en) * 2022-04-07 2022-07-01 深圳云之家网络有限公司 Expression generation method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933113A (en) * 2014-06-06 2015-09-23 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
CN106021599A (en) * 2016-06-08 2016-10-12 维沃移动通信有限公司 Emotion icon recommending method and mobile terminal
CN106484139A (en) * 2016-10-19 2017-03-08 北京新美互通科技有限公司 Emoticon recommends method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933113A (en) * 2014-06-06 2015-09-23 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
US20170052946A1 (en) * 2014-06-06 2017-02-23 Siyu Gu Semantic understanding based emoji input method and device
CN106021599A (en) * 2016-06-08 2016-10-12 维沃移动通信有限公司 Emotion icon recommending method and mobile terminal
CN106484139A (en) * 2016-10-19 2017-03-08 北京新美互通科技有限公司 Emoticon recommends method and device

Also Published As

Publication number Publication date
CN108255316A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
CN108255316B (en) Method for dynamically adjusting emoticons, electronic device and computer-readable storage medium
JP6431119B2 (en) System and method for input assist control by sliding operation in portable terminal equipment
CN106484266B (en) Text processing method and device
US11194863B2 (en) Searching method and apparatus, device and non-volatile computer storage medium
US20210049354A1 (en) Human object recognition method, device, electronic apparatus and storage medium
US10515289B2 (en) System and method of generating a semantic representation of a target image for an image processing operation
US10678878B2 (en) Method, device and storing medium for searching
CN106470110A (en) Method and device to the multiple user's pocket transmission news in user list
CN109656444B (en) List positioning method, device, equipment and storage medium
WO2014187233A1 (en) Method,device and storing medium for searching
CN112462990A (en) Image sending method and device and electronic equipment
CN112787907A (en) Display method and device and electronic equipment
CN112612391A (en) Message processing method and device and electronic equipment
CN104267867A (en) Content input method and device
CN113037925B (en) Information processing method, information processing apparatus, electronic device, and readable storage medium
CN111291184A (en) Expression recommendation method, device, equipment and storage medium
CN114168798A (en) Text storage management and retrieval method and device
CN112416212A (en) Program access method, device, electronic equipment and readable storage medium
CN112596617A (en) Message content input method and device and electronic equipment
CN109120783A (en) Information acquisition method and device, mobile terminal and computer readable storage medium
CN106572233B (en) A kind of message treatment method and device
CN110262864B (en) Application processing method and device, storage medium and terminal
CN112818094A (en) Chat content processing method and device and electronic equipment
CN112286613A (en) Interface display method and interface display device
CN113704596A (en) Method and apparatus for generating a set of recall information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: OPPO Guangdong Mobile Communications Co.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant