CN109213332B - Input method and device of expression picture


Info

Publication number: CN109213332B
Authority: CN (China)
Prior art keywords: picture, expression, pictures, text information, user
Legal status: Active (granted)
Application number: CN201710518252.7A
Other languages: Chinese (zh)
Other versions: CN109213332A
Inventor: 费腾
Current Assignee: Beijing Sogou Technology Development Co Ltd
Original Assignee: Beijing Sogou Technology Development Co Ltd
Events: application filed by Beijing Sogou Technology Development Co Ltd; priority to CN201710518252.7A; publication of CN109213332A; application granted; publication of CN109213332B

Classifications

    • G06F3/0237 Character input methods using prediction or retrieval techniques (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer > G06F3/01 Input arrangements for interaction between user and computer > G06F3/02 Input arrangements using manually operated switches, e.g. keyboards or dials > G06F3/023 Arrangements for converting discrete items of information into a coded form > G06F3/0233 Character input methods)
    • G06F18/22 Matching criteria, e.g. proximity measures (G06F18/00 Pattern recognition > G06F18/20 Analysing)
    • G06Q50/01 Social networking (G06Q ICT SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES > G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism)

Abstract

The embodiment of the invention provides an input method and device for expression pictures. The method includes: receiving input first text information; displaying on the screen, according to the first text information, a first expression picture corresponding to the first text information, the first expression picture having an associated picture; adjusting the weight value of the associated picture according to the first expression picture displayed on the screen; and, when second text information corresponding to the associated picture is received, displaying the associated picture according to the weight value. In the embodiment of the invention, the weight value of the associated picture can be adjusted according to the first expression picture that has been displayed on the screen, which increases the likelihood that associated pictures are ranked near the top, makes them convenient for the user to select, and improves the user's input efficiency.

Description

Input method and device of expression picture
Technical Field
The present invention relates to the technical field of input methods, and in particular to an input method and an input device for expression pictures.
Background
With the continuous development of social networking, the ways people communicate have changed accordingly: from plain text, to simple symbols, and gradually to an increasingly diverse culture of expressions. For example, homemade pictures built from popular elements are used for communication. To facilitate users, existing input methods all provide an emoticon function to enrich the user's input experience. As emoticons are used more and more frequently in daily chat, users pay increasing attention to how the candidate emoticons are ranked.
Currently, input methods rank expression pictures much like ordinary word candidates, mainly by the frequency and recency with which the user has used each picture. For example, if the user often selects a certain picture (picture 1) from the candidate emoticons after inputting the word "haha", then the next time the user inputs "haha", picture 1 will be ranked higher among the candidates, so that the user can quickly find and select it.
However, with this approach the input method can only adjust the ranking of pictures the user has already used (picture 1); the ranking of pictures the user has not used (e.g., picture 2 and picture 3) cannot be adjusted. Pictures the user may be interested in therefore cannot be ranked near the front, and when the user needs picture 2 or picture 3, he or she may have to page backwards many times to find the desired picture. This increases the time spent searching for an expression picture and reduces the user's input efficiency.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide an input method for expression pictures and a corresponding input apparatus for expression pictures that overcome, or at least partially solve, the above problems.
In order to solve the above problem, an embodiment of the present invention discloses an input method for an expression picture, including:
receiving input first text information;
according to the first text information, a first expression picture corresponding to the first text information is displayed on a screen, and the first expression picture has an associated picture;
according to the first expression picture on the screen, the weight value of the associated picture is adjusted;
and when second text information corresponding to the associated picture is received, displaying the associated picture according to the weight value.
Optionally, the associated picture of the first emoticon is determined by the following steps:
calculating the similarity between the first expression picture and other expression pictures;
and extracting a plurality of expression pictures whose similarity is within a preset threshold range as the associated pictures of the first expression picture.
Optionally, the step of calculating the similarity between the first expression picture and other expression pictures includes:
acquiring a user characteristic score of each expression picture, wherein the user characteristic score is a score of each user on each expression picture;
generating a feature vector of each expression picture according to the user feature score of each expression picture;
and calculating the similarity between the first expression picture and other expression pictures according to the feature vector.
Optionally, the step of obtaining the user feature score of each expression picture includes:
setting an initial feature score of each expression picture;
when a user commits any expression picture to the screen, incrementing the initial feature score corresponding to the committed expression picture;
and taking the incremented score as the user feature score of the committed expression picture.
Optionally, the step of calculating the similarity between the first expression picture and the other expression pictures according to the feature vector includes:
and respectively calculating the distances between the feature vectors of the first expression picture and the feature vectors of other expression pictures, and taking the distances as the similarity between the first expression picture and other expression pictures.
Optionally, the step of adjusting the weight value of the associated picture according to the first emoticon on the screen includes:
respectively determining the weight values of a plurality of associated pictures corresponding to a first expression picture on a screen;
and increasing the weight values of the plurality of associated pictures to obtain the target weight values of the associated pictures.
Optionally, when second text information corresponding to the associated picture is received, the step of displaying the associated picture according to the weight value includes:
when second text information corresponding to the associated picture is received, acquiring a plurality of second expression pictures corresponding to the second text information and weight values of the second expression pictures, wherein the plurality of second expression pictures comprise the associated picture;
sorting the associated pictures and the second expression pictures according to the weight values of the second expression pictures and the target weight values of the associated pictures;
and displaying the sequenced associated pictures and the plurality of second expression pictures.
In order to solve the above problems, an embodiment of the present invention discloses an input device for an expression picture, including:
the receiving module is used for receiving input first text information;
the screen-on module is used for displaying on the screen, according to the first text information, a first expression picture corresponding to the first text information, wherein the first expression picture has an associated picture;
the adjusting module is used for adjusting the weight value of the associated picture according to the first expression picture on the screen;
and the display module is used for displaying the associated picture according to the weight value when receiving second text information corresponding to the associated picture.
Optionally, the associated picture of the first emoticon is determined by invoking the following modules:
the calculation module is used for calculating the similarity between the first expression picture and other expression pictures;
and the extraction module is used for extracting a plurality of expression pictures whose similarity is within a preset threshold range as the associated pictures of the first expression picture.
Optionally, the calculation module comprises:
the user characteristic score acquisition sub-module is used for acquiring a user characteristic score of each expression picture, wherein the user characteristic score is a score value of each user on each expression picture;
the feature vector generation submodule is used for generating a feature vector of each expression picture according to the user feature score of each expression picture;
and the similarity calculation sub-module is used for calculating the similarity between the first expression picture and other expression pictures according to the feature vector.
Optionally, the user feature score obtaining sub-module includes:
the initial feature score setting unit is used for setting the initial feature score of each expression picture;
the initial feature score increasing unit is used for incrementing the initial feature score corresponding to the committed expression picture when the user commits any expression picture to the screen;
and the feature score determining unit is used for taking the incremented score as the user feature score of the committed expression picture.
Optionally, the similarity calculation sub-module comprises:
and the similarity calculation unit is used for calculating the distances between the feature vectors of the first expression picture and the feature vectors of other expression pictures respectively, and taking the distances as the similarities between the first expression picture and the other expression pictures.
Optionally, the adjusting module includes:
the weight value determining submodule is used for respectively determining the weight values of a plurality of associated pictures corresponding to the first expression picture on the screen;
and the target weight value obtaining submodule is used for increasing the weight values of the plurality of associated pictures to obtain the target weight values of the associated pictures.
Optionally, the presentation module comprises:
the obtaining sub-module is used for obtaining a plurality of second expression pictures corresponding to the second text information and weight values thereof when second text information corresponding to the associated picture is received, wherein the second expression pictures comprise the associated picture;
the sorting submodule is used for sorting the associated pictures and the second expression pictures according to the weight values of the second expression pictures and the target weight values of the associated pictures;
and the display sub-module is used for displaying the sorted associated pictures and the second expression pictures.
In order to solve the above problems, an embodiment of the present invention discloses an input device for expression pictures, which includes a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for:
receiving input first text information;
according to the first text information, displaying on the screen a first expression picture corresponding to the first text information, the first expression picture having an associated picture;
according to the first expression picture on the screen, the weight value of the associated picture is adjusted;
and when second text information corresponding to the associated picture is received, displaying the associated picture according to the weight value.
In order to solve the above problems, an embodiment of the present invention discloses a storage medium, and when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to execute one or more of the above methods for inputting an emoticon.
Compared with the background art, the embodiment of the invention has the following advantages:
In the embodiment of the invention, after the input first text information is received and the corresponding first expression picture is displayed on the screen according to the first text information, the weight value of the associated picture of the first expression picture can be adjusted; if second text information corresponding to the associated picture is then received, the associated picture can be displayed according to the adjusted weight value. Because the weight value of the associated picture is adjusted according to the first expression picture that has been displayed on the screen, the likelihood that associated pictures are ranked near the top increases, making them convenient for the user to select and improving the user's input efficiency.
Drawings
FIG. 1 is a flowchart illustrating the steps of a first embodiment of an expression picture input method according to the present invention;
FIG. 2 is a flowchart illustrating the steps of a second embodiment of an expression picture input method according to the present invention;
FIG. 3 is a block diagram of an embodiment of an expression picture input device according to the present invention;
FIG. 4 is a block diagram illustrating an expression picture input device according to an exemplary embodiment.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to FIG. 1, a flowchart illustrating the steps of a first embodiment of an expression picture input method according to the present invention is shown, which may specifically include the following steps:
step 101, receiving input first text information;
In a specific implementation, the embodiment of the present invention may be applied to various terminals, for example, a mobile phone, a PDA (Personal Digital Assistant), a computer, a palmtop computer, and the like; the embodiment of the present invention does not limit the specific type of the terminal. These terminals can support various operating systems, including Windows, Android, iOS, Windows Phone, and the like.
Typically, a user may type through an external input device such as a keyboard; text may also be entered through a virtual keyboard provided by a running application, such as an input method program.
Taking a computer as an example, a user may perform input by tapping physical keys on a keyboard, and for a mobile terminal with a touch screen, the user may perform input by clicking virtual keys on a virtual keyboard, which is not limited in the embodiment of the present invention.
In general, in languages such as Chinese and Japanese, the characters that serve as basic language units do not map directly to keys on a keyboard, so conversion between character strings and words is generally required during input.
Specifically, the input method system can establish, through coding rules, a mapping relationship between characters such as Chinese and Japanese and character strings that can be typed directly; for example, the codes commonly used for Chinese are pinyin (such as simple pinyin, double pinyin, full pinyin, fuzzy sounds, and the like), five-stroke, and so on.
In the embodiment of the present invention, the first text information may be a word obtained by converting a character string input by the user. For example, when the user types the character string "haha", after conversion by the input method the word actually intended can be taken to be "haha" (哈哈); when the typed string is "xiao", the intended word can be taken to be "smile" (笑).
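As a minimal illustration of such a coding-rule mapping (a hypothetical sketch; the dictionary entries and function name are assumptions, not the patent's actual conversion engine), a string-to-candidate lookup could be modeled as follows:

```python
# Hypothetical sketch: a coding rule represented as a dictionary from typed
# character strings (e.g. pinyin) to candidate words. Entries are assumptions.
PINYIN_TO_WORDS = {
    "haha": ["哈哈"],       # "haha"
    "xiao": ["笑", "小"],   # "smile", "small" - a string may have several candidates
}

def convert(input_string: str) -> list[str]:
    """Return the candidate words for a typed character string."""
    return PINYIN_TO_WORDS.get(input_string, [])

print(convert("haha"))  # ['哈哈']
```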
102, according to the first text information, displaying on the screen a first expression picture corresponding to the first text information, the first expression picture having an associated picture;
At present, to meet users' personalized input needs, input methods provide an emoticon function: after the user inputs a character string, expression pictures matching that string can be displayed.
In the embodiment of the present invention, the first emoticon may be an emoticon corresponding to the word currently input by the user. Each word input by the user may correspond to only one expression picture, or to two or more. When the user selects one of the candidate emoticons, the input method commits that emoticon to the screen.
For example, for the word "haha" input by the user, the input method may present a picture A and a picture B matching the word in the candidate box, and when the user selects picture A, the input method commits picture A to the screen.
In the embodiment of the present invention, each first emoticon may have a corresponding associated emoticon, and the associated emoticon may be an emoticon having a higher similarity to the first emoticon.
It should be noted that each first emoticon may have more than one associated emoticon, and the associated emoticon may not be an emoticon corresponding to a word currently input by the user.
For example, the first expression picture A corresponding to the word "haha" may have an associated picture C and an associated picture D. Pictures C and D need not be expression pictures corresponding to the word "haha"; they may correspond to other words. For example, the associated picture C may be an expression picture corresponding to the word "smile", which is not limited in the embodiment of the present invention.
103, adjusting the weight value of the associated picture according to the first expression picture on the screen;
In the embodiment of the present invention, after the first emoticon is displayed on the screen, it may be determined which emoticons are the associated pictures of that first emoticon, and the weight value of each associated picture may then be adjusted.
In particular implementations, the weight value of each associated picture may be increased.
For example, suppose the expression pictures C and E both correspond to the word "smile" and each initially has a weight value, with picture E's weight value greater than picture C's. Because picture C is an associated picture of the first expression picture A that was previously committed to the screen, picture C's weight value can be increased accordingly while picture E's stays unchanged, so that after adjustment picture C's weight value may exceed picture E's.
And 104, displaying the associated picture according to the weight value when second text information corresponding to the associated picture is received.
In the embodiment of the present invention, the second text information may be a word matching a certain associated picture, for example, a "smile" matching the associated picture C.
Similarly to the first text information, the second text information may also be a word obtained by converting a character string input by the user, for example, the character string input by the user is "xiao", and after the character string is converted by the input method, "smile" is obtained and is used as the second text information.
It should be noted that, because an associated picture of the first expression picture is an expression picture matching the second text information, the second expression pictures for the second text information include the associated picture. For example, for the word "smile", the second emoticons may include picture C and picture E.
In a specific implementation, after the weight value of each associated picture has been adjusted, the second expression pictures can be sorted by weight value, and the sorted second expression pictures, including the associated pictures, are presented in the candidate box for the user to select from.
For example, for the second expression pictures C and E corresponding to the word "smile", if picture C's weight value is now larger than picture E's, picture C may be ranked before picture E when displayed to the user.
In the embodiment of the invention, when an instruction for selecting any one of the second emoticons is received, the selected second emoticon can be displayed on a screen. For example, when the user selects picture C, the picture C may be displayed on the screen.
In the embodiment of the invention, after the input first text information is received and the first expression picture corresponding to it is committed to the screen, the weight value of the associated picture of the first expression picture can be adjusted; if second text information corresponding to the associated picture is then received, the associated picture can be displayed according to the adjusted weight value. Because the weight value of the associated picture is adjusted according to the first expression picture that has been displayed on the screen, the likelihood that associated pictures are ranked near the top increases, making them convenient for the user to select and improving the user's input efficiency.
Referring to FIG. 2, a flowchart illustrating the steps of a second embodiment of an expression picture input method according to the present invention is shown, which may specifically include the following steps:
step 201, receiving input first text information;
generally, a user can input text information through an input method, and the input method can be applied to terminal equipment such as a mobile phone, a computer, a tablet computer and the like.
In the embodiment of the present invention, the first text information may be a word obtained by converting a character string input by the user. For example, when the user types the character string "haha", after conversion by the input method the word actually intended can be taken to be "haha" (哈哈); when the typed string is "xiao", the intended word can be taken to be "smile" (笑).
Step 202, according to the first text information, a first expression picture corresponding to the first text information is displayed on a screen;
In the embodiment of the present invention, the first emoticon may be an emoticon corresponding to the word currently input by the user. Each word may correspond to one expression picture or to two or more; the embodiment of the present invention does not limit the number of first expression pictures. When the user selects one of the candidate emoticons, the input method commits that emoticon to the screen.
For example, for the word "haha" input by the user, the input method may present a picture A and a picture B matching the word in the candidate box, and when the user selects picture A, the input method commits picture A to the screen.
Step 203, calculating the similarity between the first expression picture and other expression pictures;
In the embodiment of the present invention, in order to find other expression pictures highly similar to the first expression picture committed to the screen, a user feature score of each expression picture may first be obtained, where the user feature score may be each user's score for each expression picture.
It should be noted that, because users differ considerably in how they use each emoticon, the user feature score of an emoticon varies from user to user. For example, the same emoticon may have different user feature scores for three different users.
In a specific implementation, an initial feature score may first be set for each expression picture; when a user commits an expression picture to the screen, the initial feature score corresponding to that picture is incremented, and the incremented score is used as the user feature score of the committed expression picture.
For example, the initial feature score may be set to 0 for every expression picture; each time the user commits an expression picture to the screen, 1 may be added to that picture's feature score, and the final value accumulated over a certain period is used as the picture's user feature score. Of course, those skilled in the art may set the user feature score of each expression picture in other ways, which is not limited in the embodiment of the present invention.
It should be noted that, in practice, each user's usage of each emoticon may be tracked by the client so as to obtain the user feature score of each emoticon.
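A minimal sketch of such client-side tracking (the class and method names are assumptions; the patent specifies only the increment-on-commit behavior and the periodic upload):

```python
from collections import defaultdict

class FeatureScoreTracker:
    """Per-user feature scores for expression pictures, kept on the client."""

    def __init__(self):
        # picture_id -> feature score for the current user; the initial score is 0
        self.scores = defaultdict(int)

    def on_picture_committed(self, picture_id: str) -> None:
        """Increment a picture's score each time the user commits it to the screen."""
        self.scores[picture_id] += 1

    def snapshot(self) -> dict:
        """Scores accumulated in the current period, e.g. uploaded every 24 hours."""
        return dict(self.scores)

tracker = FeatureScoreTracker()
tracker.on_picture_committed("picture_a")
tracker.on_picture_committed("picture_a")
print(tracker.snapshot())  # {'picture_a': 2}
```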
In a specific implementation, the input method client may periodically count the user feature score of each expression picture, for example, may count the user feature score of each expression picture every 24 hours, and send the user feature score to the server. Then, the server can generate a feature vector of each expression picture according to the user feature score of each expression picture, and calculate the similarity between the first expression picture and other expression pictures according to the feature vector. Specifically, distances between the feature vectors of the first expression picture and the feature vectors of the other expression pictures can be respectively calculated, and the distances are used as the similarity between the first expression picture and the other expression pictures.
As an example of the present invention, after the feature vector of each expression picture is generated, the similarity between each expression picture and other expression pictures can be respectively calculated according to the feature vector, and then an expression picture similarity list is generated according to the similarity. Therefore, when the similarity between the first expression picture and other expression pictures needs to be calculated, the similarity between the first expression picture and other expression pictures is directly extracted from the similarity list.
Specifically, a large matrix of N × M may be maintained at the server, where N is the number of all users, and M is the total number of all emoticons, and the matrix may be represented as follows:
        pic1   pic2   ...   picM
user1   S11    S12    ...   S1M
user2   S21    S22    ...   S2M
...     ...    ...    ...   ...
userN   SN1    SN2    ...   SNM
where pic1, pic2, ..., picM represent the M emoticons and user1, user2, ..., userN represent the N users; S11 represents the feature score of emoticon pic1 for user user1, S12 the feature score of emoticon pic2 for user user1, SNM the feature score of emoticon picM for user userN, and so on.
In the matrix, each row holds one user's feature scores for all the emoticons, and each column holds one emoticon's feature scores across all users. Therefore, when calculating the similarity between expression pictures, each picture's score column is used as a vector of N features of that picture, and the similarity between two expression pictures is obtained by computing the distance between their two vectors. For the distance between two vectors, the Pearson product-moment correlation coefficient or another correlation measure may be adopted, which is not limited in the embodiment of the present invention.
It should be noted that the similarity between expression pictures need not be recalculated very frequently; it may be computed, for example, every 5 days, with the results stored on the server. Of course, a person skilled in the art may choose the periods for uploading feature scores and calculating similarity according to actual needs, which is not limited in the embodiment of the present invention.
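As a minimal illustration of the similarity computation (the data layout, names, and values are assumptions; the patent fixes neither the storage format nor the correlation measure beyond naming Pearson as one option), the sketch below stores the N x M matrix as per-user score rows and correlates two pictures' score columns:

```python
import math

# score_matrix[user][picture] = feature score: the N x M matrix, one row per user.
score_matrix = {
    "user1": {"pic1": 3, "pic2": 1, "pic3": 0},
    "user2": {"pic1": 2, "pic2": 2, "pic3": 5},
    "user3": {"pic1": 4, "pic2": 0, "pic3": 1},
}

def picture_vector(pic: str) -> list[float]:
    """A column of the matrix: one picture's scores across all N users."""
    return [scores.get(pic, 0) for scores in score_matrix.values()]

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

print(f"similarity(pic1, pic2) = {pearson(picture_vector('pic1'), picture_vector('pic2')):.3f}")
```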
Step 204, extracting a plurality of expression pictures with the similarity within a preset threshold range as associated pictures of the first expression picture;
In the embodiment of the invention, after the similarity between each first expression picture and the other expression pictures is calculated, the expression pictures whose similarity is within a preset threshold range can be used as associated pictures of the first expression picture. For example, the preset threshold range may be set to 75%-100%; when the similarity between an expression picture and a first expression picture falls within this range, that expression picture may be taken as an associated picture of the first expression picture.
It should be noted that each first emoticon may have at least one associated picture, and an associated picture may itself be one of the first emoticons corresponding to the first text information currently input by the user, or some other emoticon. Also, two first expression pictures may share an associated picture. For example, if the first expression pictures corresponding to the word "haha" include picture A and picture B, the associated pictures of picture A may be pictures C and D while those of picture B are pictures C and F; the two share the same associated picture, namely picture C.
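Continuing the previous sketch (it reuses the hypothetical pearson and picture_vector helpers; the 0.75-1.0 range mirrors the example above), associated pictures can be selected by filtering on the similarity threshold:

```python
# Reuses pearson() and picture_vector() from the sketch above.
SIM_LOW, SIM_HIGH = 0.75, 1.0   # the preset threshold range from the example

def associated_pictures(first_pic: str, all_pics: list[str]) -> list[str]:
    """Pictures whose similarity to first_pic falls within the threshold range."""
    assoc = []
    for pic in all_pics:
        if pic == first_pic:
            continue
        s = pearson(picture_vector(first_pic), picture_vector(pic))
        if SIM_LOW <= s <= SIM_HIGH:
            assoc.append(pic)
    return assoc

print(associated_pictures("pic1", ["pic1", "pic2", "pic3"]))  # [] for this toy data
```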
Step 205, adjusting the weight value of the associated picture according to the first expression picture on the screen;
in the embodiment of the invention, after a certain first expression picture is displayed on a screen of a user, the weight values of a plurality of associated pictures corresponding to the displayed first expression picture can be respectively determined; and then, increasing the weight values of the plurality of associated pictures to obtain a target weight value of each associated picture.
For example, consider picture A, picture B, picture C, and picture D, where picture C is an associated picture of picture A. After the user commits picture A to the screen based on the input first text information, the weight value of picture C can be increased accordingly to obtain picture C's target weight value, while the weight values of the other pictures, picture B and picture D, remain unchanged. The embodiment of the present invention does not limit the specific magnitude of the increase.
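A minimal sketch of the weight adjustment (the picture names, initial weights, and boost size are assumptions; the patent does not fix the magnitude of the increase):

```python
# Hypothetical candidate weights; picture C is an associated picture of picture A.
weights = {"picture_b": 0.6, "picture_c": 0.3, "picture_d": 0.5, "picture_e": 0.7}
associations = {"picture_a": ["picture_c"]}

def on_first_picture_committed(first_pic: str, boost: float = 0.5) -> None:
    """Raise each associated picture's weight to its target weight value."""
    for assoc in associations.get(first_pic, []):
        weights[assoc] += boost

on_first_picture_committed("picture_a")
print(weights["picture_c"])  # 0.8 - picture C may now outrank heavier candidates
```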
Step 206, when second text information corresponding to the associated picture is received, acquiring a plurality of second expression pictures corresponding to the second text information and weight values thereof, wherein the plurality of second expression pictures comprise the associated picture;
in the embodiment of the present invention, the second text information may be a word matching a certain associated picture, for example, a "smile" matching the associated picture C.
Similarly to the first text information, the second text information may also be a word obtained by converting a character string input by the user, for example, the character string input by the user is "xiao", and after the character string is converted by the input method, "smile" is obtained and is used as the second text information.
In the embodiment of the present invention, after the second text information is received, it may be determined which pictures the second emoticons corresponding to it include. It should be noted that, because an associated picture of the first expression picture is an expression picture matching the second text information, the second expression pictures for the second text information include the associated picture.
For example, for the second text message "smile", a plurality of second expression pictures corresponding to the "smile" may be first determined, such as picture C, picture E, picture G, and picture H, where picture C is an associated picture of the first expression picture a.
In the embodiment of the invention, each second expression picture has a corresponding weight value, and the input method can sort each second expression picture according to the weight value by default.
For example, in descending order, the weight values of picture C, picture E, picture G, and picture H may be: picture E > picture G > picture C > picture H.
Step 207, sorting the associated pictures and the second expression pictures according to the weight values of the second expression pictures and the target weight values of the associated pictures;
for the picture C, the picture E, the picture G, and the picture H, the respective weighted value information is sequentially ordered from large to small: the picture E > the picture G > the picture C > the picture H, so that the four expression pictures can be sequenced by the input method according to the sequence of the picture E, the picture G, the picture C and the picture H under the default condition. After the weight values of the associated pictures C are adjusted, that is, the weight values of the pictures C are increased, the pictures can be sorted again according to the adjusted weight values of the pictures.
For example, if the weight value of the picture C is increased and is larger than the weight value of the picture G but still smaller than the weight value of the picture E, the adjusted weight values of the pictures are sequentially ordered from large to small as: the picture E > the picture C > the picture G > the picture H, and at this time, the four expression pictures can be sequenced according to the sequence of the picture E, the picture C, the picture G and the picture H.
Or, after the weight value of the picture C is increased, the weight value of the picture C is not only greater than that of the picture G, but also greater than that of the picture E, and then the weight values of the adjusted pictures are sequentially ordered from large to small: and the four expression pictures can be sequenced according to the sequence of the picture C, the picture E, the picture G and the picture H.
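A minimal sketch of this re-ranking step (the weight values are hypothetical, chosen to match the example above):

```python
# Adjusted weights: picture C has been boosted above pictures E, G, and H.
candidate_weights = {"picture_c": 0.8, "picture_e": 0.7, "picture_g": 0.5, "picture_h": 0.2}

def rank_candidates(weights_by_pic: dict) -> list[str]:
    """Sort candidate second expression pictures by weight value, highest first."""
    return sorted(weights_by_pic, key=weights_by_pic.get, reverse=True)

print(rank_candidates(candidate_weights))
# ['picture_c', 'picture_e', 'picture_g', 'picture_h']
```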
Step 208, displaying the sorted associated pictures and the plurality of second expression pictures;
after adjusting the weight value of the associated picture and sorting each second expression picture including the associated picture according to the adjusted weight value, the input method may present the associated picture and a plurality of other second expression pictures in a candidate frame of the input method according to the sorting order.
Therefore, when an instruction of selecting any second expression picture by the user is received, the selected second expression picture can be displayed on the screen.
For example, after the four emoticons are sorted in the order of picture C, picture E, picture G, and picture H, when an instruction that the user selects picture C is received, the picture C may be displayed.
It should be noted that, after the expression pictures are sorted and displayed, the picture the user selects need not be the one ranked first, nor need it be the associated picture whose weight value was adjusted. That is, the user may select the re-ranked associated picture C, or another picture such as picture E, picture G, or picture H, which is not limited in the embodiment of the present invention.
For convenience of understanding, the following describes an input method of an emoticon according to an embodiment of the present invention with a specific example.
(1) First, the user feature score of each expression picture is obtained. The client can determine the user feature score of each expression picture according to how the user uses it. Specifically, the initial feature score of each expression picture may be set to 0; each time the current user commits an expression picture to the screen, 1 is added to that picture's feature score, while pictures the current user has not committed keep the initial score of 0. The client uploads the feature scores of all the expression pictures to the server every 24 hours.
(2) A large N x M matrix may be maintained at the server to store the feature scores uploaded by all users, where N is the number of users and M is the total number of emoticons. In the matrix, each row holds one user's feature scores for all the emoticons, and each column holds one emoticon's feature scores across all users. Each picture's score column is then used as a vector of N features of that picture, and every 5 days the similarity between any two such vectors is calculated once using the Pearson correlation coefficient and taken as the similarity between the two corresponding expression pictures. For each expression picture, the several expression pictures with the highest similarity are selected as its associated pictures. For example, the associated pictures of picture A include picture C and picture D, where the text information corresponding to picture C is "smile". Meanwhile, the expression pictures corresponding to "smile" also include picture E, picture G, and picture H, and the initial weight values of pictures C, E, G, and H are ordered from largest to smallest as: picture E > picture G > picture C > picture H.
(3) When the user inputs the character string "haha", the input method converts it to the word "haha" and displays the expression pictures corresponding to that word, namely picture A and picture B, in the candidate box. If the user selects picture A, picture A is committed to the screen and the weight values of picture A's associated pictures, that is, picture C and picture D, are adjusted. After picture C's weight value is adjusted, the weight values of pictures C, E, G, and H are ordered from largest to smallest as: picture C > picture E > picture G > picture H.
(4) When the user then inputs "smile", the input method finds that the expression pictures corresponding to "smile" include pictures C, E, G, and H, and sorts and displays these four pictures in descending order of their adjusted weight values, that is, in the order picture C, picture E, picture G, picture H. When the user selects any one of these emoticons, the selected emoticon is committed to the screen.
It should be noted that for simplicity of description, the method embodiments are shown as a series of combinations of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 3, a block diagram of an embodiment of an input device for an expression picture according to the present invention is shown, and may specifically include the following modules:
a receiving module 301, configured to receive input first text information;
the screen-on module 302 is configured to display on the screen, according to the first text information, a first expression picture corresponding to the first text information, where the first expression picture has an associated picture;
the adjusting module 303 is configured to adjust a weight value of the associated picture according to the first expression picture displayed on the screen;
and the displaying module 304 is configured to display the associated picture according to the weight value when receiving the second text information corresponding to the associated picture.
In this embodiment of the present invention, the associated picture of the first expression picture may be determined by invoking the following modules:
the calculating module is used for calculating the similarity between the first expression picture and other expression pictures;
and the extraction module is used for extracting a plurality of expression pictures whose similarity is within a preset threshold range as the associated pictures of the first expression picture.
In the embodiment of the present invention, the calculation module may specifically include the following sub-modules:
the user characteristic score acquisition sub-module is used for acquiring a user characteristic score of each expression picture, wherein the user characteristic score is a score value of each user on each expression picture;
the feature vector generation submodule is used for generating a feature vector of each expression picture according to the user feature score of each expression picture;
and the similarity calculation sub-module is used for calculating the similarity between the first expression picture and other expression pictures according to the feature vector.
In this embodiment of the present invention, the user feature score obtaining sub-module specifically includes the following units:
the initial feature score setting unit is used for setting the initial feature score of each expression picture;
the initial feature score increasing unit is used for incrementing the initial feature score corresponding to the committed expression picture when the user commits any expression picture to the screen;
and the feature score determining unit is used for taking the incremented score as the user feature score of the committed expression picture.
In the embodiment of the present invention, the similarity calculation sub-module may specifically include the following units:
and the similarity calculation unit is used for calculating the distances between the feature vectors of the first expression picture and the feature vectors of other expression pictures respectively, and taking the distances as the similarities between the first expression picture and the other expression pictures.
In this embodiment of the present invention, the adjusting module 303 may specifically include the following sub-modules:
the weight value determining submodule is used for respectively determining the weight values of a plurality of associated pictures corresponding to the first expression picture on the screen;
and the target weight value obtaining submodule is used for increasing the weight values of the plurality of associated pictures to obtain the target weight values of the associated pictures.
In this embodiment of the present invention, the presentation module 304 may specifically include the following sub-modules:
the obtaining sub-module is used for obtaining a plurality of second expression pictures corresponding to the second text information and weight values thereof when receiving second text information corresponding to the associated picture, wherein the plurality of second expression pictures comprise the associated picture;
the sorting submodule is used for sorting the associated pictures and the second expression pictures according to the weight values of the second expression pictures and the target weight values of the associated pictures;
and the display sub-module is used for displaying the sorted associated pictures and the second expression pictures.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Fig. 4 is a block diagram illustrating an input apparatus 400 for an emoticon according to an exemplary embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 402 may include one or more processors 420 to execute instructions to perform all or some of the steps of the method for inputting emoticons as described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the apparatus 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 406 provides power to the various components of the device 400. The power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the device 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing various aspects of status assessment for the apparatus 400. For example, the sensor assembly 414 may detect an open/closed state of the apparatus 400, the relative positioning of the components, such as a display and keypad of the apparatus 400, the sensor assembly 414 may also detect a change in the position of the apparatus 400 or a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor assembly 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the above-mentioned method of inputting expression pictures.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the above method of inputting an emoticon is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
An input device for expression pictures includes a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for: receiving input first text information; according to the first text information, displaying on the screen a first expression picture corresponding to the first text information, the first expression picture having an associated picture; adjusting the weight value of the associated picture according to the first expression picture displayed on the screen; and, when second text information corresponding to the associated picture is received, displaying the associated picture according to the weight value.
Optionally, the one or more programs further include instructions for: calculating the similarity between the first expression picture and other expression pictures; and extracting a plurality of expression pictures whose similarity is within a preset threshold range as the associated pictures of the first expression picture.
Optionally, the one or more programs further include instructions for: acquiring a user feature score of each expression picture, wherein the user feature score is each user's score for each expression picture; generating a feature vector of each expression picture according to the user feature score of each expression picture; and calculating the similarity between the first expression picture and other expression pictures according to the feature vector.
Optionally, the one or more programs further include instructions for: setting an initial feature score of each expression picture; when a user commits any expression picture to the screen, incrementing the initial feature score corresponding to the committed expression picture; and taking the incremented score as the user feature score of the committed expression picture.
Optionally, the one or more programs further include instructions for: and respectively calculating the distances between the feature vectors of the first expression picture and the feature vectors of other expression pictures, and taking the distances as the similarity between the first expression picture and other expression pictures.
Optionally, the one or more programs further include instructions for: determining the respective weight values of the plurality of associated pictures corresponding to the first expression picture committed to the screen; and increasing those weight values to obtain the target weight value of each associated picture.
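The weight adjustment itself reduces to a single boost per associated picture, for example as follows; whether the boost is multiplicative or additive, and its magnitude, are assumptions of this sketch.

```python
def target_weights(assoc_pictures, weights, boost=1.5):
    # Raise the current weight of each associated picture; the raised
    # value is its target weight. Unseen pictures default to weight 1.0.
    return {pic: weights.get(pic, 1.0) * boost for pic in assoc_pictures}
```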
Optionally, the one or more programs further include instructions for: when second text information corresponding to the associated picture is received, acquiring a plurality of second expression pictures corresponding to the second text information, together with their weight values, the plurality of second expression pictures including the associated picture; sorting the associated picture and the second expression pictures according to the weight values of the second expression pictures and the target weight value of the associated picture; and displaying the sorted associated picture and second expression pictures.
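Putting the pieces together, the final ranking step can be sketched as below: candidates mapped to the second text information are sorted by weight, and an associated picture carrying a boosted target weight therefore tends to sort ahead of ordinary candidates. The container shapes are again assumptions.

```python
def rank_candidates(second_text, text_to_pictures, weights, targets):
    # Rank all pictures mapped to second_text, highest weight first;
    # a picture with a boosted target weight uses that value instead.
    pictures = text_to_pictures.get(second_text, [])
    return sorted(
        pictures,
        key=lambda pic: targets.get(pic, weights.get(pic, 1.0)),
        reverse=True,
    )
```

For example, if the user has just committed a "laughing" picture, a picture associated with it will, under this sketch, appear nearer the top of the candidate list the next time its own trigger text is typed.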
A storage medium has instructions stored therein which, when executed by a processor of a terminal, enable the terminal to perform: receiving input first text information; displaying on the screen, according to the first text information, a first expression picture corresponding to the first text information, the first expression picture being related to an associated picture; adjusting the weight value of the associated picture according to the first expression picture committed to the screen; and, when second text information corresponding to the associated picture is received, displaying the associated picture according to the weight value.
Optionally, the instructions in the storage medium, when executed by the processor of the terminal, further enable the terminal to perform: calculating the similarity between the first expression picture and other expression pictures; and extracting a plurality of expression pictures whose similarity falls within a preset threshold range as the associated pictures of the first expression picture.
Optionally, the instructions in the storage medium, when executed by the processor of the terminal, further enable the terminal to perform: acquiring a user feature score for each expression picture, wherein the user feature score reflects each user's rating of that expression picture; generating a feature vector for each expression picture from its user feature scores; and calculating the similarity between the first expression picture and other expression pictures from the feature vectors.
Optionally, the instructions in the storage medium, when executed by the processor of the terminal, further enable the terminal to perform: setting an initial feature score for each expression picture; incrementing, whenever a user commits an expression picture to the screen, the feature score corresponding to that committed picture; and taking the incremented score as the user's feature score for the committed expression picture.
Optionally, the instructions in the storage medium, when executed by the processor of the terminal, further enable the terminal to perform: calculating the distance between the feature vector of the first expression picture and the feature vector of each other expression picture, and taking that distance as the similarity between the first expression picture and the other expression picture.
Optionally, the instructions in the storage medium, when executed by the processor of the terminal, further enable the terminal to perform: determining the respective weight values of the plurality of associated pictures corresponding to the first expression picture committed to the screen; and increasing those weight values to obtain the target weight value of each associated picture.
Optionally, the instructions in the storage medium, when executed by the processor of the terminal, further enable the terminal to perform: when second text information corresponding to the associated picture is received, acquiring a plurality of second expression pictures corresponding to the second text information, together with their weight values, the plurality of second expression pictures including the associated picture; sorting the associated picture and the second expression pictures according to the weight values of the second expression pictures and the target weight value of the associated picture; and displaying the sorted associated picture and second expression pictures.
The embodiments in the present specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or terminal. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises the element.
The input method and input device for expression pictures provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to both the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (22)

1. An input method for expression pictures, characterized by comprising the following steps:
receiving input first text information;
displaying on the screen, according to the first text information, a first expression picture corresponding to the first text information, the first expression picture being related to an associated picture; wherein the associated picture is a picture having a similarity relation with the first expression picture, and is an expression picture that has no corresponding relation with the first text information;
adjusting the weight value of the associated picture according to the first expression picture committed to the screen;
and, when second text information corresponding to the associated picture is received, displaying the associated picture according to the weight value.
2. The method of claim 1, wherein the associated picture of the first expression picture is determined by:
calculating the similarity between the first expression picture and other expression pictures;
and extracting a plurality of expression pictures whose similarity falls within a preset threshold range as the associated pictures of the first expression picture.
3. The method according to claim 2, wherein the step of calculating the similarity between the first expression picture and other expression pictures comprises:
acquiring a user feature score for each expression picture, wherein the user feature score reflects each user's rating of that expression picture;
generating a feature vector for each expression picture from its user feature scores;
and calculating the similarity between the first expression picture and other expression pictures from the feature vectors.
4. The method according to claim 3, wherein the step of acquiring the user feature score of each expression picture comprises:
setting an initial feature score for each expression picture;
incrementing, whenever a user commits an expression picture to the screen, the feature score corresponding to that committed picture;
and taking the incremented score as the user's feature score for the committed expression picture.
5. The method according to claim 3, wherein the step of calculating the similarity between the first expression picture and other expression pictures from the feature vectors comprises:
calculating the distance between the feature vector of the first expression picture and the feature vector of each other expression picture, and taking that distance as the similarity between the first expression picture and the other expression picture.
6. The method according to claim 1, wherein the step of adjusting the weight value of the associated picture according to the first expression picture committed to the screen comprises:
determining the respective weight values of a plurality of associated pictures corresponding to the first expression picture committed to the screen;
and increasing the weight values of the plurality of associated pictures to obtain the target weight value of each associated picture.
7. The method according to claim 6, wherein the step of displaying the associated picture according to the weight value when second text information corresponding to the associated picture is received comprises:
when second text information corresponding to the associated picture is received, acquiring a plurality of second expression pictures corresponding to the second text information, together with their weight values, the plurality of second expression pictures including the associated picture;
sorting the associated picture and the second expression pictures according to the weight values of the second expression pictures and the target weight value of the associated picture;
and displaying the sorted associated picture and second expression pictures.
8. An input device for expression pictures, comprising:
the receiving module is used for receiving input first text information;
the screen-on module is used for displaying on the screen, according to the first text information, a first expression picture corresponding to the first text information, the first expression picture being related to an associated picture; wherein the associated picture is a picture having a similarity relation with the first expression picture, and is an expression picture that has no corresponding relation with the first text information;
the adjusting module is used for adjusting the weight value of the associated picture according to the first expression picture committed to the screen;
and the display module is used for displaying the associated picture according to the weight value when receiving second text information corresponding to the associated picture.
9. The apparatus of claim 8, wherein the associated picture of the first expression picture is determined by invoking the following modules:
the calculation module is used for calculating the similarity between the first expression picture and other expression pictures;
and the extraction module is used for extracting a plurality of expression pictures whose similarity falls within a preset threshold range as the associated pictures of the first expression picture.
10. The apparatus of claim 9, wherein the calculation module comprises:
the user feature score acquisition sub-module is used for acquiring a user feature score for each expression picture, wherein the user feature score reflects each user's rating of that expression picture;
the feature vector generation sub-module is used for generating a feature vector for each expression picture from its user feature scores;
and the similarity calculation sub-module is used for calculating the similarity between the first expression picture and other expression pictures from the feature vectors.
11. The apparatus according to claim 10, wherein the user feature score acquisition sub-module comprises:
the initial feature score setting unit is used for setting an initial feature score for each expression picture;
the initial feature score increasing unit is used for incrementing, whenever the user commits an expression picture to the screen, the initial feature score corresponding to that committed picture;
and the feature score determining unit is used for taking the incremented score as the user's feature score for the committed expression picture.
12. The apparatus of claim 10, wherein the similarity calculation sub-module comprises:
the similarity calculation unit is used for calculating the distance between the feature vector of the first expression picture and the feature vector of each other expression picture, and taking that distance as the similarity between the first expression picture and the other expression picture.
13. The apparatus of claim 8, wherein the adjusting module comprises:
the weight value determining sub-module is used for determining the respective weight values of a plurality of associated pictures corresponding to the first expression picture committed to the screen;
and the target weight value obtaining sub-module is used for increasing the weight values of the plurality of associated pictures to obtain the target weight value of each associated picture.
14. The apparatus of claim 13, wherein the display module comprises:
the acquisition sub-module is used for acquiring, when second text information corresponding to the associated picture is received, a plurality of second expression pictures corresponding to the second text information together with their weight values, the plurality of second expression pictures including the associated picture;
the sorting sub-module is used for sorting the associated picture and the second expression pictures according to the weight values of the second expression pictures and the target weight value of the associated picture;
and the display sub-module is used for displaying the sorted associated picture and second expression pictures.
15. An input device for expression pictures, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs comprising instructions for:
receiving input first text information;
displaying on the screen, according to the first text information, a first expression picture corresponding to the first text information, the first expression picture being related to an associated picture; wherein the associated picture is a picture having a similarity relation with the first expression picture, and is an expression picture that has no corresponding relation with the first text information;
adjusting the weight value of the associated picture according to the first expression picture committed to the screen;
and, when second text information corresponding to the associated picture is received, displaying the associated picture according to the weight value.
16. The apparatus of claim 15, wherein the associated picture of the first expression picture is determined by:
calculating the similarity between the first expression picture and other expression pictures;
and extracting a plurality of expression pictures whose similarity falls within a preset threshold range as the associated pictures of the first expression picture.
17. The apparatus of claim 16, wherein the step of calculating the similarity between the first expression picture and other expression pictures comprises:
acquiring a user feature score for each expression picture, wherein the user feature score reflects each user's rating of that expression picture;
generating a feature vector for each expression picture from its user feature scores;
and calculating the similarity between the first expression picture and other expression pictures from the feature vectors.
18. The apparatus of claim 17, wherein the step of acquiring the user feature score of each expression picture comprises:
setting an initial feature score for each expression picture;
incrementing, whenever a user commits an expression picture to the screen, the feature score corresponding to that committed picture;
and taking the incremented score as the user's feature score for the committed expression picture.
19. The apparatus of claim 17, wherein the step of calculating the similarity between the first expression picture and other expression pictures from the feature vectors comprises:
calculating the distance between the feature vector of the first expression picture and the feature vector of each other expression picture, and taking that distance as the similarity between the first expression picture and the other expression picture.
20. The apparatus according to claim 15, wherein the step of adjusting the weight value of the associated picture according to the first expression picture committed to the screen comprises:
determining the respective weight values of a plurality of associated pictures corresponding to the first expression picture committed to the screen;
and increasing the weight values of the plurality of associated pictures to obtain the target weight value of each associated picture.
21. The apparatus according to claim 20, wherein the step of displaying the associated picture according to the weight value when second text information corresponding to the associated picture is received comprises:
when second text information corresponding to the associated picture is received, acquiring a plurality of second expression pictures corresponding to the second text information, together with their weight values, the plurality of second expression pictures including the associated picture;
sorting the associated picture and the second expression pictures according to the weight values of the second expression pictures and the target weight value of the associated picture;
and displaying the sorted associated picture and second expression pictures.
22. A storage medium, characterized in that instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the input method of expression pictures according to any one of claims 1-7.
CN201710518252.7A 2017-06-29 2017-06-29 Input method and device of expression picture Active CN109213332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710518252.7A CN109213332B (en) 2017-06-29 2017-06-29 Input method and device of expression picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710518252.7A CN109213332B (en) 2017-06-29 2017-06-29 Input method and device of expression picture

Publications (2)

Publication Number Publication Date
CN109213332A (en) 2019-01-15
CN109213332B (en) 2022-11-08

Family

ID=64960840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710518252.7A Active CN109213332B (en) 2017-06-29 2017-06-29 Input method and device of expression picture

Country Status (1)

Country Link
CN (1) CN109213332B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT399406B (en) * 1992-06-03 1995-05-26 Frequentis Nachrichtentechnik Gmbh TOUCH-SENSITIVE INPUT UNIT
CN100570545C (en) * 2007-12-17 2009-12-16 Tencent Technology (Shenzhen) Co., Ltd. Expression input method and device
CN103064826B (en) * 2012-12-31 2016-01-06 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device and system for expression input
CN104777916A (en) * 2014-01-10 2015-07-15 Beijing Sogou Technology Development Co., Ltd. Character input method and system
CN104076944B (en) * 2014-06-06 2017-03-01 Beijing Sogou Technology Development Co., Ltd. Method and apparatus for chat expression input
CN104298429B (en) * 2014-09-25 2018-05-04 Beijing Sogou Technology Development Co., Ltd. Input-based information display method and input method system
CN105518678B (en) * 2015-06-29 2018-07-31 Beijing Megvii Technology Co., Ltd. Search method, search apparatus and user equipment
CN106468984A (en) * 2015-08-11 2017-03-01 Alibaba Group Holding Ltd. Method and device for rapid preview of item-associated pictures
CN105446495A (en) * 2015-12-08 2016-03-30 Beijing Sogou Technology Development Co., Ltd. Candidate sorting method and apparatus
CN106372059B (en) * 2016-08-30 2018-09-11 Beijing Baidu Netcom Science and Technology Co., Ltd. Data input method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Semantic-Based Image Retrieval; Xu Simin; China Masters' Theses Full-text Database, Information Science and Technology Series; 2013-06-15 (No. 6); pp. I138-1506 *

Also Published As

Publication number Publication date
CN109213332A (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN109800325B (en) Video recommendation method and device and computer-readable storage medium
JP2017535007A (en) Classifier training method, type recognition method and apparatus
CN112148980B (en) Article recommending method, device, equipment and storage medium based on user click
CN109144285B (en) Input method and device
CN112508612B (en) Method for training advertisement creative generation model and generating advertisement creative and related device
CN110874145A (en) Input method and device and electronic equipment
CN110781813A (en) Image recognition method and device, electronic equipment and storage medium
CN110619357B (en) Picture processing method and device and electronic equipment
CN106446969B (en) User identification method and device
CN111797262A (en) Poetry generation method and device, electronic equipment and storage medium
CN112784151B (en) Method and related device for determining recommended information
CN110764627A (en) Input method and device and electronic equipment
CN106447747B (en) Image processing method and device
CN111831132A (en) Information recommendation method and device and electronic equipment
CN112308588A (en) Advertisement putting method and device and storage medium
CN109213332B (en) Input method and device of expression picture
CN112036247A (en) Expression package character generation method and device and storage medium
CN109032374B (en) Candidate display method, device, medium and equipment for input method
CN109213799B (en) Recommendation method and device for cell word bank
CN107765884B (en) Sliding input method and device and electronic equipment
CN117350824B (en) Electronic element information uploading and displaying method, device, medium and equipment
CN112214114A (en) Input method and device and electronic equipment
CN111611030A (en) Data processing method and device and data processing device
CN110413133B (en) Input method and device
CN112527125A (en) Information providing method, device and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant