CN112083811B - Candidate item display method and device - Google Patents

Candidate item display method and device

Info

Publication number
CN112083811B
CN112083811B (application CN201910516909.5A)
Authority
CN
China
Prior art keywords
screen
candidate
target
candidate item
user
Prior art date
Legal status
Active
Application number
CN201910516909.5A
Other languages
Chinese (zh)
Other versions
CN112083811A (en)
Inventor
崔欣
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd
Priority claimed from CN201910516909.5A
Publication of CN112083811A
Application granted
Publication of CN112083811B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques

Abstract

The embodiments of the present application disclose a candidate item display method and device. A plurality of corresponding candidate items, including a target candidate item, are determined according to an object to be associated, and a target type label is used to identify the type of content included in the target candidate item. The candidate on-screen parameters of the user corresponding to the object to be associated identify the likelihood that candidates carrying different type labels are selected by that user to go on screen. A sub-on-screen parameter is determined from the candidate on-screen parameters and the target type label of the target candidate, and identifies the likelihood that the target candidate carrying the target type label is selected by the user to go on screen. According to the sub-on-screen parameter, the display position of the corresponding target candidate can be adjusted in a targeted manner; for example, when the on-screen likelihood is high, the display position of the target candidate is moved forward, so that the user preferentially sees candidates that better match his or her input habits, which improves the probability of the target candidate being put on screen.

Description

Candidate item display method and device
Technical Field
The application relates to the field of input methods, in particular to a candidate item display method and device.
Background
An input method can be deployed in an intelligent terminal to facilitate text input by a user. For content such as a character string entered by the user or a character string already on screen, the input method can provide corresponding association candidates containing related content; by selecting the association candidate that matches the input purpose, the user can put its content directly on screen, which improves input efficiency.
Candidates are displayed in a certain order. In the current conventional approach, the display positions of candidates are generally determined by rules based on the object to be associated, the context, and the like, and candidates that better fit these rules are moved forward so that users see them first.
However, this ordering of candidates has difficulty meeting the personalized requirements of individual users.
Disclosure of Invention
To solve this technical problem, the present application provides a candidate item display method and device that improve the probability of a target candidate item being put on screen.
The embodiment of the application discloses the following technical scheme:
in a first aspect, an embodiment of the present application provides a candidate display method, where the method includes:
obtaining target candidates corresponding to the object to be associated according to the object to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying types of contents included in the target candidates;
Determining sub-screen parameters of the target candidate item relative to the user according to the target type label and the candidate item screen parameters of the user corresponding to the object to be associated; the candidate screen parameters are used for identifying the possibility that candidates with different types of labels are selected by the user to screen; the sub-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected to be screen-on by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameters.
Optionally, the adjusting the display position of the target candidate item according to the sub-screen parameter includes:
and adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
Optionally, the adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter includes:
determining a first probability distribution vector corresponding to the candidate on-screen parameter, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating vector distances of the first probability distribution vector and the second probability distribution vector;
And if the vector distance is smaller than a threshold value, the display position of the target candidate item is adjusted forwards.
Optionally, the candidate on-screen parameters of the user are obtained according to the following modes:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and candidate item type labels collected while the user inputs by putting candidates on screen;
and determining candidate item screen parameters of the user according to the candidate item data.
Optionally, the candidate operating parameters include any one or more of the following in combination:
the number of candidate item displays;
the number of candidate item clicks;
modification information of the candidate items after being on screen;
candidate presentation time.
Optionally, the target candidate includes at least one target type tag.
In a second aspect, embodiments of the present application provide a candidate display apparatus, where the apparatus includes an association unit, a determination unit, and an adjustment unit:
the association unit is used for obtaining target candidates corresponding to the objects to be associated according to the objects to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying types of contents included in the target candidates;
The determining unit is used for determining sub-screen parameters of the target candidate item relative to the user according to the target type label and the candidate item screen parameters of the user corresponding to the object to be associated; the candidate screen parameters are used for identifying the possibility that candidates with different types of labels are selected by the user to screen; the sub-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected to be screen-on by the user;
the adjusting unit is used for adjusting the display position of the target candidate item according to the sub-screen-on parameters.
Optionally, the determining unit is further configured to adjust a display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
Optionally, the adjusting unit is further configured to:
determining a first probability distribution vector corresponding to the candidate on-screen parameter, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating vector distances of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, the display position of the target candidate item is adjusted forwards.
Optionally, the determining unit is further configured to obtain candidate on-screen parameters of the user according to the following manner:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and candidate item type labels of the user in a process of passing through a candidate item screen;
and determining candidate item screen parameters of the user according to the candidate item data.
Optionally, the candidate operating parameters include any one or more of the following in combination:
the number of candidate item displays;
the number of candidate item clicks;
modification information of the candidate items after being on screen;
candidate presentation time.
Optionally, the target candidate includes at least one target type tag.
In a third aspect, embodiments of the present application provide an apparatus for candidate presentation, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
obtaining target candidates corresponding to the object to be associated according to the object to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying types of contents included in the target candidates;
Determining sub-screen parameters of the target candidate item relative to the user according to the target type label and the candidate item screen parameters of the user corresponding to the object to be associated; the candidate screen parameters are used for identifying the possibility that candidates with different types of labels are selected by the user to screen; the sub-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected to be screen-on by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameters.
Optionally, the processor is further configured to execute the one or more programs to include instructions for:
and adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
Optionally, the processor is further configured to execute the one or more programs to include instructions for:
determining a first probability distribution vector corresponding to the candidate on-screen parameter, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating vector distances of the first probability distribution vector and the second probability distribution vector;
And if the vector distance is smaller than a threshold value, the display position of the target candidate item is adjusted forwards.
Optionally, the processor is further configured to execute the one or more programs to include instructions for:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and candidate item type labels of the user in a process of passing through a candidate item screen;
and determining candidate item screen parameters of the user according to the candidate item data.
In a fourth aspect, embodiments of the present application provide a machine-readable medium having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform a candidate presentation method as described in the first aspect.
According to the technical scheme, a plurality of corresponding candidates can be determined according to the object to be associated, and the target candidate can be one of them; the target candidate carries a target type label used to identify the type of content it includes. By acquiring in advance the candidate on-screen parameters of the user corresponding to the object to be associated, the likelihood that candidates with different type labels are selected by the user to go on screen can be identified, and the sub-on-screen parameter can be determined from the candidate on-screen parameters and the target type label of the target candidate; the sub-on-screen parameter identifies the likelihood that the target candidate with the target type label is selected by the user to go on screen. Because a user's candidate on-screen parameters reflect the user's preference for type labels when putting candidates on screen, the sub-on-screen parameter determined from them reflects the degree to which the type label of the corresponding candidate matches the user's preference. Therefore, according to the sub-on-screen parameter, the display position of the corresponding target candidate can be adjusted in a targeted manner; for example, when the on-screen likelihood is higher, the display position of the target candidate is moved forward, so that the user preferentially sees candidates that better match his or her input habits, which improves the probability of the target candidate being put on screen and improves the user's input experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive faculty for a person skilled in the art.
Fig. 1 is a method flowchart of a candidate display method provided in an embodiment of the present application;
fig. 2 is a device structure diagram of a candidate display device according to an embodiment of the present application;
FIG. 3 is a block diagram of a candidate display device according to an embodiment of the present application;
fig. 4 is a block diagram of a server provided in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
In the conventional approach, the display position of a candidate is generally determined by rules based on the object to be associated, the context, and the like, and candidates that better fit these rules are moved forward so that the user sees them first. However, such presentation rules do not take into account the personalized input preferences of different users, so the candidates actually presented may not match different users' expectations, which can lead to a poor input experience for some users.
Therefore, the embodiment of the application provides a candidate item display method in which a type label is set for a candidate according to the type of content the candidate includes. For the candidates obtained from the object to be associated, the sub-on-screen parameters of different candidates can be determined from the candidate on-screen parameters, which identify the likelihood that candidates with different type labels are selected by the user corresponding to the object to be associated. The ordering positions of the corresponding candidates are then adjusted according to the sub-on-screen parameters.
Because the candidate on-screen parameters of a user can reflect the preference of the type label of the candidate when the user screens the candidate, the sub-on-screen parameters determined based on the candidate on-screen parameters can reflect the degree to which the type label of the corresponding candidate accords with the preference of the user. Therefore, according to the screen display possibility reflected by the sub-screen display parameters, the display position of the target candidate item can be adjusted in a targeted manner, for example, when the screen display possibility is high, the display position of the target candidate item is adjusted forwards, so that the user can preferentially see the candidate item which better accords with the input habit of the user, the probability that the target candidate item is displayed is improved, and the input experience of the user is improved.
The candidate ranking method provided by the embodiment of the application can be applied to various processing devices capable of deploying the input method, and the processing devices can be terminal devices or servers. When the processing device is a terminal device, the terminal device may specifically be a smart phone, a computer, a personal digital assistant (Personal Digital Assistant, PDA), a tablet computer, or the like.
In some cases, the processing device may include a terminal device and may further include a server; the server may obtain candidates from the terminal device, execute the candidate ranking method provided by the embodiment of the present application, and return the adjusted candidate display positions to the terminal device. The server may be an independent server or a cluster of servers. It should be noted that, for convenience of description, the candidate ranking method provided in the embodiment of the present application is described below with the terminal device as the execution body.
As shown in fig. 1, the candidate ranking method provided in the embodiment of the present application includes the following steps:
s101: and obtaining target candidates corresponding to the object to be associated according to the object to be associated.
The object to be associated may comprise different types of content and is determined by the user. It may be content entered by the user, such as a character string or text content corresponding to speech, or content selected by the user, such as the character string most recently put on screen, or the character string beside the input focus after the user adjusts the position of the input focus.
The input method can associate corresponding candidates according to the object to be associated. The content included in a candidate may be content corresponding to the character string input by the user, or content obtained by association from a text string determined by the user.
In the embodiment of the application, the content included in a candidate may take textual forms such as characters, pinyin, or strokes of various languages, and may also take forms such as kaomoji (text emoticons), emoji or sticker expressions, pictures, and animated images.
The input method may associate a plurality of candidates from the object to be associated, and the target candidate is any one of these candidates. The target candidate has a target type tag, which is used to identify the type of content included in the target candidate.
The target type label of the target candidate item belongs to a type label, and can be preset or obtained according to content analysis included in the target candidate item. The target type tag is used for reflecting the type of content included in the target candidate, and may include, for example, a semantic type, a part-of-speech type, an emotion type, a form type, and the like.
For example, the semantic type may describe what the content of the target candidate expresses semantically; the part-of-speech type may be the part of speech of the content included in the target candidate, such as noun or verb; the emotion type may be the emotion conveyed by the content, such as vulgar, happy, or angry; and the form type may be the literary form of the content, such as poetry or colloquial speech.
The object type tag may embody types other than the above types, and the embodiments of the present application do not limit the types in other division manners.
It should be noted that a target candidate may have one or more target type tags. When there are multiple tags, they characterize the content included in the target candidate from different classification angles.
For example, if the object to be associated is "the bright moonlight before my bed" (the opening line of a well-known poem), and the target candidate is "two shoes on the ground", the target candidate may carry a single target type tag, "colloquial", or two target type tags, "colloquial" and "vulgar".
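As an illustration only, a candidate and its target type tags might be represented as in the following minimal sketch (the class and field names are assumptions for this description, not part of the disclosure):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Candidate:
    content: str                                         # e.g. "two shoes on the ground"
    type_tags: List[str] = field(default_factory=list)   # e.g. ["colloquial", "vulgar"]

target_candidate = Candidate(
    content="two shoes on the ground",
    type_tags=["colloquial", "vulgar"],
)
```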
S102: and determining sub-screen parameters of the target candidate item relative to the user according to the target type label and the candidate item screen parameters of the user corresponding to the object to be associated.
The user corresponding to the object to be associated may be the user who determined the object to be associated, or the user logged in to the input method.
After the target type tag is determined, the candidate on-screen parameters of the user may also be determined. These candidate on-screen parameters are used to identify the likelihood that candidates with different type tags are selected by the user to go on screen. That is, the candidate on-screen parameters reflect the user's selection preferences for candidates with different type tags while inputting via on-screen candidates; for example, they make clear which type tags the user prefers to put on screen and which the user does not.
The aforementioned different types of labels may constitute a set of type labels that can be added for the candidate, e.g. ten types of labels are included in the set of type labels, and the target type label of the target candidate belongs to one or more of these ten types.
After the relationship between the different type tags and the target type tags is established, a sub-on-screen parameter corresponding to the target candidate can be determined from the candidate on-screen parameters; the sub-on-screen parameter is used to identify the likelihood that the target candidate with the target type tags is selected by the user to go on screen.
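A minimal sketch of this step is shown below, assuming the candidate on-screen parameters are stored as a mapping from type tag to on-screen likelihood and that the sub-on-screen parameter of a multi-tag candidate is taken as the average over its tags (the function name, the mapping, and the averaging rule are assumptions for illustration):

```python
from typing import Dict, List

def sub_on_screen_parameter(on_screen_params: Dict[str, float],
                            target_type_tags: List[str]) -> float:
    # Likelihood that a candidate carrying the given target type tags
    # is selected by this user to go on screen.
    probs = [on_screen_params.get(tag, 0.0) for tag in target_type_tags]
    return sum(probs) / len(probs) if probs else 0.0

# Example with assumed values: a user who favors poetry over vulgar content.
params = {"poetry": 0.6, "colloquial": 0.25, "vulgar": 0.05}
print(sub_on_screen_parameter(params, ["colloquial", "vulgar"]))  # 0.15
```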
S103: and adjusting the display position of the target candidate item according to the sub-screen-on parameters.
The sub-on-screen parameter reflects whether the target type tag of the target candidate matches the user's on-screen preference. Therefore, according to the sub-on-screen parameter, the display position of the corresponding target candidate can be adjusted in a targeted manner; for example, when the on-screen likelihood is higher, the display position of the target candidate is moved forward, so that the user preferentially sees candidates that better match his or her input habits, which improves the probability of the target candidate being put on screen and improves the user's input experience.
For example, user a is accustomed to putting candidates with the type tag "colloquial" on screen. User b is accustomed to putting candidates with the type tag "poetry" on screen and dislikes putting candidates with the type tag "vulgar" on screen. In the embodiment of the application, if both user a and user b determine "the bright moonlight before my bed" as the object to be associated, then for user a the candidate "two shoes on the ground", with the type tags "colloquial" and "vulgar", will be displayed in a front position; for user b, the candidate "suspected to be frost on the ground" (the poem's next line), with the type tag "poetry", will be displayed in a front position, while "two shoes on the ground", with the type tags "colloquial" and "vulgar", will be displayed in a rear position.
It can be seen that, according to the object to be associated determined by the user, a plurality of corresponding candidates can be determined, and the target candidate can be one of the candidates, where the target candidate has a target type tag for identifying the type of text content included in the target candidate. By pre-acquiring candidate screen parameters of the user, the possibility that candidates with different types of labels are selected by the user to screen can be identified, and sub-screen parameters can be determined according to the candidate screen parameters and target type labels of target candidates, wherein the sub-screen parameters can identify the possibility that target candidates with target type labels are selected by the user to screen. The candidates after the display position is adjusted by the sub-screen parameters can be more in line with the screen preference of the user, and the input experience of the user is improved.
It should be noted that, when adjusting the display position of the target candidate, the user's candidate on-screen parameters may additionally be introduced as a basis for adjustment. In this way, the display position of the target candidate is adjusted not only from the type of content included in the target candidate but also from an overall perspective (namely the user's overall on-screen preference), which further improves the accuracy of the adjustment.
Thus, in one possible implementation, S103 may further perform the step of adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
By also taking the candidate on-screen parameters into account, the display position of the target candidate can be determined more comprehensively, and a more accurate adjustment strategy can be provided when the target candidate carries a larger number of target type tags.
That is, when the target candidate has a single target type tag, the sub-on-screen parameter alone directly reflects how well the target candidate matches the user's on-screen preference. When the target candidate has multiple target type tags, the degree to which it matches the user's on-screen preference can be determined by additionally combining the user's candidate on-screen parameters.
When the display position of the target candidate is adjusted by combining the candidate on-screen parameters and the sub-on-screen parameter, an optional implementation determines how to adjust the display position by calculating the degree of agreement between the sub-on-screen parameter and the candidate on-screen parameters.
In this implementation, the following steps are performed.
S201: and determining a first probability distribution vector corresponding to the candidate on-screen parameter, and a second probability distribution vector corresponding to the sub-on-screen parameter.
The candidate on-screen parameters represent the likelihood that candidates with different type tags are selected by the user to go on screen. They can be converted into a vector representation for subsequent computation. For example, the first probability distribution vector may be [a1, a2, a3, …, an], where ai represents the likelihood that a candidate is selected by the user to go on screen when it carries the i-th of the n type tags.
Similarly, the second probability distribution vector corresponding to the sub-on-screen parameter may take the same form, except that only the positions corresponding to the target type tags hold valid values; positions not corresponding to a target type tag may be set to 0 or to another default value.
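The two vectors described above could be built as in the following sketch, assuming a fixed, ordered tag set of n types and setting positions outside the target type tags to 0 (the tag set and function names are assumptions):

```python
from typing import Dict, List

TAG_SET = ["poetry", "colloquial", "vulgar", "noun", "verb"]  # assumed example tag set

def first_probability_vector(on_screen_params: Dict[str, float]) -> List[float]:
    # Position i holds the likelihood that a candidate carrying the i-th tag
    # is selected by the user to go on screen.
    return [on_screen_params.get(tag, 0.0) for tag in TAG_SET]

def second_probability_vector(on_screen_params: Dict[str, float],
                              target_type_tags: List[str]) -> List[float]:
    # Only positions of the target type tags keep valid values; the rest are 0.
    return [on_screen_params.get(tag, 0.0) if tag in target_type_tags else 0.0
            for tag in TAG_SET]
```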
S202: vector distances of the first probability distribution vector and the second probability distribution vector are calculated. If the vector distance is less than the threshold, S203 is performed.
S203: and adjusting the display position of the target candidate item forward.
The embodiment of the application does not limit the distance calculation mode of the first probability distribution vector and the second probability distribution vector. The similarity degree of the sub-screen parameters and the candidate screen parameters can be shown through the vector distance obtained through calculation. The smaller the vector distance, the higher the degree of similarity.
Therefore, when the calculated vector distance is smaller than the threshold value, the possibility that the target candidate accords with the screen preference of the user is high, and the target candidate is preferentially displayed to the user by adjusting the display position of the target candidate forward, so that the input efficiency of the user is improved.
On the other hand, when the calculated vector distance is too large, the target candidate is considered unlikely to match the user's on-screen preference, and its display position can be adjusted backward, so that the user does not have to look through candidates that do not match his or her on-screen preference.
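Steps S201 to S203 could then be realized as in the following sketch; the Euclidean distance, the threshold value, and the one-position forward/backward move are assumptions, since the embodiment does not limit the distance measure or the adjustment magnitude:

```python
import math
from typing import List

def adjust_display_position(first_vec: List[float], second_vec: List[float],
                            position: int, threshold: float = 0.5) -> int:
    # Vector distance between the two probability distribution vectors.
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(first_vec, second_vec)))
    if distance < threshold:
        return max(0, position - 1)  # move the target candidate's display position forward
    return position + 1              # otherwise move it backward
```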
Next, embodiments of the present application will focus on the manner in which candidate on-screen parameters of a user are obtained.
The candidate on-screen parameters of a user are used to identify the likelihood that candidates with different types of labels are selected by the user for on-screen. That is, the candidate on-screen parameters can represent personalized information of the user in the process of selecting the candidate on-screen, and the candidate on-screen parameters of different users can be different.
The candidate on-screen parameters may be obtained according to the following manner:
s301: and acquiring candidate item data of the user.
The candidate data may include the candidate operating parameters and the type tags of candidates involved while the user inputs by putting candidates on screen. The candidate data belongs to the user's past history and can be continuously updated as the user uses the input method.
The candidate operating parameters identify data related to candidates with different type tags during the user's on-screen input; such data can reflect, from different angles, the user's on-screen preferences for candidates carrying different type tags.
The candidate operational parameters may include, for example, any one or more of the following in combination:
the number of candidate item displays;
the number of candidate item clicks;
modification information of the candidate items after being on screen;
Candidate presentation time.
The number of times candidates with a given type tag are displayed represents how often candidates with that tag are presented to the user; the greater this number, the more likely the user is to select a candidate with that tag to go on screen.
The number of times candidates with a given type tag are clicked represents how often candidates with that tag are selected by the user to go on screen; the greater this number, the more likely that tag matches the user's on-screen preference.
The modification information after a candidate with a given type tag goes on screen indicates whether, and how, the candidate was modified, for example deleted, after the user put it on screen.
A candidate presentation time with one type of tag may represent a duration that a candidate with that type of tag is presented.
The above are just a few of the possible forms of candidate operating parameters, which may also include other possible forms.
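For illustration, one per-tag record of these candidate operating parameters might look like the following sketch (field names and the time unit are assumptions):

```python
from dataclasses import dataclass

@dataclass
class TagStats:
    display_count: int = 0      # number of times candidates with this tag were displayed
    click_count: int = 0        # number of times they were selected to go on screen
    post_screen_edits: int = 0  # modifications (e.g. deletions) after going on screen
    display_time: float = 0.0   # cumulative presentation time (assumed unit: seconds)
```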
S302: and determining candidate item screen parameters of the user according to the candidate item data.
From the possible candidate operating parameters above, it can be seen that they reveal which type tags a user prefers to put on screen, and which the user does not, when selecting candidates. The candidate on-screen parameters of different users can thus be determined.
In one possible implementation manner, the above steps may be implemented through a network model, so as to complete the determination of the candidate on-screen parameters.
Embodiments of the present application are further described below by way of specific examples.
(1) Data preparation:
Candidates containing text content, together with type tags for those candidates, may be extracted automatically or obtained in other ways. Before model training, an initial model score may be assigned to each type tag. For example, the candidate "two shoes on the ground" carries the type tag "vulgar", and the initial model score of that tag is S.
(2) The data is deployed into a cloud or client model.
(3) A counter and a score indicator are set for each type of tag at the cloud or client for recording candidate operation parameters of candidates having the type of tag.
Taking the type tag "vulgar" as an example, the counter records the number of times candidates with the tag "vulgar" are displayed, the number of times they are clicked, and the number of times they are removed by backspace, adding 1 to the corresponding count each time; the score indicator works like the counter, but accumulates the model score of the type tag each time.
(4) The data collected by the counters and score indicators is used to train a personalized model for the user, which outputs a probability distribution over the type tags (i.e., the candidate on-screen parameters); this probability distribution characterizes the user's preference when selecting candidates and can be expressed quantitatively. A simplified sketch of this step is given after step (5) below.
(5) When the object to be associated determined by the user is obtained, the corresponding target type tags are determined for any candidate (namely, a target candidate) corresponding to the object to be associated, and the sub-on-screen parameter corresponding to those target type tags is determined from the probability distribution obtained in the steps above.
How to adjust the display position of the target candidate can then be determined either from the sub-on-screen parameter alone, or from the vector distance between the sub-on-screen parameter and the candidate on-screen parameters.
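As a simplified stand-in for the personalized model in step (4), the following sketch turns per-tag counter data into a normalized probability distribution over type tags (the candidate on-screen parameters). The click-through-rate scoring rule is an assumption for illustration and is not the trained model described above:

```python
from typing import Dict

def candidate_on_screen_parameters(stats: Dict[str, Dict[str, int]]) -> Dict[str, float]:
    # stats maps each type tag to its counter values, e.g.
    # {"vulgar": {"displayed": 40, "clicked": 2}, "poetry": {"displayed": 30, "clicked": 12}}
    scores = {tag: c["clicked"] / max(c["displayed"], 1) for tag, c in stats.items()}
    total = sum(scores.values()) or 1.0
    return {tag: score / total for tag, score in scores.items()}
```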
Fig. 2 is a device structure diagram of a candidate display device provided in an embodiment of the present application, where the device includes an association unit 201, a determination unit 202, and an adjustment unit 203:
the associating unit 201 is configured to obtain, according to an object to be associated, a target candidate corresponding to the object to be associated, where the target candidate has a target type tag, and the target type tag is used to identify a type of content included in the target candidate;
The determining unit 202 is configured to determine, according to the target type tag and a candidate on-screen parameter of a user corresponding to the object to be associated, a sub-on-screen parameter of the target candidate relative to the user; the candidate screen parameters are used for identifying the possibility that candidates with different types of labels are selected by the user to screen; the sub-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected to be screen-on by the user;
the adjusting unit 203 is configured to adjust a display position of the target candidate according to the sub-screen parameter.
Optionally, the determining unit is further configured to adjust a display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
Optionally, the adjusting unit is further configured to:
determining a first probability distribution vector corresponding to the candidate on-screen parameter, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating vector distances of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, the display position of the target candidate item is adjusted forwards.
Optionally, the determining unit is further configured to obtain candidate on-screen parameters of the user according to the following manner:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and candidate item type labels of the user in a process of passing through a candidate item screen;
and determining candidate item screen parameters of the user according to the candidate item data.
Optionally, the candidate operating parameters include any one or more of the following in combination:
the number of candidate item displays;
the number of candidate item clicks;
modification information of the candidate items after being on screen;
candidate presentation time.
Optionally, the target candidate includes at least one target type tag.
The arrangement of each unit or module of the apparatus in this embodiment of the present application may be implemented by referring to the method shown in fig. 1, which is not described herein.
It can be seen that a plurality of corresponding candidates can be determined according to the object to be associated, and the target candidate can be one of them; the target candidate carries a target type tag used to identify the type of content it includes. By acquiring in advance the candidate on-screen parameters of the user corresponding to the object to be associated, the likelihood that candidates with different type tags are selected by the user to go on screen can be identified, and the sub-on-screen parameter can be determined from the candidate on-screen parameters and the target type tag of the target candidate; the sub-on-screen parameter identifies the likelihood that the target candidate with the target type tag is selected by the user to go on screen. Because a user's candidate on-screen parameters reflect the user's preference for type tags when putting candidates on screen, the sub-on-screen parameter determined from them reflects the degree to which the type tag of the corresponding candidate matches the user's preference. Therefore, according to the sub-on-screen parameter, the display position of the corresponding target candidate can be adjusted in a targeted manner; for example, when the on-screen likelihood is higher, the display position of the target candidate is moved forward, so that the user preferentially sees candidates that better match his or her input habits, which improves the probability of the target candidate being put on screen and improves the user's input experience.
Referring to fig. 3, a block diagram of a candidate presentation device is shown according to an example embodiment. For example, apparatus 300 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 3, apparatus 300 may include one or more of the following components: a processing component 302, a memory 304, a power supply component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 312, a sensor component 314, and a communication component 316.
The processing component 302 generally controls overall operation of the apparatus 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 302 may include one or more processors 320 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 302 can include one or more modules that facilitate interactions between the processing component 302 and other components. For example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
Memory 304 is configured to store various types of data to support operations at device 300. Examples of such data include instructions for any application or method operating on the device 300, contact data, phonebook data, messages, pictures, videos, and the like. The memory 304 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 306 provides power to the various components of the device 300. The power supply components 306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 300.
The multimedia component 308 includes a screen between the device 300 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 300 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 310 is configured to output and/or input audio signals. For example, the audio component 310 includes a Microphone (MIC) configured to receive external audio signals when the device 300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 further comprises a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 314 includes one or more sensors for providing status assessment of various aspects of the apparatus 300. For example, the sensor assembly 314 may detect the on/off state of the device 300, the relative positioning of the components, such as the display and keypad of the apparatus 300, the sensor assembly 314 may also detect a change in position of the apparatus 300 or one component of the apparatus 300, the presence or absence of user contact with the apparatus 300, the orientation or acceleration/deceleration of the apparatus 300, and a change in temperature of the apparatus 300. The sensor assembly 314 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices. The apparatus 300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
Specifically, an embodiment of the present invention provides a candidate presentation device 300, including a memory 304, and one or more programs, wherein the one or more programs are stored in the memory 304, and configured to be executed by the one or more processors 320, the one or more programs include instructions for:
Obtaining target candidates corresponding to the object to be associated according to the object to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying types of contents included in the target candidates;
determining sub-screen parameters of the target candidate item relative to the user according to the target type label and the candidate item screen parameters of the user corresponding to the object to be associated; the candidate screen parameters are used for identifying the possibility that candidates with different types of labels are selected by the user to screen; the sub-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected to be screen-on by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameters.
Further, the processor 320 is specifically configured to execute the one or more programs including instructions for:
and adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
Further, the processor 320 is specifically configured to execute the one or more programs including instructions for:
Determining a first probability distribution vector corresponding to the candidate on-screen parameter, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating vector distances of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, the display position of the target candidate item is adjusted forwards.
Further, the processor 320 is specifically configured to execute the one or more programs including instructions for:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and candidate item type labels of the user in a process of passing through a candidate item screen;
and determining candidate item screen parameters of the user according to the candidate item data.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 304, including instructions executable by the processor 320 of the apparatus 300 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A machine-readable medium, for example, the machine-readable medium may be a non-transitory computer-readable storage medium, which when executed by a processor of an apparatus (terminal or server) causes the apparatus to perform a candidate presentation method, the method comprising:
obtaining target candidates corresponding to the object to be associated according to the object to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying types of contents included in the target candidates;
determining sub-screen parameters of the target candidate item relative to the user according to the target type label and the candidate item screen parameters of the user corresponding to the object to be associated; the candidate screen parameters are used for identifying the possibility that candidates with different types of labels are selected by the user to screen; the sub-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected to be screen-on by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameters.
Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 400 may vary considerably in configuration or performance and may include one or more central processing units (central processing units, CPU) 422 (e.g., one or more processors) and memory 432, one or more storage media 430 (e.g., one or more mass storage devices) storing applications 442 or data 444. Wherein memory 432 and storage medium 430 may be transitory or persistent storage. The program stored on the storage medium 430 may include one or more modules (not shown), each of which may include a series of instruction operations on a server. Still further, the central processor 422 may be configured to communicate with the storage medium 430 and execute a series of instruction operations in the storage medium 430 on the server 400.
The server 400 may also include one or more power supplies 426, one or more wired or wireless network interfaces 440, one or more input/output interfaces 448, one or more keyboards 446, and/or one or more operating systems 441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden. The foregoing is merely illustrative of embodiments of this invention, and it will be appreciated by those skilled in the art that modifications and variations may be made without departing from the principles of the invention; all such modifications and variations are intended to fall within the scope of the invention.

Claims (17)

1. A method of candidate presentation, the method comprising:
obtaining, according to an object to be associated, target candidates corresponding to the object to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying the types of content included in the target candidates;
determining a sub-on-screen parameter of the target candidate item relative to the user according to the target type label and candidate on-screen parameters of the user corresponding to the object to be associated; the candidate on-screen parameters are used for identifying the possibility that candidates with different type labels are selected by the user for on-screen display; the sub-on-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected by the user for on-screen display;
and adjusting the display position of the target candidate item according to the sub-on-screen parameter.
2. The method of claim 1, wherein adjusting the display position of the target candidate item according to the sub-on-screen parameter comprises:
adjusting the display position of the target candidate item according to the candidate on-screen parameters and the sub-on-screen parameter.
3. The method of claim 2, wherein adjusting the display position of the target candidate item according to the candidate on-screen parameters and the sub-on-screen parameter comprises:
determining a first probability distribution vector corresponding to the candidate on-screen parameters, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance between the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, adjusting the display position of the target candidate item forward.
4. The method of claim 1, wherein the candidate on-screen parameters of the user are obtained as follows:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate operation parameters and candidate type labels of the user during the process of putting candidates on screen;
and determining the candidate on-screen parameters of the user according to the candidate item data.
5. The method of claim 4, wherein the candidate operation parameters comprise any one or a combination of more of the following:
the number of candidate item displays;
the number of candidate item clicks;
modification information of the candidate items after being put on screen;
candidate item presentation time.
6. The method of any of claims 1-5, wherein the target candidates comprise at least one target type label.
7. A candidate presentation device, the device comprising an association unit, a determining unit, and an adjusting unit, wherein:
the association unit is used for obtaining, according to an object to be associated, target candidates corresponding to the object to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying the types of content included in the target candidates;
the determining unit is used for determining a sub-on-screen parameter of the target candidate item relative to the user according to the target type label and candidate on-screen parameters of the user corresponding to the object to be associated; the candidate on-screen parameters are used for identifying the possibility that candidates with different type labels are selected by the user for on-screen display; the sub-on-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected by the user for on-screen display;
the adjusting unit is used for adjusting the display position of the target candidate item according to the sub-on-screen parameter.
8. The device of claim 7, wherein the determining unit is further configured to adjust the display position of the target candidate item according to the candidate on-screen parameters and the sub-on-screen parameter.
9. The device of claim 8, wherein the adjusting unit is further configured to:
determine a first probability distribution vector corresponding to the candidate on-screen parameters, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculate a vector distance between the first probability distribution vector and the second probability distribution vector;
and, if the vector distance is smaller than a threshold value, adjust the display position of the target candidate item forward.
10. The device of claim 7, wherein the determining unit is further configured to obtain the candidate on-screen parameters of the user as follows:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate operation parameters and candidate type labels of the user during the process of putting candidates on screen;
and determining the candidate on-screen parameters of the user according to the candidate item data.
11. The device of claim 10, wherein the candidate operation parameters comprise any one or a combination of more of the following:
the number of candidate item displays;
the number of candidate item clicks;
modification information of the candidate items after being put on screen;
candidate item presentation time.
12. The device of any of claims 7-11, wherein the target candidates comprise at least one target type label.
13. An apparatus for candidate presentation, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
obtaining, according to an object to be associated, target candidates corresponding to the object to be associated, wherein the target candidates are provided with target type labels, and the target type labels are used for identifying the types of content included in the target candidates;
determining a sub-on-screen parameter of the target candidate item relative to the user according to the target type label and candidate on-screen parameters of the user corresponding to the object to be associated; the candidate on-screen parameters are used for identifying the possibility that candidates with different type labels are selected by the user for on-screen display; the sub-on-screen parameter is used for identifying the possibility that the target candidate item with the target type label is selected by the user for on-screen display;
and adjusting the display position of the target candidate item according to the sub-on-screen parameter.
14. The apparatus of claim 13, wherein the one or more programs further comprise instructions for:
adjusting the display position of the target candidate item according to the candidate on-screen parameters and the sub-on-screen parameter.
15. The apparatus of claim 14, wherein the one or more programs further comprise instructions for:
determining a first probability distribution vector corresponding to the candidate on-screen parameters, and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance between the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, adjusting the display position of the target candidate item forward.
16. The apparatus of claim 13, wherein the one or more programs further comprise instructions for:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate operation parameters and candidate type labels of the user during the process of putting candidates on screen;
and determining the candidate on-screen parameters of the user according to the candidate item data.
17. A machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the candidate presentation method of any of claims 1 to 6.
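By way of illustration only, the following sketch shows one way the behaviour recited in claims 1, 3, and 4 above might be realised in code. It is not the patented implementation: the choice of Euclidean distance, the default threshold, the way per-type on-screen probabilities are derived from selection counts, and every name used below (Candidate, candidate_on_screen_distribution, sub_on_screen_distribution, vector_distance, adjust_candidate_positions) are assumptions introduced here for readability.

# Illustrative sketch only: the distance metric, threshold, and all names below
# are assumptions made for this example, not taken from the patent text.
from dataclasses import dataclass
from math import sqrt


@dataclass
class Candidate:
    text: str
    type_labels: list[str]  # e.g. ["emoji"] or ["person_name", "place_name"]


def candidate_on_screen_distribution(selection_counts: dict[str, int]) -> dict[str, float]:
    # Turn per-type selection counts (one reading of the "candidate on-screen
    # parameters" of claim 4) into a probability distribution over type labels.
    total = sum(selection_counts.values()) or 1
    return {label: count / total for label, count in selection_counts.items()}


def sub_on_screen_distribution(candidate: Candidate,
                               user_dist: dict[str, float]) -> dict[str, float]:
    # Restrict the user's distribution to the candidate's own type labels and
    # renormalise: a stand-in for the "sub-on-screen parameter" of claim 1.
    kept = {label: user_dist.get(label, 0.0) for label in candidate.type_labels}
    norm = sum(kept.values())
    return {label: (kept[label] / norm if norm > 0 and label in kept else 0.0)
            for label in user_dist}


def vector_distance(p: dict[str, float], q: dict[str, float]) -> float:
    # Euclidean distance between two distributions over the union of their labels.
    labels = set(p) | set(q)
    return sqrt(sum((p.get(label, 0.0) - q.get(label, 0.0)) ** 2 for label in labels))


def adjust_candidate_positions(candidates: list[Candidate],
                               selection_counts: dict[str, int],
                               threshold: float = 0.5) -> list[Candidate]:
    # Candidates whose sub-distribution is close to the user's overall on-screen
    # distribution (distance below the threshold) are moved forward, as in claim 3.
    user_dist = candidate_on_screen_distribution(selection_counts)
    scored = []
    for cand in candidates:
        dist = vector_distance(user_dist, sub_on_screen_distribution(cand, user_dist))
        scored.append((dist >= threshold, dist, cand))  # promoted items sort first
    scored.sort(key=lambda item: (item[0], item[1]))
    return [cand for _, _, cand in scored]

Calling adjust_candidate_positions(candidates, {"emoji": 40, "person_name": 10}) would move candidates tagged with the user's dominant on-screen types ahead of the others; whether the distributions are built per user, per session, or per label group is left open in this sketch.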
CN201910516909.5A 2019-06-14 2019-06-14 Candidate item display method and device Active CN112083811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910516909.5A CN112083811B (en) 2019-06-14 2019-06-14 Candidate item display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910516909.5A CN112083811B (en) 2019-06-14 2019-06-14 Candidate item display method and device

Publications (2)

Publication Number Publication Date
CN112083811A CN112083811A (en) 2020-12-15
CN112083811B true CN112083811B (en) 2024-01-30

Family

ID=73734070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910516909.5A Active CN112083811B (en) 2019-06-14 2019-06-14 Candidate item display method and device

Country Status (1)

Country Link
CN (1) CN112083811B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016107344A1 (en) * 2014-12-30 2016-07-07 北京奇虎科技有限公司 Method and device for screening on-screen candidate items of input method
CN106774970A (en) * 2015-11-24 2017-05-31 北京搜狗科技发展有限公司 The method and apparatus being ranked up to the candidate item of input method
CN108304078A (en) * 2017-01-11 2018-07-20 北京搜狗科技发展有限公司 A kind of input method, device and electronic equipment
CN109799916A (en) * 2017-11-16 2019-05-24 北京搜狗科技发展有限公司 A kind of candidate item association method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on user aggregation algorithm based on tag co-occurrence network; 王珉 (Wang Min); 王永滨 (Wang Yongbin); Computer Engineering and Applications, Issue 02; full text *

Also Published As

Publication number Publication date
CN112083811A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
US10296201B2 (en) Method and apparatus for text selection
CN107102746B (en) Candidate word generation method and device and candidate word generation device
CN107315487B (en) Input processing method and device and electronic equipment
CN111339744A (en) Ticket information display method, device and storage medium
CN112508612B (en) Method for training advertisement creative generation model and generating advertisement creative and related device
CN112784151B (en) Method and related device for determining recommended information
CN109842688B (en) Content recommendation method and device, electronic equipment and storage medium
CN109901726B (en) Candidate word generation method and device and candidate word generation device
CN109799916B (en) Candidate item association method and device
CN109144286B (en) Input method and device
CN112083811B (en) Candidate item display method and device
CN111198620A (en) Method, device and equipment for presenting input candidate items
CN107515853B (en) Cell word bank pushing method and device
CN112052395B (en) Data processing method and device
US10871832B2 (en) Method and device for obtaining operation entry, and storage medium
CN111103986B (en) User word stock management method and device, and user word stock input method and device
CN109725736B (en) Candidate sorting method and device and electronic equipment
CN108983992B (en) Candidate item display method and device with punctuation marks
CN109388252B (en) Input method and device
CN112446720B (en) Advertisement display method and device
CN108874170B (en) Input method and device
CN112306251A (en) Input method, input device and input device
CN107122059B (en) Method and device for character input and electronic equipment
CN111427459B (en) Method and related device for optimizing input during user communication
CN109213799B (en) Recommendation method and device for cell word bank

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant