CN112083811A - Candidate item display method and device - Google Patents

Candidate item display method and device

Info

Publication number
CN112083811A
CN112083811A CN201910516909.5A CN201910516909A
Authority
CN
China
Prior art keywords
candidate item
screen
target
user
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910516909.5A
Other languages
Chinese (zh)
Other versions
CN112083811B (en)
Inventor
崔欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN201910516909.5A priority Critical patent/CN112083811B/en
Publication of CN112083811A publication Critical patent/CN112083811A/en
Application granted granted Critical
Publication of CN112083811B publication Critical patent/CN112083811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a candidate item display method and device. A plurality of corresponding candidate items, including a target candidate item, are determined according to an object to be associated, and the target candidate item carries a target type tag that identifies the type of content included in the target candidate item. The candidate item on-screen parameter of the user corresponding to the object to be associated identifies the likelihood that candidate items with different type tags are selected by the user for on-screen entry; a sub on-screen parameter can be determined from the candidate item on-screen parameter and the target type tag of the target candidate item, and identifies the likelihood that the target candidate item with the target type tag is selected by the user for on-screen entry. According to the sub on-screen parameter, the display position of the corresponding target candidate item can be adjusted in a targeted manner, for example, moved forward when the on-screen likelihood is high, so that the user preferentially views candidate items that better match the user's input habits, improving the probability that the target candidate item is entered on screen.

Description

Candidate item display method and device
Technical Field
The present application relates to the field of input methods, and in particular, to a candidate item display method and apparatus.
Background
An input method can be deployed in an intelligent terminal to facilitate text input by a user. For content such as a character string entered by the user or a character string already displayed, the input method can provide corresponding association candidate items that include related content; by selecting the association candidate that matches the input intention, the user can directly enter the content of that candidate on screen, which improves input efficiency.
Candidate items are presented in a certain order. In the conventional manner, the display positions of candidate items are generally determined according to rules such as the object to be associated and the context, and the display positions of candidate items that better fit the rules are moved forward so that the user views them first.
However, this way of sorting candidate items can hardly meet the personalized needs of users.
Disclosure of Invention
In order to solve the above technical problem, the application provides a candidate item display method and device that improve the probability that a target candidate item is selected for on-screen entry.
The embodiment of the application discloses the following technical scheme:
in a first aspect, an embodiment of the present application provides a candidate item display method, where the method includes:
obtaining a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type label, and the target type label is used for identifying the type of content included in the target candidate item;
determining a sub-screen-on parameter of the target candidate item relative to the user according to the target type tag and the candidate item screen-on parameter of the user corresponding to the object to be associated; the candidate item on-screen parameter is used to identify the likelihood that a candidate item with a different type of tag is selected on-screen by the user; the sub-on-screen parameter is used to identify the likelihood that the target candidate item with the target type tag is selected on-screen by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameter.
Optionally, the adjusting the display position of the target candidate item according to the sub on-screen parameter includes:
adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub on-screen parameter.
Optionally, the adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter includes:
determining a first probability distribution vector corresponding to the candidate on-screen parameter and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, adjusting the display position of the target candidate item forwards.
Optionally, the candidate item on-screen parameter of the user is obtained in the following manner:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and type tags of candidate items recorded in the course of the user's historical on-screen entry of candidate items;
and determining the candidate item on-screen parameter of the user according to the candidate item data.
Optionally, the candidate item operation parameters include any one or more of the following:
the number of times the candidate item is presented;
the number of times the candidate item is clicked;
modification information after the candidate item is entered on screen;
the duration for which the candidate item is presented.
Optionally, the target candidate item includes at least one target type tag.
In a second aspect, an embodiment of the present application provides a candidate item presentation apparatus, where the apparatus includes an association unit, a determination unit, and an adjustment unit:
the association unit is used for obtaining a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type label, and the target type label is used for identifying the type of content included in the target candidate item;
the determining unit is used for determining sub-screen parameters of the target candidate item relative to the user according to the target type tag and the candidate item screen parameters of the user corresponding to the object to be associated; the candidate item on-screen parameter is used to identify the likelihood that a candidate item with a different type of tag is selected on-screen by the user; the sub-on-screen parameter is used to identify the likelihood that the target candidate item with the target type tag is selected on-screen by the user;
and the adjusting unit is used for adjusting the display position of the target candidate item according to the sub-screen parameters.
Optionally, the determining unit is further configured to adjust a display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
Optionally, the adjusting unit is further configured to:
determining a first probability distribution vector corresponding to the candidate on-screen parameter and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, adjusting the display position of the target candidate item forwards.
Optionally, the determining unit is further configured to obtain the candidate item on-screen parameter of the user in the following manner:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and type tags of candidate items recorded in the course of the user's historical on-screen entry of candidate items;
and determining the candidate item on-screen parameter of the user according to the candidate item data.
Optionally, the candidate item operation parameters include any one or more of the following:
the number of times the candidate item is presented;
the number of times the candidate item is clicked;
modification information after the candidate item is entered on screen;
the duration for which the candidate item is presented.
Optionally, the target candidate item includes at least one target type tag.
In a third aspect, an embodiment of the present application provides a candidate item display apparatus that includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
obtaining a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type label, and the target type label is used for identifying the type of content included in the target candidate item;
determining a sub-screen-on parameter of the target candidate item relative to the user according to the target type tag and the candidate item screen-on parameter of the user corresponding to the object to be associated; the candidate item on-screen parameter is used to identify the likelihood that a candidate item with a different type of tag is selected on-screen by the user; the sub-on-screen parameter is used to identify the likelihood that the target candidate item with the target type tag is selected on-screen by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameter.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
and adjusting the display position of the target candidate item according to the candidate item screen-up parameter and the sub screen-up parameter.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
determining a first probability distribution vector corresponding to the candidate on-screen parameter and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, adjusting the display position of the target candidate item forwards.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and type tags of candidate items recorded in the course of the user's historical on-screen entry of candidate items;
and determining the candidate item on-screen parameter of the user according to the candidate item data.
In a fourth aspect, embodiments of the present application provide a machine-readable medium having stored thereon instructions, which when executed by one or more processors, cause an apparatus to perform the candidate item presentation method as described in the first aspect.
According to the technical scheme, a plurality of corresponding candidate items can be determined according to the object to be associated, the target candidate item can be one of the candidate items, and the target candidate item is provided with a target type label and used for identifying the type of the content included in the target candidate item. The method comprises the steps of obtaining candidate item screen-up parameters of a user corresponding to an object to be associated in advance, identifying the possibility that candidate items with different types of labels are selected by the user to be screened, determining sub screen-up parameters according to the candidate item screen-up parameters and target type labels of target candidate items, and identifying the possibility that the target candidate items with the target type labels are selected by the user to be screened by the sub screen-up parameters. Because the candidate item on-screen parameter of a user can reflect the preference of the user for the type tag of the candidate item when the user is on-screen the candidate item, the sub-on-screen parameter determined based on the candidate item on-screen parameter can reflect the degree that the type tag of the corresponding candidate item conforms to the preference of the user. Therefore, according to the sub-screen-on parameters, the display position of the corresponding target candidate item can be adjusted in a targeted manner, for example, the display position of the target candidate item is adjusted forward when the screen-on possibility is high, so that the user can preferentially view the candidate item which is more in line with the input habit of the user, the probability that the target candidate item is screened is improved, and the input experience of the user is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a flowchart of a method for displaying candidate items according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a candidate display apparatus according to an embodiment of the present disclosure;
fig. 3 is a block diagram of a candidate display apparatus according to an embodiment of the present disclosure;
fig. 4 is a block diagram of a server provided in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
In a conventional manner, the display positions of candidate items are generally determined according to rules such as the object to be associated and the context, and the display positions of candidate items that better fit the rules are moved forward so that the user views them first. However, such presentation rules do not consider the personalized input preferences of different users, so the candidates actually presented can hardly meet the expectations of different users and may give some users a poor input experience.
Therefore, the embodiment of the application provides a candidate item display method in which a type tag is set for a candidate item according to the type of content the candidate item includes. For the candidate items obtained from the object to be associated, the sub on-screen parameters of the different candidate items can be determined from the candidate item on-screen parameter, which identifies the likelihood that candidate items with different type tags are selected for on-screen entry by the user corresponding to the object to be associated. The display position of the corresponding candidate item is then adjusted according to its sub on-screen parameter.
Because the candidate item on-screen parameter of a user reflects the user's preference for candidate type tags when entering candidate items on screen, the sub on-screen parameter determined from it reflects how well the type tag of the corresponding candidate item matches that preference. Therefore, according to the on-screen likelihood embodied by the sub on-screen parameter, the display position of the target candidate item can be adjusted in a targeted manner, for example, moved forward when the likelihood is high, so that the user preferentially views candidate items that better match the user's input habits, improving the probability that the target candidate item is entered on screen and improving the user's input experience.
The candidate item ranking method provided by the embodiment of the application can be applied to various processing devices which can deploy the input method, and the processing devices can be terminal devices or servers. When the processing device is a terminal device, the terminal device may be a smart phone, a computer, a Personal Digital Assistant (PDA), a tablet computer, or the like.
In some application scenarios, the processing device may include both a terminal device and a server; the server may obtain the candidate items from the terminal device, perform the candidate item display method provided in the embodiment of the present application, and return the adjusted candidate display positions to the terminal device. The server may be an independent server or a server cluster. For convenience of introduction, the method provided by the embodiment of the present application will be described below with the terminal device as the execution subject.
As shown in fig. 1, a candidate item ranking method provided in the embodiment of the present application includes the following steps:
s101: and obtaining a target candidate item corresponding to the object to be associated according to the object to be associated.
The object to be associated may include different types of content, and the object to be associated may be user-determined, for example, may be user-input, which may include character strings, text content corresponding to voice, and the like. For example, the user may select a character string, such as a last character string displayed on the screen, or a character string, such as a text string, located on the side of the input focus by adjusting the position of the input focus.
The input method can associate a corresponding candidate item according to an object to be associated, the content included in the candidate item can be the content corresponding to a character string input by a user, and the content included in the candidate item can also be the content obtained according to the character string association determined by the user.
In the embodiment of the present application, the content included in the candidate item may include text forms such as characters, pinyin, and strokes, and may also include forms such as characters, expressions, pictures, and dynamic images.
The input method can associate a plurality of candidates according to the object to be associated, and the target candidate is any one of the candidates. The target candidate item has a target type tag for identifying the type of content included by the target candidate item.
The target type tag of the target candidate item is one kind of type tag; it may be preset, or obtained by analyzing the content included in the target candidate item. The target type tag reflects the type embodied by the content of the target candidate item, and may include a semantic type, a part-of-speech type, an emotion type, a form type, and the like.
For example, the semantic type may be the subject matter semantically embodied by the content of the target candidate item; the part-of-speech type may be the part of speech of that content, such as noun or verb; the emotion type may be the emotion reflected by that content, such as vulgar, happy, or angry; the form type may be the literary form of that content, such as poetry or colloquial verse.
The target type tag may also represent types divided in other manners besides those described above, which is not limited in this embodiment of the application.
It should be noted that the target candidate item may have one or more target type tags. When there are a plurality of target type tags, the content included in the target candidate item can be characterized from different division angles.
For example, if the object to be associated is "bright moon before bed", and the target candidate item is "two pairs of shoes on the ground", the target candidate item may have one target type tag, "colloquial", or two target type tags, "colloquial" and "vulgar".
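As an illustrative sketch only (not part of the claimed method), a candidate item carrying one or more type tags could be represented as follows; the field names and tag strings are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Candidate:
    """A candidate item produced by the input method for an object to be associated."""
    content: str                                          # text, emoticon, picture reference, etc.
    type_tags: List[str] = field(default_factory=list)    # semantic, part-of-speech, emotion, or form tags

# Hypothetical instance mirroring the example above.
target_candidate = Candidate(
    content="two pairs of shoes on the ground",
    type_tags=["colloquial", "vulgar"],                   # one or more target type tags
)
```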
S102: determining the sub on-screen parameter of the target candidate item relative to the user according to the target type tag and the candidate item on-screen parameter of the user corresponding to the object to be associated.
The user corresponding to the object to be associated may be the user who determined the object to be associated, the user currently logged in to the input method, or the like.
After the target type tag is determined, the candidate item on-screen parameter of the user can also be determined. The candidate item on-screen parameter identifies the likelihood that candidate items with different type tags are selected by the user for on-screen entry. In other words, it represents the user's selection preference for candidate items with different type tags during on-screen entry, for example, which type tags the user tends to select and which the user tends to avoid.
The aforementioned different type tags may constitute a set of type tags that can be attached to candidate items; for example, if the set contains ten type tags, the target type tag of the target candidate item belongs to one or more of those ten.
Once the relationship between the different type tags and the target type tag is clear, the sub on-screen parameter corresponding to the target candidate item can be determined from the candidate item on-screen parameter; the sub on-screen parameter identifies the likelihood that the target candidate item with the target type tag is selected by the user for on-screen entry.
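A minimal sketch, under assumed data structures, of how the sub on-screen parameter might be read off the user's candidate item on-screen parameter; representing the parameter as a per-tag probability dictionary is an assumption, not something prescribed by the embodiment.

```python
from typing import Dict, List

def sub_on_screen_parameter(on_screen_parameter: Dict[str, float],
                            target_type_tags: List[str]) -> Dict[str, float]:
    """Restrict the user's per-tag on-screen likelihoods to the target candidate item's tags."""
    return {tag: on_screen_parameter.get(tag, 0.0) for tag in target_type_tags}

# Hypothetical candidate item on-screen parameter of one user (likelihood per type tag).
user_on_screen_parameter = {"colloquial": 0.5, "vulgar": 0.3, "poetry": 0.1, "noun": 0.1}
print(sub_on_screen_parameter(user_on_screen_parameter, ["colloquial", "vulgar"]))
# -> {'colloquial': 0.5, 'vulgar': 0.3}
```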
S103: adjusting the display position of the target candidate item according to the sub on-screen parameter.
Whether the target type tag of the target candidate item matches the user's on-screen preference is reflected by the sub on-screen parameter. Therefore, according to the sub on-screen parameter, the display position of the corresponding target candidate item can be adjusted in a targeted manner, for example, moved forward when the on-screen likelihood is high, so that the user preferentially views candidate items that better match the user's input habits, improving the probability that the target candidate item is entered on screen and improving the user's input experience.
For example, user a is accustomed to entering candidate items with the type tag "colloquial" on screen, while user b is accustomed to entering candidate items with the type tag "poetry" on screen and dislikes candidate items with the type tag "vulgar". In the embodiment of the present application, if both user a and user b determine the object to be associated to be "bright moon before bed", then for user a the candidate "two pairs of shoes on the ground", which has the type tags "colloquial" and "vulgar", will be displayed at a front position; for user b, the candidate "frost on the ground", which has the type tag "poetry", will be displayed at a front position, and the candidate "two pairs of shoes on the ground" will be displayed at a rear position.
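The reordering in this example could look like the following sketch, where each candidate item is scored by summing the per-tag likelihoods of its type tags; the scoring rule and the numerical preferences of user a and user b are assumptions made only for illustration.

```python
def rank_candidates(candidates, on_screen_parameter):
    """Sort candidate items so those matching the user's on-screen preference come first."""
    def score(candidate):
        # Higher score = better match with the user's preferred type tags (assumed rule).
        return sum(on_screen_parameter.get(tag, 0.0) for tag in candidate["type_tags"])
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"content": "frost on the ground", "type_tags": ["poetry"]},
    {"content": "two pairs of shoes on the ground", "type_tags": ["colloquial", "vulgar"]},
]
user_a = {"colloquial": 0.6, "vulgar": 0.3, "poetry": 0.1}    # prefers colloquial candidates
user_b = {"poetry": 0.8, "colloquial": 0.15, "vulgar": 0.05}  # prefers poetry, dislikes vulgar
print([c["content"] for c in rank_candidates(candidates, user_a)])  # shoes candidate first
print([c["content"] for c in rank_candidates(candidates, user_b)])  # frost candidate first
```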
In this way, according to the object to be associated determined by the user, a plurality of corresponding candidate items can be determined; the target candidate item is one of them and has a target type tag identifying the type of content it includes. The candidate item on-screen parameter of the user, obtained in advance, identifies the likelihood that candidate items with different type tags are selected by the user for on-screen entry; the sub on-screen parameter, determined from the candidate item on-screen parameter and the target type tag of the target candidate item, identifies the likelihood that the target candidate item with the target type tag is selected by the user for on-screen entry. Candidate items whose display positions have been adjusted according to the sub on-screen parameter better match the user's on-screen preference, improving the user's input experience.
It should be noted that, when adjusting the display position of the target candidate item, the candidate item on-screen parameter of the user may additionally be introduced as a basis for the adjustment. In this way, the display position can be adjusted not only according to the type of content included in the target candidate item, but also from an overall perspective (that is, the user's overall on-screen preference), further improving the accuracy of the adjustment.
Thus, in a possible implementation manner, step S103 may further adjust the display position of the target candidate item according to the candidate item on-screen parameter and the sub on-screen parameter.
By incorporating the candidate item on-screen parameter, the display position of the target candidate item can be determined more comprehensively, and a more accurate adjustment strategy can be provided when the target candidate item has multiple target type tags.
That is, when the target candidate item has a single target type tag, the degree to which it matches the user's on-screen preference can be reflected intuitively by the sub on-screen parameter alone. When the target candidate item has multiple target type tags, the candidate item on-screen parameter of the user can be combined to determine that degree.
When the display position of the target candidate item is adjusted by combining the candidate item on-screen parameter and the sub on-screen parameter, the embodiment of the application provides an optional implementation in which the degree of agreement between the sub on-screen parameter and the candidate item on-screen parameter is calculated to determine how to adjust the display position of the target candidate item.
In this implementation:
S201: determining a first probability distribution vector corresponding to the candidate item on-screen parameter and a second probability distribution vector corresponding to the sub on-screen parameter.
Since the candidate item on-screen parameter reflects the likelihood that candidate items with different type tags are selected by the user for on-screen entry, it can be converted into vector form for subsequent calculation. For example, the first probability distribution vector may be [a1, a2, a3, ... an], where ai reflects the likelihood of being selected for on-screen entry when a candidate item has the i-th of the n type tags.
Similarly, the second probability distribution vector corresponding to the sub on-screen parameter may take the same form, except that only the positions corresponding to the target type tags hold valid values; positions not corresponding to a target type tag may be set to 0 or another default value.
S202: calculating the vector distance between the first probability distribution vector and the second probability distribution vector. If the vector distance is smaller than a threshold, S203 is executed.
S203: adjusting the display position of the target candidate item forward.
The embodiment of the present application does not limit the method used to calculate the distance between the first probability distribution vector and the second probability distribution vector. The calculated vector distance reflects the degree of similarity between the sub on-screen parameter and the candidate item on-screen parameter: the smaller the distance, the higher the similarity.
Therefore, when the calculated vector distance is smaller than the threshold, the target candidate item is likely to match the user's on-screen preference, and moving its display position forward presents it to the user preferentially, improving the user's input efficiency.
Conversely, when the calculated vector distance is too large, the target candidate item is unlikely to match the user's on-screen preference; its display position can be moved backward to avoid showing the user candidate items that do not match that preference.
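A sketch of steps S201-S203 under stated assumptions: Euclidean distance is used as the vector distance and the threshold is fixed at 0.5, whereas the embodiment leaves both the distance measure and the threshold open; the tag set and probability values are also hypothetical.

```python
import math
from typing import Dict, List

def to_vector(per_tag_likelihood: Dict[str, float], all_tags: List[str]) -> List[float]:
    """Lay out per-tag likelihoods in a fixed tag order; tags without a value become 0."""
    return [per_tag_likelihood.get(tag, 0.0) for tag in all_tags]

def vector_distance(v1: List[float], v2: List[float]) -> float:
    # Euclidean distance is only one possible choice of vector distance.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

ALL_TAGS = ["colloquial", "vulgar", "poetry", "noun"]                        # hypothetical tag set
first_vector = to_vector({"colloquial": 0.5, "vulgar": 0.3, "poetry": 0.1, "noun": 0.1}, ALL_TAGS)
second_vector = to_vector({"colloquial": 0.5, "vulgar": 0.3}, ALL_TAGS)      # only target type tags are non-zero

THRESHOLD = 0.5                                                              # assumed value
if vector_distance(first_vector, second_vector) < THRESHOLD:
    print("move the target candidate item's display position forward")
else:
    print("move the target candidate item's display position backward")
```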
Next, the embodiment of the present application describes how the candidate item on-screen parameter of the user is obtained.
The candidate item on-screen parameter of a user identifies the likelihood that candidate items with different type tags are selected by that user for on-screen entry. In other words, it embodies the user's personalized behavior when selecting candidate items for on-screen entry, and the candidate item on-screen parameters of different users may differ.
The candidate item on-screen parameter can be obtained in the following manner:
s301: and acquiring candidate data of the user.
The candidate data may include a candidate operational parameter and a type tag of the candidate during the user's screen-up through the candidate. The candidate data belongs to the past history data of the user and can be continuously updated along with the use of the input method of the user.
The candidate item operation parameter is used for identifying data related to the candidate items aiming at different types of labels in the process that the user passes through the candidate item on the screen, and the data can reflect the on-screen preference of the user on the candidate items with different types of labels from different angles.
The candidate operating parameters may include, for example, any one or a combination of:
showing the candidate items;
the candidate item click frequency;
modifying information after the candidate item is displayed on a screen;
the candidate item shows the time.
The number of times candidate items with a given type tag are presented reflects how often candidate items with that type tag have been shown to the user; the greater the number, the higher the possibility that the user selects candidate items with that type tag for on-screen entry.
The number of times candidate items with a given type tag are clicked reflects how often candidate items with that type tag have been selected by the user for on-screen entry; the greater the number, the more likely that type tag matches the user's on-screen preference.
The modification information after a candidate item with a given type tag is entered on screen reflects whether, after the user selects that candidate item for on-screen entry, the entered content is subsequently modified, for example deleted.
The presentation duration of candidate items with a given type tag reflects how long such candidate items are displayed.
The above are just a few possible forms of candidate item operation parameters; other forms are also possible.
S302: determining the candidate item on-screen parameter of the user according to the candidate item data.
As can be seen from the possible candidate item operation parameters above, these parameters can reveal which type tags the user prefers and which type tags the user does not prefer when selecting candidate items for on-screen entry. On this basis, the candidate item on-screen parameters of different users can be determined.
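One plausible aggregation, shown here only as a sketch, is to turn presentation and click counts into a per-tag click-through rate and normalize it into a distribution; the embodiment itself does not prescribe this formula (it describes a trained network model instead), and the history data below is hypothetical.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def on_screen_parameter_from_data(candidate_data: Iterable[Tuple[str, int, int]]) -> Dict[str, float]:
    """candidate_data: (type_tag, times_presented, times_clicked) records from the user's history."""
    presented = defaultdict(int)
    clicked = defaultdict(int)
    for tag, shown, chosen in candidate_data:
        presented[tag] += shown
        clicked[tag] += chosen
    # Click-through rate per tag, normalized into a probability distribution over tags.
    ctr = {tag: clicked[tag] / presented[tag] for tag in presented if presented[tag] > 0}
    total = sum(ctr.values()) or 1.0
    return {tag: value / total for tag, value in ctr.items()}

history = [("colloquial", 120, 60), ("vulgar", 80, 20), ("poetry", 50, 5)]
print(on_screen_parameter_from_data(history))
```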
In a possible implementation manner, the above steps may be implemented by a network model, and the determination of the candidate item on-screen parameter is completed.
The embodiments of the present application will be further explained below by specific examples.
(1) Preparing data:
candidates including text content, and type tags for those candidates, may be extracted by machine or otherwise obtained. And may assign certain initial model scores to different types of labels before being trained by the model. For example, the candidate of "two pairs of shoes on the ground" has a type label of "low colloquial", and the initial model score of the type label is S.
(2) The data is deployed to a cloud or client model.
(3) A counter and a scorer are set for each type tag at the cloud or the client, and are used to record the candidate item operation parameters of candidate items carrying that type tag.
Taking the type tag "vulgar" as an example, the counter records the number of presentations, the number of clicks, and the number of backspace deletions of candidate items with the type tag "vulgar", incrementing the corresponding count by 1 for each operation; the scorer works similarly to the counter, but accumulates the model score of the type tag on each operation.
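A sketch of the per-tag counter and scorer described in step (3); the field names and the value of the initial model score S are assumptions based on the example above, not structures defined by the embodiment.

```python
class TagCounter:
    """Records candidate item operation parameters for one type tag, e.g. 'vulgar'."""
    def __init__(self):
        self.presented = 0    # number of presentations
        self.clicked = 0      # number of clicks (on-screen selections)
        self.backspaced = 0   # number of backspace deletions after on-screen entry

class TagScorer:
    """Accumulates the type tag's model score once per recorded operation."""
    def __init__(self, initial_model_score: float):
        self.initial_model_score = initial_model_score
        self.accumulated = 0.0

    def record(self):
        self.accumulated += self.initial_model_score

counter, scorer = TagCounter(), TagScorer(initial_model_score=0.7)  # S = 0.7 is hypothetical
counter.presented += 1   # the "vulgar" candidate was shown
counter.clicked += 1     # and was selected for on-screen entry
scorer.record()          # accumulate the tag's model score for this operation
```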
(4) The data collected by the counter and the scorer are used to train a personalized model for the user, which outputs the user's probability distribution over the type tags (that is, the candidate item on-screen parameter); the probability distribution characterizes, in numerical form, the user's preference when selecting candidate items.
(5) When the object to be associated determined by the user is obtained, the target type tag corresponding to any one of the candidate items associated with it (that is, a target candidate item) is determined, and the sub on-screen parameter corresponding to the target type tag is determined from the probability distribution obtained in the previous step.
How to adjust the display position of the target candidate item is then determined according to the sub on-screen parameter, or according to the vector distance between the sub on-screen parameter and the candidate item on-screen parameter.
Fig. 2 is a block diagram of an apparatus for displaying candidates according to an embodiment of the present application, where the apparatus includes an association unit 201, a determination unit 202, and an adjustment unit 203:
the association unit 201 is configured to obtain a target candidate item corresponding to an object to be associated according to the object to be associated, where the target candidate item has a target type tag, and the target type tag is used to identify a type of content included in the target candidate item;
the determining unit 202 is configured to determine a sub-screen parameter of the target candidate item relative to the user according to the target type tag and the candidate item screen parameter of the user corresponding to the object to be associated; the candidate item on-screen parameter is used to identify the likelihood that a candidate item with a different type of tag is selected on-screen by the user; the sub-on-screen parameter is used to identify the likelihood that the target candidate item with the target type tag is selected on-screen by the user;
the adjusting unit 203 is configured to adjust a display position of the target candidate item according to the sub-screen parameters.
Optionally, the determining unit is further configured to adjust a display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
Optionally, the adjusting unit is further configured to:
determining a first probability distribution vector corresponding to the candidate on-screen parameter and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, adjusting the display position of the target candidate item forwards.
Optionally, the determining unit is further configured to obtain the candidate item on-screen parameter of the user in the following manner:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and type tags of candidate items recorded in the course of the user's historical on-screen entry of candidate items;
and determining the candidate item on-screen parameter of the user according to the candidate item data.
Optionally, the candidate item operation parameters include any one or more of the following:
the number of times the candidate item is presented;
the number of times the candidate item is clicked;
modification information after the candidate item is entered on screen;
the duration for which the candidate item is presented.
Optionally, the target candidate item includes at least one target type tag.
The setting of each unit or module of the apparatus according to the embodiment of the present application can be implemented by referring to the method shown in fig. 1, which is not described herein again.
Therefore, according to the object to be associated, a plurality of corresponding candidate items can be determined, the target candidate item can be one of the candidate items, and the target candidate item has a target type tag for identifying the type of the content included in the target candidate item. The method comprises the steps of obtaining candidate item screen-up parameters of a user corresponding to an object to be associated in advance, identifying the possibility that candidate items with different types of labels are selected by the user to be screened, determining sub screen-up parameters according to the candidate item screen-up parameters and target type labels of target candidate items, and identifying the possibility that the target candidate items with the target type labels are selected by the user to be screened by the sub screen-up parameters. Because the candidate item on-screen parameter of a user can reflect the preference of the user for the type tag of the candidate item when the user is on-screen the candidate item, the sub-on-screen parameter determined based on the candidate item on-screen parameter can reflect the degree that the type tag of the corresponding candidate item conforms to the preference of the user. Therefore, according to the sub-screen-on parameters, the display position of the corresponding target candidate item can be adjusted in a targeted manner, for example, the display position of the target candidate item is adjusted forward when the screen-on possibility is high, so that the user can preferentially view the candidate item which is more in line with the input habit of the user, the probability that the target candidate item is screened is improved, and the input experience of the user is improved.
Referring to fig. 3, a block diagram of a candidate presentation apparatus is shown in accordance with an exemplary embodiment. For example, the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 3, the apparatus 300 may include one or more of the following components: processing component 302, memory 304, power component 306, multimedia component 308, audio component 310, input/output (I/O) interface 312, sensor component 314, and communication component 316.
The processing component 302 generally controls overall operation of the device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 302 may include one or more processors 320 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 302 can include one or more modules that facilitate interaction between the processing component 302 and other components. For example, the processing component 302 can include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operations at the device 300. Examples of such data include instructions for any application or method operating on device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 304 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 306 provides power to the various components of the device 300. The power components 306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 300.
The multimedia component 308 includes a screen that provides an output interface between the device 300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 300 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, audio component 310 includes a Microphone (MIC) configured to receive external audio signals when apparatus 300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 314 includes one or more sensors for providing various aspects of status assessment for the device 300. For example, sensor assembly 314 may detect an open/closed state of device 300, the relative positioning of components, such as a display and keypad of apparatus 300, the change in position of apparatus 300 or a component of apparatus 300, the presence or absence of user contact with apparatus 300, the orientation or acceleration/deceleration of apparatus 300, and the change in temperature of apparatus 300. Sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices. The apparatus 300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
Specifically, the embodiment of the present invention provides a candidate presenting apparatus 300, which comprises a memory 304, and one or more programs, wherein the one or more programs are stored in the memory 304, and are configured to be executed by one or more processors 320, and the one or more programs include instructions for:
obtaining a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type label, and the target type label is used for identifying the type of content included in the target candidate item;
determining a sub-screen-on parameter of the target candidate item relative to the user according to the target type tag and the candidate item screen-on parameter of the user corresponding to the object to be associated; the candidate item on-screen parameter is used to identify the likelihood that a candidate item with a different type of tag is selected on-screen by the user; the sub-on-screen parameter is used to identify the likelihood that the target candidate item with the target type tag is selected on-screen by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameter.
Further, the processor 320 is specifically configured to execute the one or more programs including instructions for:
and adjusting the display position of the target candidate item according to the candidate item screen-up parameter and the sub screen-up parameter.
Further, the processor 320 is specifically configured to execute the one or more programs including instructions for:
determining a first probability distribution vector corresponding to the candidate on-screen parameter and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance of the first probability distribution vector and the second probability distribution vector;
and if the vector distance is smaller than a threshold value, adjusting the display position of the target candidate item forwards.
Further, the processor 320 is specifically configured to execute the one or more programs including instructions for:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operation parameters and type tags of candidate items recorded in the course of the user's historical on-screen entry of candidate items;
and determining the candidate item on-screen parameter of the user according to the candidate item data.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 304 comprising instructions, is also provided; the instructions are executable by the processor 320 of the apparatus 300 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A machine-readable medium, which may be, for example, a non-transitory computer-readable storage medium, having instructions thereon which, when executed by a processor of an appliance (terminal or server), enable the appliance to perform a method of candidate presentation, the method comprising:
obtaining a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type label, and the target type label is used for identifying the type of content included in the target candidate item;
determining a sub-screen-on parameter of the target candidate item relative to the user according to the target type tag and the candidate item screen-on parameter of the user corresponding to the object to be associated; the candidate item on-screen parameter is used to identify the likelihood that a candidate item with a different type of tag is selected on-screen by the user; the sub-on-screen parameter is used to identify the likelihood that the target candidate item with the target type tag is selected on-screen by the user;
and adjusting the display position of the target candidate item according to the sub-screen-on parameter.
Fig. 4 is a schematic structural diagram of a server in an embodiment of the present invention. The server 400 may vary significantly due to configuration or performance, and may include one or more Central Processing Units (CPUs) 422 (e.g., one or more processors) and memory 432, one or more storage media 430 (e.g., one or more mass storage devices) storing applications 442 or data 444. Wherein the memory 432 and storage medium 430 may be transient or persistent storage. The program stored on the storage medium 430 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 422 may be arranged to communicate with the storage medium 430, and execute a series of instruction operations in the storage medium 430 on the server 400.
The server 400 may also include one or more power supplies 426, one or more wired or wireless network interfaces 440, one or more input-output interfaces 448, one or more keyboards 446, and/or one or more operating systems 441, such as Windows Server, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, and so forth.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is only limited by the appended claims.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus embodiment is described relatively briefly because it is substantially similar to the method embodiment, and reference may be made to the description of the method embodiment for the relevant points.

The above-described apparatus embodiments are merely illustrative. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.

The foregoing is directed to embodiments of the present invention, and it is understood that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the invention.

Claims (10)

1. A candidate item presentation method, the method comprising:
obtaining a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type tag, and the target type tag is used for identifying the type of content included in the target candidate item;
determining a sub-on-screen parameter of the target candidate item relative to a user according to the target type tag and a candidate item on-screen parameter of the user corresponding to the object to be associated, wherein the candidate item on-screen parameter is used for identifying the likelihood that candidate items with different type tags are selected on screen by the user, and the sub-on-screen parameter is used for identifying the likelihood that the target candidate item with the target type tag is selected on screen by the user; and
adjusting the display position of the target candidate item according to the sub-on-screen parameter.
2. The method of claim 1, wherein the adjusting the display position of the target candidate item according to the sub-on-screen parameter comprises:
adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
3. The method of claim 2, wherein the adjusting the display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter comprises:
determining a first probability distribution vector corresponding to the candidate item on-screen parameter and a second probability distribution vector corresponding to the sub-on-screen parameter;
calculating a vector distance between the first probability distribution vector and the second probability distribution vector; and
if the vector distance is smaller than a threshold, adjusting the display position of the target candidate item forward.
4. The method of claim 1, wherein the candidate item on-screen parameter of the user is obtained as follows:
acquiring candidate item data of the user, wherein the candidate item data comprises candidate item operating parameters and type tags of candidate items collected in the process of the user putting candidate items on screen; and
determining the candidate item on-screen parameter of the user according to the candidate item data.
5. The method of claim 4, wherein the candidate item operating parameters comprise any one or a combination of more of the following:
a display count of the candidate item;
a click frequency of the candidate item;
modification information after the candidate item is put on screen;
a display time of the candidate item.
6. The method of any of claims 1-5, wherein the target candidate item comprises at least one target type tag.
7. A candidate item presentation apparatus, characterized in that the apparatus comprises an association unit, a determining unit, and an adjusting unit, wherein:
the association unit is configured to obtain a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type tag, and the target type tag is used for identifying the type of content included in the target candidate item;
the determining unit is configured to determine a sub-on-screen parameter of the target candidate item relative to a user according to the target type tag and a candidate item on-screen parameter of the user corresponding to the object to be associated, wherein the candidate item on-screen parameter is used for identifying the likelihood that candidate items with different type tags are selected on screen by the user, and the sub-on-screen parameter is used for identifying the likelihood that the target candidate item with the target type tag is selected on screen by the user; and
the adjusting unit is configured to adjust the display position of the target candidate item according to the sub-on-screen parameter.
8. The apparatus of claim 7, wherein the determining unit is further configured to adjust a display position of the target candidate item according to the candidate item on-screen parameter and the sub-on-screen parameter.
9. An apparatus for candidate item presentation, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
obtaining a target candidate item corresponding to an object to be associated according to the object to be associated, wherein the target candidate item is provided with a target type tag, and the target type tag is used for identifying the type of content included in the target candidate item;
determining a sub-on-screen parameter of the target candidate item relative to a user according to the target type tag and a candidate item on-screen parameter of the user corresponding to the object to be associated, wherein the candidate item on-screen parameter is used for identifying the likelihood that candidate items with different type tags are selected on screen by the user, and the sub-on-screen parameter is used for identifying the likelihood that the target candidate item with the target type tag is selected on screen by the user; and
adjusting the display position of the target candidate item according to the sub-on-screen parameter.
10. A machine-readable medium having stored thereon instructions which, when executed by one or more processors, cause an apparatus to perform the candidate item presentation method of any of claims 1 to 6.
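To make the claimed steps easier to follow, the sketch below walks through one possible reading of claims 1-5 in Python. It is a minimal illustration only: the record format, the use of on-screen ratios as the candidate item on-screen parameter, the Euclidean vector distance, and the threshold value of 0.5 are assumptions introduced for the example and are not prescribed by the claims.

```python
from collections import Counter
import math


def candidate_on_screen_parameter(records):
    """Claims 4-5 (illustrative): estimate, per type tag, how likely this user
    is to put a candidate item carrying that tag on screen.

    `records` is an assumed format: (type_tag, was_put_on_screen) pairs logged
    while the user works with candidate items."""
    displayed = Counter(tag for tag, _ in records)
    put_on_screen = Counter(tag for tag, on_screen in records if on_screen)
    return {tag: put_on_screen[tag] / displayed[tag] for tag in displayed}


def sub_on_screen_parameter(on_screen_param, target_tags):
    """Claim 1 (illustrative): restrict the user's on-screen parameter to the
    type tag(s) carried by the target candidate item."""
    return {tag: on_screen_param.get(tag, 0.0) for tag in target_tags}


def probability_vector(param, tag_order):
    """Claim 3 (illustrative): normalise a tag->likelihood mapping into a
    probability distribution vector over a fixed tag order."""
    values = [param.get(tag, 0.0) for tag in tag_order]
    total = sum(values) or 1.0
    return [v / total for v in values]


def adjust_display_position(position, on_screen_param, target_tags, threshold=0.5):
    """Claim 3 (illustrative): move the target candidate item one slot forward
    when the distance between the two distribution vectors is below a threshold."""
    tag_order = sorted(set(on_screen_param) | set(target_tags))
    first = probability_vector(on_screen_param, tag_order)
    second = probability_vector(
        sub_on_screen_parameter(on_screen_param, target_tags), tag_order
    )
    # Euclidean distance stands in for the unspecified "vector distance".
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)))
    return max(position - 1, 0) if distance < threshold else position


# Example: a user who usually puts emoji-tagged candidates on screen.
records = [("emoji", True), ("emoji", True), ("emoji", False), ("plain", False)]
param = candidate_on_screen_parameter(records)
print(adjust_display_position(position=3, on_screen_param=param, target_tags=["emoji"]))
```

In this toy run the user's overall tag preferences and the emoji-restricted view coincide, so the vector distance is zero and the emoji-tagged target candidate is promoted from position 3 to position 2.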
CN201910516909.5A 2019-06-14 2019-06-14 Candidate item display method and device Active CN112083811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910516909.5A CN112083811B (en) 2019-06-14 2019-06-14 Candidate item display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910516909.5A CN112083811B (en) 2019-06-14 2019-06-14 Candidate item display method and device

Publications (2)

Publication Number Publication Date
CN112083811A true CN112083811A (en) 2020-12-15
CN112083811B CN112083811B (en) 2024-01-30

Family

ID=73734070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910516909.5A Active CN112083811B (en) 2019-06-14 2019-06-14 Candidate item display method and device

Country Status (1)

Country Link
CN (1) CN112083811B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016107344A1 (en) * 2014-12-30 2016-07-07 北京奇虎科技有限公司 Method and device for screening on-screen candidate items of input method
CN106774970A (en) * 2015-11-24 2017-05-31 北京搜狗科技发展有限公司 Method and apparatus for ranking candidate items of an input method
CN108304078A (en) * 2017-01-11 2018-07-20 北京搜狗科技发展有限公司 Input method, device and electronic equipment
CN109799916A (en) * 2017-11-16 2019-05-24 北京搜狗科技发展有限公司 Candidate item association method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王珉; 王永滨: "Research on user aggregation algorithm based on tag co-occurrence network" (基于标签共现网络的用户聚合算法研究), Computer Engineering and Applications (计算机工程与应用), no. 02 *

Also Published As

Publication number Publication date
CN112083811B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
US10296201B2 (en) Method and apparatus for text selection
CN107992604B (en) Task item distribution method and related device
CN112131410A (en) Multimedia resource display method, device, system and storage medium
CN107315487B (en) Input processing method and device and electronic equipment
CN107784045B (en) Quick reply method and device for quick reply
CN109918565B (en) Processing method and device for search data and electronic equipment
CN107291260B (en) Information input method and device for inputting information
CN111339744A (en) Ticket information display method, device and storage medium
CN111382339A (en) Search processing method and device and search processing device
CN111708943A (en) Search result display method and device and search result display device
CN112131466A (en) Group display method, device, system and storage medium
CN112148923A (en) Search result sorting method, sorting model generation method, device and equipment
CN112000266B (en) Page display method and device, electronic equipment and storage medium
CN111198620A (en) Method, device and equipment for presenting input candidate items
CN112784151B (en) Method and related device for determining recommended information
CN112115341A (en) Content display method, device, terminal, server, system and storage medium
CN109725736B (en) Candidate sorting method and device and electronic equipment
CN109144286B (en) Input method and device
CN112083811B (en) Candidate item display method and device
CN113965792A (en) Video display method and device, electronic equipment and readable storage medium
CN108983992B (en) Candidate item display method and device with punctuation marks
CN112363631A (en) Input method, input device and input device
CN111368161A (en) Search intention recognition method and intention recognition model training method and device
CN110929122A (en) Data processing method and device and data processing device
CN112446720B (en) Advertisement display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant