CN112256957A - Information sorting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112256957A
Authority
CN
China
Prior art keywords
scene
information
predicted value
sub
sorted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010997674.9A
Other languages
Chinese (zh)
Inventor
韩力超
黄培浩
肖可依
周翔
曹雪智
陈�胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010997674.9A
Publication of CN112256957A


Classifications

    • G06F 16/9535 — Information retrieval; search customisation based on user profiles and personalisation
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/2415 — Pattern recognition; classification based on parametric or probabilistic models
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/08 — Neural networks; learning methods

Abstract

An embodiment of the present application provides an information sorting method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a target search term input by a user; recalling a plurality of pieces of information to be sorted that match the target search term; acquiring a feature set of each piece of information to be sorted; and inputting the feature set of each piece of information to be sorted into a pre-trained multi-scene segmentation model to obtain a predicted value for each piece, wherein the multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes; the pieces of information to be sorted are then sorted by their predicted values. In this way, the multi-scene segmentation model lets the common features of all scenes take effect in every scene while the scene features of each scene take effect in that scene, thereby overcoming problems such as the limited data volume and limited feature coverage of small scenes.

Description

Information sorting method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to an information sorting method and apparatus, an electronic device, and a storage medium.
Background
Deep neural networks have been applied in various internet applications for a considerable time. By adjusting the model structure and by mining targeted features that are added to the network for training, the trained models also perform well online. In internet products, a deep learning model needs to handle multiple coexisting scenes; in search ranking, for example, a search term input by a user usually covers many scenes, such as a food scene or a travel scene.
The features of each scene differ to some degree, and each scene has its own unique characteristics, so during model training the data volume or feature coverage of different scenes can differ by orders of magnitude. This poses a significant challenge when applying deep neural networks to meet user requirements.
In the related art, a commonly adopted approach is the multi-scene unified model, which trains a single model on features mined from every scene dimension. However, because small scenes have limited data and limited feature coverage, features added for small scenes hardly take effect in the unified model; moreover, parameter learning for a small scene is easily influenced by other, larger scenes and cannot reach an optimal state.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application show an information sorting method, an information sorting apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides an information ranking method, where the method includes:
acquiring a target search word input by a user;
recalling a plurality of information to be sorted matched with the target search terms;
acquiring a feature set of each piece of information to be sorted, wherein the feature set of one piece of information to be sorted comprises common features of the information to be sorted in various scenes and scene features of the information to be sorted in corresponding target scenes;
inputting the feature set of each piece of information to be sorted into a pre-trained multi-scene segmentation model to obtain a predicted value of each piece of information to be sorted, wherein the multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes;
and sorting the plurality of pieces of information to be sorted through the obtained predicted values of the plurality of pieces of information to be sorted.
Optionally, the pre-trained multi-scenario segmentation model is configured to:
inputting the common features included in the feature set of each piece of information to be sorted into the sharing sub-network to obtain a first predicted value;
inputting scene features included in the feature set of each piece of information to be sorted into a corresponding scene sub-network to obtain a second predicted value;
and determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value.
Optionally, the determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value includes:
forming a scene predicted value vector by the first predicted value and the second predicted value;
obtaining a scene weight vector, wherein the scene weight vector is composed of a first weight corresponding to a sharing sub-network and a second weight corresponding to a scene sub-network;
and determining the predicted value of each piece of information to be sorted based on the scene predicted value vector and the scene weight vector.
Optionally, the determining manner of the first weight corresponding to the sharing subnetwork and the second weight corresponding to the scene subnetwork includes:
acquiring historical data of a user;
determining the demand distribution probability of the user to each scene according to the user historical data;
and determining a first weight corresponding to the sharing sub-network and a second weight corresponding to the scene sub-network based on the demand distribution probability of the user for each scene.
Optionally, the inputting the scene features included in the feature set of each piece of information to be ranked into the corresponding scene sub-network to obtain a second predicted value includes:
and inputting the common features and the scene features included in the feature set of each piece of information to be ranked into a corresponding scene sub-network to obtain a second predicted value.
In a second aspect, an embodiment of the present application provides an information sorting apparatus, where the apparatus includes:
the search word acquisition module is used for acquiring a target search word input by a user;
the information to be sorted recalling module is used for recalling a plurality of information to be sorted matched with the target search terms;
the feature set acquisition module is used for acquiring a feature set of each piece of information to be sorted, wherein the feature set of one piece of information to be sorted comprises common features of the information in various scenes and scene features of the information in the corresponding target scene;
the predicted value determining module is used for inputting the feature set of each piece of information to be sorted into a pre-trained multi-scene segmentation model to obtain the predicted value of each piece of information to be sorted, wherein the multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes;
and the information sorting module is used for sorting the plurality of pieces of information to be sorted according to the obtained predicted values of the plurality of pieces of information to be sorted.
Optionally, the predicted value determining module includes:
the first predicted value obtaining unit is used for inputting the common features included in the feature set of each piece of information to be sorted into the sharing sub-network to obtain a first predicted value;
the second predicted value obtaining unit is used for inputting the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain a second predicted value;
and the predicted value determining unit is used for determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value.
Optionally, the predicted value determining unit is specifically configured to:
forming a scene predicted value vector by the first predicted value and the second predicted value;
obtaining a scene weight vector, wherein the scene weight vector is composed of a first weight corresponding to a sharing sub-network and a second weight corresponding to a scene sub-network;
and determining the predicted value of each piece of information to be sorted based on the scene predicted value vector and the scene weight vector.
Optionally, the predicted value determining unit is specifically configured to:
acquiring historical data of a user;
determining the demand distribution probability of the user to each scene according to the user historical data;
and determining a first weight corresponding to the sharing sub-network and a second weight corresponding to the scene sub-network based on the demand distribution probability of the user for each scene.
Optionally, the second predicted value obtaining unit is specifically configured to:
and inputting the common features and the scene features included in the feature set of each piece of information to be ranked into a corresponding scene sub-network to obtain a second predicted value.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the information sorting method according to the first aspect when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the information sorting method according to the first aspect.
According to the technical scheme provided by the embodiment of the application, a target search term input by a user is acquired; a plurality of pieces of information to be sorted that match the target search term are recalled; a feature set of each piece of information to be sorted is acquired, wherein the feature set of one piece of information to be sorted comprises common features of that information in various scenes and scene features of that information in the corresponding target scene; the feature set of each piece of information to be sorted is input into a pre-trained multi-scene segmentation model to obtain a predicted value for each piece, wherein the multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes; finally, the pieces of information to be sorted are sorted by their predicted values.
Therefore, in the technical scheme provided by the embodiment of the application, by including one sharing sub-network the multi-scene segmentation model lets the common features of all scenes take effect in every scene, and by including a plurality of scene sub-networks it lets the scene features of each scene take effect in that scene, thereby overcoming problems such as the limited data volume and limited feature coverage of small scenes.
Drawings
Fig. 1 is a flowchart illustrating steps of an information sorting method according to an embodiment of the present application;
fig. 2 is a flowchart illustrating steps of an implementation manner in which a multi-scenario segmentation model obtains predicted values of a plurality of pieces of information to be ranked according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a multi-scene segmentation model provided in an embodiment of the present application;
fig. 4 is a block diagram illustrating a structure of an information sorting apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
At this stage, deep neural networks have been deployed in various internet application fields for a considerable time. By adjusting the model structure and by mining targeted features that are added to the network for training, the trained models also perform well online.
In internet products, a deep learning model needs to handle multiple coexisting scenes. For example, in search ranking, a search term input by a user usually covers many search scenes; in personalized recommendation, the recommendation model must handle various recommendation scenes such as food and travel.
Because the business development of each scene differs across companies, the features of each scene differ to some degree, and each scene also has its own uniqueness, the data volume or feature coverage of different scenes can differ by orders of magnitude during model training. This poses a significant challenge when applying deep neural networks to meet user requirements.
In the related art, one commonly adopted scheme is the multi-scene unified model: a single model is trained on features mined from every scene dimension.
Specifically, in this scheme a deep neural network is trained on the common features of all scenes to obtain the multi-scene unified model. When a search term input by a user is acquired, a plurality of pieces of information to be sorted that match the search term are recalled; the information to be sorted may be a merchant POI, a commodity SPU, or a brand. To sort the information, features of each piece are extracted and input into the multi-scene unified model to obtain a predicted value for each piece, and the pieces are sorted by the size of their predicted values.
However, the multi-scenario unified model has at least two disadvantages:
First, because small-scene data samples are limited and scene feature coverage is limited, features added for a small scene hardly take effect in the unified model.
Second, parameter learning for a small scene is easily influenced by other, larger scenes and does not reach an optimal state.
As a result, the predicted values produced by the multi-scene unified model have low accuracy, and sorting the information to be sorted by the size of these predicted values yields an ordering whose accuracy is correspondingly low.
Therefore, the embodiment of the application provides an information sorting method, an information sorting device, electronic equipment and a storage medium.
In a first aspect, an information ranking method provided in an embodiment of the present application is first explained in detail.
It should be noted that the execution subject of the information sorting method provided in the embodiment of the present application may be an information sorting apparatus, which may run in an electronic device, and the electronic device may be a server.
As shown in fig. 1, an information sorting method provided in an embodiment of the present application may include the following steps:
s110, acquiring the target search term input by the user.
The target search term may be any search term input by the user, and the target search term is not specifically limited in the embodiments of the present application.
And S120, recalling a plurality of pieces of information to be ranked matched with the target search terms.
When the electronic device obtains the target search term input by the user, it can recall a plurality of pieces of information to be sorted that match the target search term. In practical application, the information to be sorted may be a merchant POI, a commodity SPU, or a brand; it can be determined according to the actual situation, and the embodiment of the present application does not specifically limit it.
S130, acquiring a feature set of each piece of information to be sorted.
The feature set of the information to be ranked comprises common features of the information to be ranked under various scenes and scene features of the information to be ranked under corresponding target scenes.
Specifically, after the plurality of pieces of information to be sorted that match the target search term are obtained, the common features of each piece under various scenes can be acquired, such as distance, sales volume, or click rate; the scene features of each piece under its corresponding target scene can also be acquired. For example, if the target scene corresponding to a piece of information is a travel scene, the scene features may be the click rate of each city under different search intents. The embodiment of the present application does not specifically limit the common features and the scene features.
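The feature-set structure described above can be sketched as a simple container. This is an illustrative sketch only: the function name, field names, and example feature values (distance, sales, click rate, per-city click rate) are assumptions drawn from the examples in the text, not the patent's actual data format.

```python
def build_feature_set(common, scene_name, scene_features):
    """Bundle the common features (shared across all scenes) and the
    scene features (specific to the item's target scene) for one item."""
    return {
        "common": common,                 # e.g. distance, sales volume, click rate
        "scene": scene_name,              # e.g. "food" or "travel"
        "scene_features": scene_features  # e.g. per-city click rate by search intent
    }

# Hypothetical item in a travel scene
fs = build_feature_set(
    common={"distance_km": 1.2, "sales": 340, "ctr": 0.05},
    scene_name="travel",
    scene_features={"city_ctr": 0.08},
)
```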
And S140, inputting the feature set of each piece of information to be sorted into a pre-trained multi-scene segmentation model to obtain a predicted value of each piece of information to be sorted.
The multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, wherein each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes.
Specifically, after the feature set of each piece of information to be sorted is obtained, the feature set of each piece of information to be sorted can be input into a pre-trained multi-scene segmentation model; after receiving the feature set of each piece of information to be sorted, the multi-scene segmentation model can predict each piece of information to be sorted to obtain a predicted value of each piece of information to be sorted, so that in the subsequent step, the plurality of pieces of information to be sorted are sorted through the predicted values.
In addition, in order to obtain the predicted value of each piece of information to be ranked accurately, the multi-scene segmentation model provided in the embodiment of the present application may include a sharing sub-network and a plurality of scene sub-networks, where each scene sub-network corresponds to one scene and different scene sub-networks correspond to different scenes. Because it comprises a plurality of scene sub-networks, the model can distinguish multiple scenes and appropriately isolate scene parameters; through the sharing sub-network it lets the common features of all scenes take effect in every scene, and through the scene sub-networks it lets the scene features of each scene take effect in that scene, thereby overcoming problems such as the limited data volume and limited feature coverage of small scenes.
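The structure just described — one sharing sub-network plus isolated per-scene sub-networks — can be sketched minimally in NumPy. The layer sizes, parameter names, and the tiny one-hidden-layer scorer below are assumptions for illustration; the patent does not specify the sub-network architecture.

```python
import numpy as np

def mlp(x, params):
    """Tiny feed-forward scorer: one ReLU hidden layer, scalar output."""
    h = np.maximum(0.0, x @ params["W1"])
    return float(h @ params["W2"])

class MultiSceneModel:
    """Sketch of the described structure: the sharing sub-network scores the
    common features, and a per-scene sub-network (parameters isolated per
    scene) scores the scene features."""
    def __init__(self, shared_params, scene_params):
        self.shared = shared_params  # parameters of the sharing sub-network
        self.scenes = scene_params   # {scene_name: parameters}, one sub-network per scene

    def predict(self, common_x, scene_name, scene_x):
        first = mlp(common_x, self.shared)              # first predicted value
        second = mlp(scene_x, self.scenes[scene_name])  # second predicted value
        return first, second
```

Because each scene sub-network holds its own parameters, learning for a small scene is not overwritten by gradients from larger scenes, which is the isolation the text describes.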
In addition, the plurality of scene subnetworks can realize soft division of scenes, so that the problem that search requirements of users overlap among scenes can be solved, that is, when a target search word corresponds to a plurality of target scenes, the electronic equipment can accurately predict the predicted values of a plurality of pieces of information to be sorted, which are matched with the target search word.
For completeness and clarity of the scheme, how the multi-scene segmentation model obtains the predicted values of the plurality of pieces of information to be ranked will be explained in detail in the following embodiments.
S150, sorting the information to be sorted according to the obtained prediction values of the information to be sorted.
After the predicted values of the pieces of information to be sorted are obtained, the pieces can be sorted by these predicted values, for example in descending order of predicted value. This also helps the electronic device recommend the top-ranked information to the user, thereby meeting the user's search requirement.
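The sorting in S150 can be sketched in a few lines; the function name and the placeholder item identifiers are illustrative.

```python
def rank_items(items, predicted_values):
    """S150: sort candidate items by predicted value, highest first."""
    paired = sorted(zip(items, predicted_values), key=lambda p: p[1], reverse=True)
    return [item for item, _ in paired]

rank_items(["poi_a", "poi_b", "poi_c"], [0.2, 0.9, 0.5])
# → ["poi_b", "poi_c", "poi_a"]
```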
According to the technical scheme provided by the embodiment of the application, the target search word input by the user is obtained; recalling a plurality of information to be sorted matched with the target search terms; acquiring a feature set of each piece of information to be sorted, wherein the feature set of one piece of information to be sorted comprises common features of the information to be sorted in various scenes and scene features of the information to be sorted in corresponding target scenes; inputting the feature set of each piece of information to be sequenced into a pre-trained multi-scene segmentation model to obtain a predicted value of each piece of information to be sequenced, wherein the multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes; and finally, sequencing the information to be sequenced through the obtained predicted values of the information to be sequenced.
Therefore, in the technical scheme provided by the embodiment of the application, the multi-scene segmentation model can enable the common features, namely the common features, of each scene to take effect on each scene by including one sharing sub-network, and the multi-scene segmentation model can enable the scene features of each scene to take effect on the scene by including a plurality of scene sub-networks, so that the influence caused by the problems of limited small scene data volume, limited feature coverage rate and the like can be overcome.
For completeness and clarity of the scheme, how the multi-scene segmentation model obtains the predicted values of the information to be ranked will be explained in detail in the following embodiments.
As shown in fig. 2, in one embodiment, the pre-trained multi-scene segmentation model is configured to perform the following steps:
s210, inputting the common features included in the feature set of each piece of information to be sorted into a sharing sub-network to obtain a first predicted value.
Specifically, after obtaining the feature set of each piece of information to be sorted, the common features, that is, the common features, included in the feature set of the information to be sorted may be input into the sharing sub-network, and the first predicted value may be output from the sharing sub-network.
S220, inputting the scene features included in the feature set of each piece of information to be sorted into a corresponding scene sub-network to obtain a second predicted value.
Specifically, after the feature set of each piece of information to be sorted is obtained, the scene features included in the feature set of the information to be sorted may be input into the corresponding scene sub-network, for example, the food scene features may be input into the corresponding food scene sub-network, and the travel scene features may be input into the corresponding travel scene sub-network. And after the scene features included in the feature set of the information to be sorted are input into the corresponding scene sub-network, outputting a second predicted value from the scene sub-network.
In order to make the second predicted value output from the scene sub-network more accurate, as an implementation manner of the embodiment of the present application, inputting the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain the second predicted value includes:
and inputting the common features and the scene features included in the feature set of each piece of information to be ranked into a corresponding scene sub-network to obtain a second predicted value.
In this implementation, the union of the common features and the scene features included in the feature set of the information to be sorted is input into the corresponding scene sub-network. Because more kinds of features are input into the scene sub-network, it can derive the second predicted value from more comprehensive features, making the obtained second predicted value more accurate.
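Forming the union of the two feature groups amounts to concatenating the feature vectors before they enter the scene sub-network; a minimal sketch (function name is illustrative):

```python
import numpy as np

def scene_subnetwork_input(common_x, scene_x):
    """Concatenate common and scene features so the scene sub-network
    sees the union of both feature groups, as in the variant above."""
    return np.concatenate([common_x, scene_x])
```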
And S230, determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value.
Specifically, for each piece of information to be sorted, after obtaining a first predicted value and a second predicted value corresponding to the information to be sorted, a final predicted value of the information to be sorted can be accurately obtained through the first predicted value and the second predicted value. In the subsequent step, the information to be sorted can be sorted through the final predicted value of the information to be sorted.
As an implementation manner of the embodiment of the present application, determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value may include the following steps, step a1 to step a3:
Step a1: form the first predicted value and the second predicted value into a scene predicted value vector.
Specifically, the first predicted value and the second predicted value may be concatenated to obtain the scene predicted value vector.
Step a2: obtain a scene weight vector.
The scene weight vector is composed of a first weight corresponding to the sharing sub-network and a second weight corresponding to each scene sub-network.
Specifically, the sharing sub-network corresponds to a weight, and each scene sub-network corresponds to a weight, for clarity of description of the scheme, the weight corresponding to the sharing sub-network may be referred to as a first weight, and the weight corresponding to each scene sub-network may be referred to as a second weight. And the first weight corresponding to the sharing sub-network and the second weight corresponding to the scene sub-network can be spliced to obtain a scene weight vector.
There are many ways to determine the first weight corresponding to the sharing sub-network and the second weight corresponding to each scene sub-network.
In one embodiment, the manner of determining the first weight corresponding to the sharing sub-network and the second weight corresponding to the scene sub-network may include the following steps, step b1 to step b3:
step b1, obtaining user history data.
And b2, determining the demand distribution probability of the user to each scene according to the historical data of the user.
And b3, determining a first weight corresponding to the sharing sub-network and a second weight corresponding to the scene sub-network based on the demand distribution probability of the user for each scene.
Specifically, because the application scenarios of Internet products are various, it is sometimes difficult to cleanly separate a user's demand scenarios, and scenarios may cross and overlap. In this embodiment, the user's demand distribution probability for a scene may therefore be used as the second weight of the scene sub-network corresponding to that scene. The demand distribution probability can be computed statistically from the user history data, and the first weight corresponding to the sharing sub-network can then be calculated from the demand distribution probabilities over the scenes.
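As a concrete illustration, the weight derivation above can be sketched as follows. The statistic behind the demand distribution probability and the rule for deriving the fallback weight W0 are not specified in this embodiment, so relative interaction frequency and a fixed fallback weight are assumed; the function name and parameters are hypothetical.

```python
from collections import Counter

def scene_weights(history, scenes, fallback_weight=0.2):
    # Demand distribution probability of each scene: its relative frequency
    # in the user's interaction history (an assumed statistic).
    counts = Counter(history)
    total = sum(counts.values()) or 1
    second_weights = [counts.get(s, 0) / total for s in scenes]  # W1..Wn
    # How W0 is computed from the probabilities is unspecified in the text;
    # a fixed fallback weight is assumed for this sketch.
    return [fallback_weight] + second_weights  # scene weight vector [W0..Wn]

weights = scene_weights(["food", "food", "hotel"], ["food", "hotel"])
```

The second weights sum to one by construction, so they behave as a probability distribution over scenes, matching the "distribution rather than hard division" framing of this embodiment.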
Of course, the above only enumerates one specific implementation of determining the first weight corresponding to the sharing sub-network and the second weights corresponding to the scene sub-networks. In practical applications, these weights may instead be specified by a predefined rule, or learned by another neural network model; the embodiments of the present application do not specifically limit this.
Step a3: determine the predicted value of each piece of information to be sorted based on the scene predicted value vector and the scene weight vector.
Specifically, after the scene predicted value vector and the scene weight vector are obtained, the scene predicted value vector may be multiplied by the transpose of the scene weight vector to obtain the final model predicted value, that is, the predicted value of each piece of information to be sorted.
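The multiplication in step a3 is an inner product. A minimal sketch, assuming plain Python lists for the two vectors:

```python
def final_predicted_value(pred_vector, weight_vector):
    # Final predicted value = scene predicted value vector multiplied by the
    # transpose of the scene weight vector, i.e. their inner product.
    assert len(pred_vector) == len(weight_vector)
    return sum(p * w for p, w in zip(pred_vector, weight_vector))

# [P0, P1, P2] combined with [W0, W1, W2]
score = final_predicted_value([0.5, 0.8, 0.1], [0.2, 0.7, 0.1])
```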
Therefore, through the technical solution of this embodiment, a user's demand across scenes is modeled as a distribution rather than a hard partition; the relative importance of the scenes is described by the scene weight vector, which addresses the problem of user demands overlapping across scenes.
For clarity of description, a specific example is given below to describe in detail how the predicted value is obtained by the multi-scene segmentation model.
As shown in fig. 3, the multi-scene segmentation model is composed of two parts. One part is a fallback shared network, i.e., the sharing sub-network described in the above embodiments, which provides a fallback prediction for every scene. The other part is a set of scene sub-networks corresponding to different scenes, i.e., the scene sub-networks described in the above embodiments; each scene sub-network corresponds to one scene and predicts, for each piece of information to be sorted, a predicted value under that scene.
The predicted value P0 of the fallback shared network and the score values P1, …, Pn predicted by the scene sub-networks are combined into a scene predicted value vector, which is multiplied by the scene weight vector to obtain the final predicted value.
Specifically, the multi-scene segmentation model obtains the predicted value as follows:
1. In the fallback shared network, the input layer takes the features common to all scenes, and the deep neural network outputs the fallback predicted value P0.
2. In the scene sub-networks, the input layer of each scene sub-network may be the union of the features common to all scenes and the scene features specific to that scene. Each scene sub-network is a deep network, and its output layer is the predicted value corresponding to its scene: P1, P2, …, Pn respectively.
3. P0 and P1, P2, …, Pn are combined into a scene predicted value vector [P0, P1, P2, …, Pn].
4. A scene weight vector [W0, W1, …, Wn] is given.
5. The scene predicted value vector [P0, P1, P2, …, Pn] is multiplied by the transpose of the scene weight vector [W0, W1, …, Wn] to obtain the final predicted value.
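The five steps above can be sketched end to end as follows. The actual network depths, dimensions and learned weights are not given in the text, so single random linear layers stand in for the deep networks; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_net(in_dim):
    # Stand-in for a deep network: one random linear layer with tanh output.
    W = rng.normal(size=in_dim)
    return lambda x: float(np.tanh(x @ W))

n_scenes, common_dim, scene_dim = 3, 4, 2
shared_net = toy_net(common_dim)  # fallback shared network -> P0
scene_nets = [toy_net(common_dim + scene_dim) for _ in range(n_scenes)]  # -> P1..Pn

def predict(common_feats, scene_feats, weights):
    p0 = shared_net(common_feats)                    # step 1: fallback value P0
    ps = [net(np.concatenate([common_feats, sf]))    # step 2: per-scene values
          for net, sf in zip(scene_nets, scene_feats)]
    pred_vec = np.array([p0] + ps)                   # step 3: [P0, P1, ..., Pn]
    w_vec = np.array(weights)                        # step 4: [W0, W1, ..., Wn]
    return float(pred_vec @ w_vec)                   # step 5: inner product

common = rng.normal(size=common_dim)
per_scene = [rng.normal(size=scene_dim) for _ in range(n_scenes)]
score = predict(common, per_scene, [0.25, 0.25, 0.25, 0.25])
```

Because each stand-in network outputs a tanh value in (-1, 1) and the assumed weights sum to one, the final score also lies in (-1, 1); a trained model would of course use learned weights and deeper networks.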
Thus, the multi-scene segmentation model contains the per-scene sub-networks, and the segmentation of the different scenes is embedded in the model structure rather than being split across different models. This yields at least the following advantageous effects:
1. Low iteration cost: only one model needs to be maintained instead of several, and the scene sub-networks properly isolate the per-scene parameters, so iterating on a scene only requires iterating the scene sub-network corresponding to that scene.
2. The common features are added at the input layer of the fallback shared network, so they take effect on every scene, while the scene features are added only in the corresponding scene sub-networks and take effect only on that scene's data. This mitigates problems such as the limited data volume and limited feature coverage of small scenes.
3. A user's demand across scenes is modeled as a distribution rather than a hard partition; the relative importance of the scenes is described by the scene weight vector, which addresses the problem of user demands overlapping across scenes.
It is noted that, for simplicity of explanation, the method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will appreciate that the present application is not limited by the order of acts described, as some steps may, in accordance with the present application, be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary, and the acts involved are not necessarily all required by the present application.
In a second aspect, an embodiment of the present application provides an information sorting apparatus, as shown in fig. 4, the apparatus includes:
a search word obtaining module 410, configured to obtain a target search word input by a user;
a to-be-ranked information recalling module 420, configured to recall a plurality of pieces of to-be-ranked information that are matched with the target search term;
a feature set obtaining module 430, configured to obtain a feature set of each piece of information to be sorted, where a feature set of one piece of information to be sorted includes common features of the information to be sorted in various scenes and scene features of the information to be sorted in a corresponding target scene;
the predicted value determining module 440 is configured to input the feature set of each piece of information to be ranked into a pre-trained multi-scene segmentation model to obtain a predicted value of each piece of information to be ranked, where the multi-scene segmentation model includes a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes;
the information sorting module 450 is configured to sort the plurality of information to be sorted according to the obtained predicted values of the plurality of information to be sorted.
According to the technical solution provided by this embodiment of the application, a target search word input by a user is obtained; a plurality of pieces of information to be sorted that match the target search word are recalled; a feature set of each piece of information to be sorted is acquired, where the feature set of a piece of information to be sorted includes its common features across the various scenes and its scene features in the corresponding target scene; the feature set of each piece of information to be sorted is input into a pre-trained multi-scene segmentation model to obtain its predicted value, where the multi-scene segmentation model includes a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponding to one scene and different scene sub-networks corresponding to different scenes; and finally the plurality of pieces of information to be sorted are ranked by their obtained predicted values.
Therefore, in the technical solution provided by this embodiment of the application, because the multi-scene segmentation model includes a sharing sub-network, the features common to all scenes take effect on every scene; and because it includes a plurality of scene sub-networks, the scene features of each scene take effect on that scene. This mitigates the impact of problems such as the limited data volume and limited feature coverage of small scenes.
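Putting the modules together, the end-to-end flow of the apparatus can be sketched as below. The recall and scoring functions are hypothetical stand-ins, since the embodiment does not fix their implementations: a naive substring match plays the role of the recall module, and `score_fn` plays the role of the multi-scene segmentation model.

```python
def rank_information(query, index, score_fn):
    # Recall module: keep candidates whose text matches the target search word
    # (a naive substring match stands in for the real recall logic).
    candidates = [item for item in index if query in item["text"]]
    # Predicted value determining module: score_fn stands in for the
    # multi-scene segmentation model.
    scored = [(score_fn(item), item) for item in candidates]
    # Information sorting module: sort by descending predicted value.
    scored.sort(key=lambda t: t[0], reverse=True)
    return [item for _, item in scored]

index = [
    {"text": "noodle shop", "ctr": 0.12},
    {"text": "noodle bar", "ctr": 0.30},
    {"text": "pizza place", "ctr": 0.50},
]
ranked = rank_information("noodle", index, score_fn=lambda it: it["ctr"])
```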
Optionally, the predicted value determining module includes:
the first predicted value obtaining unit is used for inputting the common features included in the feature set of each piece of information to be sorted into the sharing sub-network to obtain a first predicted value;
the second predicted value obtaining unit is used for inputting the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain a second predicted value;
and the predicted value determining unit is used for determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value.
Optionally, the predicted value determining unit is specifically configured to:
forming a scene predicted value vector by the first predicted value and the second predicted value;
obtaining a scene weight vector, wherein the scene weight vector is composed of a first weight corresponding to a sharing sub-network and a second weight corresponding to a scene sub-network;
and determining the predicted value of each piece of information to be sorted based on the scene predicted value vector and the scene weight vector.
Optionally, the predicted value determining unit is specifically configured to:
acquiring historical data of a user;
determining the demand distribution probability of the user to each scene according to the user historical data;
and determining a first weight corresponding to the sharing sub-network and a second weight corresponding to the scene sub-network based on the demand distribution probability of the user for each scene.
Optionally, the second predicted value obtaining unit is specifically configured to:
and inputting the common features and the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain a second predicted value.
For the device embodiment, since it is basically similar to the method embodiment, the description is relatively brief; for relevant details, refer to the partial description of the method embodiment.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the information sorting method according to the first aspect when executing the program.
According to the technical solution provided by this embodiment of the application, a target search word input by a user is obtained; a plurality of pieces of information to be sorted that match the target search word are recalled; a feature set of each piece of information to be sorted is acquired, where the feature set of a piece of information to be sorted includes its common features across the various scenes and its scene features in the corresponding target scene; the feature set of each piece of information to be sorted is input into a pre-trained multi-scene segmentation model to obtain its predicted value, where the multi-scene segmentation model includes a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponding to one scene and different scene sub-networks corresponding to different scenes; and finally the plurality of pieces of information to be sorted are ranked by their obtained predicted values.
Therefore, in the technical solution provided by this embodiment of the application, because the multi-scene segmentation model includes a sharing sub-network, the features common to all scenes take effect on every scene; and because it includes a plurality of scene sub-networks, the scene features of each scene take effect on that scene. This mitigates the impact of problems such as the limited data volume and limited feature coverage of small scenes.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the information sorting method according to the first aspect.
According to the technical solution provided by this embodiment of the application, a target search word input by a user is obtained; a plurality of pieces of information to be sorted that match the target search word are recalled; a feature set of each piece of information to be sorted is acquired, where the feature set of a piece of information to be sorted includes its common features across the various scenes and its scene features in the corresponding target scene; the feature set of each piece of information to be sorted is input into a pre-trained multi-scene segmentation model to obtain its predicted value, where the multi-scene segmentation model includes a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponding to one scene and different scene sub-networks corresponding to different scenes; and finally the plurality of pieces of information to be sorted are ranked by their obtained predicted values.
Therefore, in the technical solution provided by this embodiment of the application, because the multi-scene segmentation model includes a sharing sub-network, the features common to all scenes take effect on every scene; and because it includes a plurality of scene sub-networks, the scene features of each scene take effect on that scene. This mitigates the impact of problems such as the limited data volume and limited feature coverage of small scenes.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The above detailed description is given to an information sorting method, an information sorting device, an electronic device, and a storage medium, and a specific example is applied in the description to explain the principle and the implementation of the present application, and the description of the above embodiment is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (12)

1. A method for ordering information, the method comprising:
acquiring a target search word input by a user;
recalling a plurality of information to be sorted matched with the target search terms;
acquiring a feature set of each piece of information to be sorted, wherein the feature set of one piece of information to be sorted comprises common features of the information to be sorted in various scenes and scene features of the information to be sorted in corresponding target scenes;
inputting the feature set of each piece of information to be sorted into a pre-trained multi-scene segmentation model to obtain a predicted value of each piece of information to be sorted, wherein the multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes;
and sorting the plurality of pieces of information to be sorted according to the obtained predicted values of the plurality of pieces of information to be sorted.
2. The method of claim 1, wherein the pre-trained multi-scenario segmentation model is configured to:
inputting the common features included in the feature set of each piece of information to be sorted into the sharing sub-network to obtain a first predicted value;
inputting the scene features included in the feature set of each piece of information to be sorted into a corresponding scene sub-network to obtain a second predicted value;
and determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value.
3. The method of claim 2, wherein determining the predicted value of each information to be sorted based on the first predicted value and the second predicted value comprises:
forming a scene predicted value vector by the first predicted value and the second predicted value;
obtaining a scene weight vector, wherein the scene weight vector is composed of a first weight corresponding to a sharing sub-network and a second weight corresponding to a scene sub-network;
and determining the predicted value of each piece of information to be sorted based on the scene predicted value vector and the scene weight vector.
4. The method of claim 3, wherein determining the first weight corresponding to the sharing subnet and the second weight corresponding to the scene subnet comprises:
acquiring historical data of a user;
determining the demand distribution probability of the user to each scene according to the user historical data;
and determining a first weight corresponding to the sharing sub-network and a second weight corresponding to the scene sub-network based on the demand distribution probability of the user for each scene.
5. The method according to any one of claims 2 to 4, wherein the inputting the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain the second predicted value comprises:
inputting the common features and the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain the second predicted value.
6. An information ranking apparatus, characterized in that the apparatus comprises:
the search word acquisition module is used for acquiring a target search word input by a user;
the information to be sorted recalling module is used for recalling a plurality of information to be sorted matched with the target search terms;
the characteristic set acquisition module is used for acquiring a characteristic set of each piece of information to be sorted, wherein one characteristic set of the information to be sorted comprises common characteristics of the information to be sorted in various scenes and scene characteristics of the information to be sorted in corresponding target scenes;
the system comprises a predicted value determining module, a judging module and a judging module, wherein the predicted value determining module is used for inputting the feature set of each piece of information to be sequenced into a pre-trained multi-scene segmentation model to obtain the predicted value of each piece of information to be sequenced, the multi-scene segmentation model comprises a sharing sub-network and a plurality of scene sub-networks, each scene sub-network corresponds to one scene, and different scene sub-networks correspond to different scenes;
and the information sorting module is used for sorting the plurality of pieces of information to be sorted according to the obtained predicted values of the plurality of pieces of information to be sorted.
7. The apparatus of claim 6, wherein the predictor determination module comprises:
the first predicted value obtaining unit is used for inputting the common features included in the feature set of each piece of information to be sorted into the sharing sub-network to obtain a first predicted value;
the second predicted value obtaining unit is used for inputting the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain a second predicted value;
and the predicted value determining unit is used for determining the predicted value of each piece of information to be sorted based on the first predicted value and the second predicted value.
8. The apparatus according to claim 7, wherein the predictor determining unit is specifically configured to:
forming a scene predicted value vector by the first predicted value and the second predicted value;
obtaining a scene weight vector, wherein the scene weight vector is composed of a first weight corresponding to a sharing sub-network and a second weight corresponding to a scene sub-network;
and determining the predicted value of each piece of information to be sorted based on the scene predicted value vector and the scene weight vector.
9. The apparatus according to claim 8, wherein the prediction value determining unit is specifically configured to:
acquiring historical data of a user;
determining the demand distribution probability of the user to each scene according to the user historical data;
and determining a first weight corresponding to the sharing sub-network and a second weight corresponding to the scene sub-network based on the demand distribution probability of the user for each scene.
10. The apparatus according to any one of claims 7 to 9, wherein the second predicted value obtaining unit is specifically configured to:
input the common features and the scene features included in the feature set of each piece of information to be sorted into the corresponding scene sub-network to obtain the second predicted value.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the information ordering method according to any of claims 1 to 5 when executing the program.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the information ranking method according to any one of claims 1 to 5.
CN202010997674.9A 2020-09-21 2020-09-21 Information sorting method and device, electronic equipment and storage medium Pending CN112256957A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010997674.9A CN112256957A (en) 2020-09-21 2020-09-21 Information sorting method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112256957A true CN112256957A (en) 2021-01-22

Family

ID=74231468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010997674.9A Pending CN112256957A (en) 2020-09-21 2020-09-21 Information sorting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112256957A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113672807A (en) * 2021-08-05 2021-11-19 杭州网易云音乐科技有限公司 Recommendation method, device, medium, device and computing equipment
CN117408296A (en) * 2023-12-14 2024-01-16 深圳须弥云图空间科技有限公司 Sequence recommendation depth ordering method and device for multitasking and multi-scene

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120150855A1 (en) * 2010-12-13 2012-06-14 Yahoo! Inc. Cross-market model adaptation with pairwise preference data
CN108416649A (en) * 2018-02-05 2018-08-17 北京三快在线科技有限公司 Search result ordering method, device, electronic equipment and storage medium
US20190057164A1 (en) * 2017-08-16 2019-02-21 Beijing Baidu Netcom Science And Technology Co., Ltd. Search method and apparatus based on artificial intelligence
CN110222838A (en) * 2019-04-30 2019-09-10 北京三快在线科技有限公司 Deep neural network and its training method, device, electronic equipment and storage medium
CN111061946A (en) * 2019-11-15 2020-04-24 汉海信息技术(上海)有限公司 Scenario content recommendation method and device, electronic equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张佳琳;: "基于多场景融合的分布式推荐模型", 四川大学学报(工程科学版), no. 03, 20 May 2015 (2015-05-20) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113672807A (en) * 2021-08-05 2021-11-19 杭州网易云音乐科技有限公司 Recommendation method, device, medium, device and computing equipment
CN113672807B (en) * 2021-08-05 2024-03-05 杭州网易云音乐科技有限公司 Recommendation method, recommendation device, recommendation medium, recommendation device and computing equipment
CN117408296A (en) * 2023-12-14 2024-01-16 深圳须弥云图空间科技有限公司 Sequence recommendation depth ordering method and device for multitasking and multi-scene

Similar Documents

Publication Publication Date Title
CN108108821A (en) Model training method and device
CN111209476A (en) Recommendation method, model generation method, device, medium and equipment
CN105183772A (en) Release information click rate estimation method and apparatus
CN112989169B (en) Target object identification method, information recommendation method, device, equipment and medium
CN112085058A (en) Object combination recall method and device, electronic equipment and storage medium
CN111046188A (en) User preference degree determining method and device, electronic equipment and readable storage medium
CN112232546A (en) Recommendation probability estimation method and device, electronic equipment and storage medium
CN110096617B (en) Video classification method and device, electronic equipment and computer-readable storage medium
CN112182430A (en) Method and device for recommending places, electronic equipment and storage medium
CN112256957A (en) Information sorting method and device, electronic equipment and storage medium
CN112434072A (en) Searching method, searching device, electronic equipment and storage medium
CN112214677A (en) Interest point recommendation method and device, electronic equipment and storage medium
CN110222838B (en) Document sorting method and device, electronic equipment and storage medium
CN110297967B (en) Method, device and equipment for determining interest points and computer readable storage medium
CN112182281B (en) Audio recommendation method, device and storage medium
CN112269943B (en) Information recommendation system and method
CN110880117A (en) False service identification method, device, equipment and storage medium
KR20210061739A (en) Apparatus and method for providing user-customized recommending-information, and computer-readable recording media recorded a program for executing it
CN114092162B (en) Recommendation quality determination method, and training method and device of recommendation quality determination model
CN114363671A (en) Multimedia resource pushing method, model training method, device and storage medium
CN116415063A (en) Cloud service recommendation method and device
CN115858911A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN113128597A (en) Method and device for extracting user behavior characteristics and classifying and predicting user behavior characteristics
CN109299321B (en) Method and device for recommending songs
CN112328835A (en) Method and device for generating vector representation of object, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination