CN116401420A - Search method, device, medium and equipment based on multi-modal feature fusion - Google Patents

Search method, device, medium and equipment based on multi-modal feature fusion

Info

Publication number
CN116401420A
CN116401420A
Authority
CN
China
Prior art keywords: feature, features, display, determining, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310011323.XA
Other languages
Chinese (zh)
Inventor
庞鸿亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lazas Network Technology Shanghai Co Ltd
Original Assignee
Lazas Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lazas Network Technology Shanghai Co Ltd filed Critical Lazas Network Technology Shanghai Co Ltd
Priority to CN202310011323.XA priority Critical patent/CN116401420A/en
Publication of CN116401420A publication Critical patent/CN116401420A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/903 - Querying
    • G06F16/90335 - Query processing
    • G06F16/9038 - Presentation of query results
    • G06F16/95 - Retrieval from the web
    • G06F16/953 - Querying, e.g. by the use of web search engines
    • G06F16/9538 - Presentation of query results
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a search method, device, medium and equipment based on multi-modal feature fusion, wherein the method comprises the following steps: determining search key information received by a search box; sending the search key information to a server, so that the server recalls multiple items of target information based on the search key information and scores each item based on a click-through rate (CTR) prediction model; determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain a fusion feature; and receiving and displaying the multiple items of target information, which are ranked based on their CTR scores. With the solution provided by the embodiments of the application, the exposure click-through rate of the target information can be improved when the target information is ranked and displayed according to the CTR prediction model.

Description

Search method, device, medium and equipment based on multi-modal feature fusion
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method, an apparatus, a medium, and a device for searching based on multi-modal feature fusion.
Background
When using an APP or applet, users often rely on the search function to find targets, such as target stores, target food, target services, and so forth. The CTR (Click-Through Rate) prediction model, as one stage of the overall search pipeline, plays an important role in the final ranking of displayed results. Generally, a search scenario adopts different ranking and display strategies for different search intents: for brand-intent searches, simply ranking by store distance achieves good results, while for general search intents (mainly content intent and address intent), a prediction model is relied upon to capture the best match across dimensions such as person (user), goods (target), place (address) and search (search mode). For general search intents, how to display personalized search results in the search list and improve the exposure click-through rate is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the present application provides a search method, apparatus, medium and electronic device based on multi-modal feature fusion, the main aim being to train a prediction model based on the fusion of multiple display features, so as to improve the exposure click-through rate of search results.
According to one aspect of the present application, there is provided a search method based on multi-modal feature fusion, for a terminal, including: determining search key information received by a search box; sending the search key information to a server, so that the server recalls multiple items of target information based on the search key information and scores each item based on a click-through rate (CTR) prediction model, the prediction model being obtained by training based on original features, fusion features and posterior features, wherein the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature; and receiving and displaying the multiple items of target information, which are ranked based on their CTR scores.
According to one aspect of the present application, there is provided a search method based on multi-modal feature fusion, for a server, including: receiving search key information sent by a terminal; recalling multiple items of target information based on the search key information, and scoring them based on a click-through rate (CTR) prediction model, the prediction model being obtained by training based on original features, fusion features and posterior features, wherein the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature; and returning the multiple items of target information to the terminal, ranked based on their CTR scores.
According to one aspect of the application, a training method for a prediction model based on multi-modal feature fusion is provided, including: determining and acquiring the input features of the prediction model, namely original features, fusion features and posterior features; and training a distillation network structure based on the original features, the fusion features and the posterior features to obtain a click-through rate (CTR) prediction model; wherein the fusion features are obtained by determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights.
According to one aspect of the present application, there is provided a search device based on multi-modal feature fusion, for a terminal, including: a search key information determining unit for determining the search key information received by a search box; a transmission unit for sending the search key information to a server, so that the server recalls multiple items of target information based on the search key information and scores them based on a click-through rate (CTR) prediction model, and for receiving the multiple items of target information returned by the server, the prediction model being obtained by training based on original features, fusion features and posterior features, wherein the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature; and a display unit for displaying the multiple items of target information, ranked based on their CTR scores.
According to one aspect of the present application, there is provided a search device based on multi-modal feature fusion, for a server, including: a transmission unit for receiving search key information sent by a terminal and returning multiple items of target information to the terminal; and a recommendation unit comprising a recall subunit, a click-through rate (CTR) estimation subunit and a mechanism policy subunit, wherein the recall subunit is used for recalling multiple items of target information based on the search key information, and the CTR estimation subunit is used for scoring the multiple items of target information based on a CTR prediction model, so that the terminal displays them ranked by CTR score; the prediction model is obtained by training based on original features, fusion features and posterior features, wherein the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature.
According to one aspect of the present application, there is provided a training device for a prediction model based on multi-modal feature fusion, including: a feature determining unit for determining and acquiring the input features of the prediction model, namely original features, fusion features and posterior features; and a training unit for training a distillation network structure based on the original features, the fusion features and the posterior features to obtain a click-through rate (CTR) prediction model; the feature determining unit further comprises a feature fusion processing subunit for determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature.
According to an aspect of the present application, there is provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above method when executed.
According to one aspect of the present application, there is provided an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the above method.
By means of the above technical solution, the search method, device, medium and equipment based on multi-modal feature fusion fuse multiple display features and train the CTR prediction model based on the fused features. For example, feature extraction and feature alignment are performed on different multi-modal features such as titles, header images and business districts in a store search scenario, an Attention mechanism is used to model the attention behavior of the user and the search query with respect to the different display features, and the attended features are added as fusion features into the existing CTR model for learning and training. With the solution provided by the embodiments of the application, the exposure click-through rate of the target information can be improved when the target information is ranked and displayed according to the CTR prediction model.
The foregoing is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the content of the specification, and in order to make the above and other objects, features and advantages of the present application more apparent, detailed embodiments of the present application are set forth below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
Fig. 1 is a schematic diagram of an implementation scenario of a search method based on multi-modal feature fusion according to an embodiment of the present application;
Fig. 2 is a flowchart of a search method based on multi-modal feature fusion for a terminal according to an embodiment of the present application;
Fig. 3 is a flowchart of a search method based on multi-modal feature fusion for a server according to an embodiment of the present application;
Fig. 4 is a flowchart of a training method for a prediction model based on multi-modal feature fusion according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a training method for a prediction model based on multi-modal feature fusion according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a search device based on multi-modal feature fusion for a terminal according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a search device based on multi-modal feature fusion for a server according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a training device for a prediction model based on multi-modal feature fusion according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application. It should be noted that, where no conflict arises, the embodiments and the features in the embodiments may be combined with each other.
Referring to fig. 1, a schematic implementation scenario of the search method based on multi-modal feature fusion provided in an embodiment of the present application is shown. The scenario illustrates a terminal and a server. The terminal may be a user terminal device such as a mobile phone, a computer or a smart watch; the server refers to a network-side device that performs the search in response to a terminal search request and returns the search results.
At the terminal, a search page of an APP is schematically illustrated. In the APP page, a search box is generally disposed at the top (or at another position), and a user may provide search key information by means of text input, picture insertion, voice input, etc.; for example, the user inputs the key information "parent-child photography". After the APP acquires the search key information input by the user, it transmits the information over the network to a network-side server, and the server returns search results based on a recommendation system.
In this scenario, a recommendation system is illustrated on the server side, where the recommendation system includes a recall module, a fine-ranking (CTR estimation) module, and a mechanism policy module. For a given piece of search key information, the recall module operates over all items (target information), while the fine-ranking module operates over the items output by recall. The fine-ranking module scores items based on a CTR prediction model and ranks the search result items by their CTR scores; it can further take user experience into account and consider indexes such as diversity. The CTR prediction model is widely applicable to fields such as personalized recommendation, information retrieval and online advertising, and is used to learn and predict user feedback, which mainly includes clicking, favoriting, purchasing and the like. As in this example, based on the search key information "parent-child photography", the server feeds back target information personalized for the user (e.g., information on a number of parent-child photography stores), and the terminal APP can display the store information in a ranked image-and-text list.
The inventor of the present application found through research that, as can be seen from the concrete form of the search results, the display features have pronounced image-text multi-modal characteristics, and making reasonable use of these display features in a prediction model is a major challenge. In the embodiments of the present application, display features are features that reflect or influence the display form of the search results, such as a header image feature, a title feature, a text feature, discrete class features (including, for example, a display category feature, a display business district feature and a word-of-mouth ranking feature) and continuous class features (including, for example, a comment count feature, a price feature and a star rating feature). Statistical analysis of CTR (click-through rate) and CVR (conversion rate) with respect to individual display features shows that users' click decisions on exposed target information depend in part on these display features as well as on the search query and personal preferences. Therefore, the embodiments of the present application propose to model, based on the attention mechanism, the attention distribution of a query vector (determined from the query, personal preferences, etc.) over these display features, so as to fuse the display features, and to train the CTR prediction model based on the fusion features.
Referring to fig. 2, a flowchart of the search method based on multi-modal feature fusion for a terminal according to an embodiment of the present application is shown.
S201: determine the search key information received by the search box.
S202: send the search key information to the server, so that the server recalls multiple items of target information based on the search key information and scores them based on a CTR prediction model.
In the embodiment of the present application, the recalled items can be scored based on a CTR prediction model, and finally ranked based on their CTR scores. The CTR prediction model is widely applicable to fields such as personalized recommendation, information retrieval and online advertising, and is used to learn and predict user feedback, which mainly includes clicking, favoriting, purchasing and the like. The input to a CTR prediction model typically includes multiple features. In the embodiment of the present application, the CTR network is trained based on the original CTR features, the fusion features and the posterior features to obtain the prediction model.
The fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain a fusion feature.
In embodiments of the present application, the display features may include a header image feature, a title feature, a text feature, a discrete class feature or a continuous class feature.
First, the display features are preprocessed (vectorized). For example, preprocessing the various display features may include: extracting features from the header image feature through a convolutional neural network model to obtain an initial header image feature vector, and reducing its dimensionality to obtain the header image feature vector; for the title feature, performing feature vectorization by a word embedding method to obtain a title feature vector; for the text feature, performing feature migration through a neural network model to obtain a text feature vector; mapping the discrete class features to a dense vector space to obtain discrete class feature vectors; and discretizing the continuous class features and vectorizing the discretized features to obtain continuous class feature vectors.
The preprocessed feature vectors are then assigned attention weights based on an attention mechanism. When a neural network model processes a large amount of input information, an attention mechanism allows it to select only some key input information for processing, improving the efficiency of the network. The calculation of the attention weights (attention values) can be divided into two steps: (1) calculate an attention distribution over all input information; (2) calculate a weighted average of the input information according to the attention distribution. In the embodiment of the present application, the attention distribution over the display feature vectors serving as input information must be calculated, and a weighted average of the display features is then computed from that distribution. For determining the attention distribution, consider the following scenario: the input information vectors X serve as an information store, and a query vector q is given for finding and selecting certain information in X. Rather than picking out a single piece of stored information, a soft selection mechanism is adopted: a little of every piece of information is extracted, with the most relevant information extracted the most. It can thus be appreciated that the attention distribution over the display feature vectors must be determined with reference to the query vector. In the embodiment of the present application, one or more of the search feature (Query), user-side features (user behavior statistics) and content features (search result content features) are taken as the query features, the query features are vectorized to obtain a query vector q, and the attention distribution of q over the display feature vectors X is then established, which gives the attention weight of each display feature.
Finally, the display features are fused according to their attention weights, that is, a weighted average is computed, to obtain the fusion feature, as in the sketch below.
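The following is a minimal sketch of this attention-weighted fusion, assuming PyTorch; the dot-product score function and the 16-dimensional aligned feature size are illustrative assumptions, not details taken from the patent:

```python
import torch
import torch.nn.functional as F

def fuse_display_features(q: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
    """q: (d,) query vector built from Query/User/Context features.
    feats: (n, d) display feature vectors (header image, title, text, ...)
    already aligned to a unified dimension d.
    Returns the (d,) fusion feature, the attention-weighted average of feats."""
    scores = feats @ q                # (n,) relevance of each display feature to the query
    alpha = F.softmax(scores, dim=0)  # (n,) attention distribution over the display features
    return alpha @ feats              # weighted average = fusion feature

q = torch.randn(16)                   # hypothetical 16-dim query vector
feats = torch.randn(5, 16)            # e.g. header image, title, text, discrete, continuous
fusion_feature = fuse_display_features(q, feats)
```

In a trained model, q and feats would come from the preprocessing described above rather than from random tensors.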
The fused multi-modal features (fusion features) enter the existing CTR network for training together with the other original CTR features and the posterior features. The CTR prediction model can adopt a distillation network structure: constrained by minimizing a distillation loss, the information captured by a structurally complex Teacher network that uses posterior features is migrated to a Student network, and the Student network is used online for CTR scoring. A sketch of this training setup follows.
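Below is a sketch of the Teacher-Student setup just described, assuming PyTorch; the network shapes, the MSE form of the distillation loss and its weight are illustrative assumptions rather than details from the patent:

```python
import torch
import torch.nn as nn

class CTRNet(nn.Module):
    """A toy CTR tower producing a click logit."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

teacher = CTRNet(in_dim=48)  # sees original + fusion + posterior features
student = CTRNet(in_dim=32)  # sees only the features available at serving time

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def training_loss(online_feats, posterior_feats, clicks):
    """online_feats: (B, 32); posterior_feats: (B, 16); clicks: (B, 1) 0/1 labels."""
    t_logit = teacher(torch.cat([online_feats, posterior_feats], dim=-1))
    s_logit = student(online_feats)
    distill = mse(s_logit, t_logit.detach())  # distillation loss pulls the student toward the teacher
    return bce(t_logit, clicks) + bce(s_logit, clicks) + 0.5 * distill

# At serving time only the student scores candidates:
# ctr_score = torch.sigmoid(student(online_feats))
```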
S203: receive and display the multiple items of target information, which are ranked based on their CTR scores.
For example, if the search key information is "parent-child photography", the server feeds back target information personalized for the user (e.g., information on a number of parent-child photography stores), and the terminal APP can display the store information in an image-and-text list.
Referring to fig. 3, a flowchart of the search method based on multi-modal feature fusion for a server provided in an embodiment of the present application is shown.
S301: receive the search key information sent by a terminal.
S302: recall multiple items of target information based on the search key information, and score them based on a click-through rate (CTR) prediction model.
The prediction model is obtained by training based on the original features, the fusion features and the posterior features.
In the embodiment of the present application, the recalled items can be scored based on a CTR prediction model, and finally ranked based on their CTR scores. The CTR prediction model is widely applicable to fields such as personalized recommendation, information retrieval and online advertising, and is used to learn and predict user feedback, which mainly includes clicking, favoriting, purchasing and the like. The input to a CTR prediction model typically includes multiple features.
In the embodiment of the present application, the CTR network is trained based on the original CTR features, the fusion features and the posterior features to obtain the prediction model. The original features are features available at the time of online model inference, such as store category, store word segmentation and the user's click-through rate on the store. Posterior features are features that cannot be obtained at online inference time; for example, video watch duration is unknown at the moment a click is predicted and only exists after the predicted event occurs, yet it is an important feature. In practice, posterior features can be modeled and utilized by means of feature distillation.
The fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain a fusion feature.
In embodiments of the present application, the display features may include a header image feature, a title feature, a text feature, a discrete class feature or a continuous class feature.
First, the display features are preprocessed (vectorized). For example, preprocessing the various display features may include: extracting features from the header image feature through a convolutional neural network model to obtain an initial header image feature vector, and reducing its dimensionality to obtain the header image feature vector; for the title feature, performing feature vectorization by a word embedding method to obtain a title feature vector; for the text feature, performing feature migration through a neural network model to obtain a text feature vector; mapping the discrete class features to a dense vector space to obtain discrete class feature vectors; and discretizing the continuous class features and vectorizing the discretized features to obtain continuous class feature vectors.
The preprocessed feature vectors are then assigned attention weights based on an attention mechanism. When a neural network model processes a large amount of input information, an attention mechanism allows it to select only some key input information for processing, improving the efficiency of the network. The calculation of the attention weights (attention values) can be divided into two steps: (1) calculate an attention distribution over all input information; (2) calculate a weighted average of the input information according to the attention distribution. In the embodiment of the present application, the attention distribution over the display feature vectors serving as input information must be calculated, and a weighted average of the display features is then computed from that distribution. For determining the attention distribution, consider the following scenario: the input information vectors X serve as an information store, and a query vector q is given for finding and selecting certain information in X. Rather than picking out a single piece of stored information, a soft selection mechanism is adopted: a little of every piece of information is extracted, with the most relevant information extracted the most. It can thus be appreciated that the attention distribution over the display feature vectors must be determined with reference to the query vector. In the embodiment of the present application, one or more of the search feature (Query), user-side features (user behavior statistics) and content features (search result content features) are taken as the query features, the query features are vectorized to obtain a query vector q, and the attention distribution of q over the display feature vectors X is then established, which gives the attention weight of each display feature.
Finally, the display features are fused according to their attention weights to obtain the fusion feature.
The fused multi-modal features (fusion features) enter the existing CTR network for training together with the other original CTR features and the posterior features. The CTR prediction model can adopt a distillation network structure: constrained by minimizing a distillation loss, the information captured by a structurally complex Teacher network that uses posterior features is migrated to a Student network, and the Student network is used online for CTR scoring.
S303: return the multiple items of target information to the terminal, ranked based on their CTR scores.
Thus, in the embodiment of the present application, multiple display features are fused and the CTR prediction model is trained based on the fused features. For example, feature extraction and feature alignment are performed on different multi-modal features such as titles, header images and business districts in a store search scenario, an Attention mechanism is used to model the attention behavior of the user and the search query with respect to the different display features, and the attended features are added as fusion features into the existing CTR model for learning and training. With this solution, the exposure click-through rate of the target information can be improved when the target information is ranked and displayed according to the CTR prediction model.
Referring to fig. 4, a flowchart of the training method for a prediction model based on multi-modal feature fusion according to an embodiment of the present application is shown.
S401: determine and acquire the input features of the prediction model: original features, fusion features and posterior features.
In one implementation, at least two display features are determined, the attention weight of each display feature is determined according to an attention mechanism, and the at least two display features are fused according to their attention weights to obtain a fusion feature.
In one implementation, the process of determining the attention weight of each display feature according to the attention mechanism may include:
1. vectorizing the display features to obtain display feature vectors;
2. determining at least one of the search feature, user-side features and content features as the query features, and vectorizing the query features to obtain a query vector;
3. establishing, according to the attention mechanism, the attention weight distribution of the query vector over the display feature vectors, so as to determine the attention weight of each display feature.
In an embodiment of the present application, the display features include at least two of a header image feature, a title feature, a text feature, a discrete class feature and a continuous class feature. Vectorizing the display features to obtain display feature vectors may proceed as follows: extract features from the header image feature through a convolutional neural network model to obtain an initial header image feature vector, and reduce its dimensionality to obtain the header image feature vector; for the title feature, perform feature vectorization by a word embedding method to obtain a title feature vector; for the text feature, perform feature migration through a neural network model to obtain a text feature vector; map the discrete class features to a dense vector space to obtain discrete class feature vectors; and discretize the continuous class features and vectorize the discretized features to obtain continuous class feature vectors.
S402: based on the original features, the fusion features and the posterior features, train the distillation network structure to obtain a click-through rate (CTR) prediction model.
In one implementation, the information captured by the teacher network is migrated to the student network under a distillation-loss minimization constraint, and the student network is used for CTR scoring.
Referring to fig. 5, a schematic diagram of the training method for a prediction model based on multi-modal feature fusion according to an embodiment of the present application is shown.
Fig. 5 shows:
a preprocessing module, which preprocesses (vectorizes) each display feature;
a fusion module, which fuses the preprocessed display features based on an attention mechanism to obtain the fusion feature;
and a CTR prediction model with a distillation structure, in which the information captured by a structurally complex Teacher network that uses posterior features is migrated, under a distillation-loss minimization constraint, to a Student network, and the Student network performs CTR scoring online.
With reference to fig. 5, the feature preprocessing and feature vector fusion aspects are mainly described below.
(1) For the header image feature, feature extraction can be performed through a convolutional neural network model to obtain an initial header image feature vector, followed by dimension reduction to obtain the header image feature vector. For example, the VGG16 model may be used as the feature extractor: the 4096-dimensional vector output by the VGG16 fully connected layer is taken as the header image feature, and in the multi-modal fusion stage a two-layer NN reduces it to 16 dimensions before it enters the Attention module (see the sketch below). In this way, image features can be extracted effectively and migrated to the CTR scenario, so that they are updated during training.
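A minimal sketch of this header-image pipeline, assuming PyTorch and a recent torchvision; the 4096-to-16 reduction follows the text, while the hidden width of 256 and the input preprocessing are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
# Drop the final 1000-way classification layer so the network outputs the
# 4096-dimensional fully connected activation instead.
vgg16.classifier = nn.Sequential(*list(vgg16.classifier.children())[:-1])
vgg16.eval()

reducer = nn.Sequential(        # two-layer NN mapping 4096 -> 16 before the Attention module
    nn.Linear(4096, 256),
    nn.ReLU(),
    nn.Linear(256, 16),
)

with torch.no_grad():
    img = torch.randn(1, 3, 224, 224)   # a preprocessed header image tensor
    head_vec_4096 = vgg16(img)          # initial header image feature vector
head_vec_16 = reducer(head_vec_4096)    # dimension-reduced vector fed into Attention
```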
(2) For the title and text features, feature vectorization of the text can be accomplished by word embedding. For example, the 768-dimensional vector output by the pooling layer of the preprocessing module is used as the vectorized representation of the title feature, and for the text feature a two-layer NN can be used to accomplish migration training (a sketch follows).
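The 768-dimensional pooled output mentioned above matches a BERT-base style encoder, so the sketch below makes that assumption; the model name and the MLP sizes are illustrative, not taken from the patent:

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # hypothetical encoder choice
encoder = AutoModel.from_pretrained("bert-base-chinese")

inputs = tokenizer("亲子摄影工作室", return_tensors="pt")  # an example store title
title_vec = encoder(**inputs).pooler_output               # (1, 768) title feature vector

# Two-layer NN used for migration training of the longer text features.
text_mlp = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 16))
text_vec = text_mlp(title_vec)                             # (1, 16) aligned text vector
```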
(3) Discrete class features can be mapped to a dense vector space to obtain discrete class feature vectors. Discrete ID class features, such as the display category, display business district and word-of-mouth ranking features, are mapped to a dense vector space.
(4) Continuous class features are discretized, and the discretized features are vectorized to obtain continuous class feature vectors. Continuous class features include continuously valued display features such as comment count, average price and star rating score. For example, bucketized discretization followed by embedding can be used before the features enter the multi-modal fusion module (see the sketch below). Discretization is used because depth models are generally sensitive to the distribution of continuous features; for some high-frequency data, unreasonable scaling of continuous features can cause serious training problems, and discretization reduces the model's sensitivity to the data distribution.
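A sketch of the discrete-ID embedding of (3) and the bucketized embedding of (4), assuming PyTorch; the bucket boundaries, vocabulary size and embedding dimension are illustrative assumptions:

```python
import torch
import torch.nn as nn

category_emb = nn.Embedding(num_embeddings=10_000, embedding_dim=16)  # discrete IDs: category, business district, ...
price_emb = nn.Embedding(num_embeddings=8, embedding_dim=16)          # one row per price bucket

price_buckets = torch.tensor([10., 20., 40., 80., 150., 300., 600.])  # 7 boundaries -> 8 buckets

def embed_price(avg_price: torch.Tensor) -> torch.Tensor:
    """Discretize a continuous average price into a bucket, then embed the bucket id."""
    bucket_id = torch.bucketize(avg_price, price_buckets)
    return price_emb(bucket_id)

cont_vec = embed_price(torch.tensor([58.0]))  # (1, 16) continuous-feature vector
disc_vec = category_emb(torch.tensor([42]))   # (1, 16) discrete-feature vector
```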
After the display features are preprocessed, they enter the fusion module, where feature fusion is performed based on the attention mechanism.
First, each preprocessed display feature, after being aligned to a unified dimension, enters an Attention network keyed by the Query, Context and User features to complete the weight calculation; then the display features are fused based on their attention weights to obtain the fusion feature.
The calculation of the attention weights (attention values) can be divided into two steps: (1) calculate an attention distribution over all input information; (2) calculate a weighted average of the input information according to the attention distribution. In the embodiment of the present application, the attention distribution over the display feature vectors serving as input information must be calculated, and a weighted average of the display features is then computed from that distribution. For determining the attention distribution, consider the following scenario: the input information vectors X serve as an information store, and a query vector q is given for finding and selecting certain information in X. Rather than picking out a single piece of stored information, a soft selection mechanism is adopted: a little of every piece of information is extracted, with the most relevant information extracted the most. It can thus be appreciated that the attention distribution over the display feature vectors must be determined with reference to the query vector. In the embodiment of the present application, one or more of the search feature (Query), user-side features (user behavior statistics) and content features (Context features) are taken as the query features, the query features are vectorized to obtain a query vector q, and the attention distribution of q over the display feature vectors X is then established, which gives the attention weight of each display feature; written out, these two steps correspond to the formula below.
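The notation below is a reconstruction of the two steps in the standard soft-attention form, not notation used in the patent itself:

```latex
\alpha_i = \frac{\exp\big(s(\mathbf{q}, \mathbf{x}_i)\big)}{\sum_{j=1}^{N} \exp\big(s(\mathbf{q}, \mathbf{x}_j)\big)},
\qquad
\mathbf{f} = \sum_{i=1}^{N} \alpha_i \, \mathbf{x}_i
```

where q is the query vector, x_i is the i-th of N display feature vectors, s is a scoring function (for example a dot product or a small MLP), alpha_i is the attention weight of the i-th display feature, and f is the fusion feature.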
The fused multi-modal features, i.e., the fusion features, are input into the existing CTR network for training together with the other original CTR features and the posterior features. The CTR model can adopt a distillation network structure: under a distillation-loss minimization constraint, the information captured by a structurally complex Teacher network that uses posterior features is migrated to a Student network, and the Student network performs CTR scoring online.
For example, in scenarios such as store search, multi-modal features such as the header image, title, store rating and business district are fused into one representation: display features such as pictures and text are embedded as vector representations, and by modeling the Query, Context and User-side features with an Attention mechanism, different attention distributions over the vector features of the header image, text, store rating, business district, etc. are realized, achieving a fused representation of the store display features on the search page. Used as a module in the CTR prediction model for store search, this solution yields a notable improvement in the CTR and CVR indexes.
Referring to fig. 6, a schematic structural diagram of the search device based on multi-modal feature fusion for a terminal according to an embodiment of the present application is shown.
The search device based on multi-modal feature fusion includes:
a search key information determining unit 601, configured to determine the search key information received by the search box;
a transmission unit 602, configured to send the search key information to a server, so that the server recalls multiple items of target information based on the search key information and scores them based on a click-through rate (CTR) prediction model, and configured to receive the multiple items of target information returned by the server;
wherein the prediction model is obtained by training based on original features, fusion features and posterior features, and the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature;
and a display unit 603, configured to display the multiple items of target information, ranked based on their CTR scores.
Referring to fig. 7, a schematic structural diagram of the search device based on multi-modal feature fusion for a server according to an embodiment of the present application is shown. The search device based on multi-modal feature fusion includes:
a transmission unit 701, configured to receive the search key information sent by a terminal and to return multiple items of target information to the terminal;
a recommendation unit 702, which includes a recall subunit 7021, a click-through rate (CTR) estimation subunit 7022 and a mechanism policy subunit 7023, wherein the recall subunit 7021 is configured to recall multiple items of target information based on the search key information; the CTR estimation subunit 7022 is configured to score the multiple items of target information based on a CTR prediction model, so that the terminal displays them ranked by CTR score; and the mechanism policy subunit 7023 is configured to set the recall and ranking policies;
wherein the prediction model is obtained by training based on original features, fusion features and posterior features, and the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature.
In one implementation, the CTR estimation subunit 7022 is further configured to: vectorize the display features to obtain display feature vectors; determine at least one of the search feature, user-side features and content features as the query features, and vectorize the query features to obtain a query vector; and establish, according to the attention mechanism, the attention weight distribution of the query vector over the display feature vectors, so as to determine the attention weight of each display feature.
In one implementation, the display features include a header image feature, a title feature, a text feature, a discrete class feature or a continuous class feature;
and the CTR estimation subunit 7022 is further configured to:
extract features from the header image feature through a convolutional neural network model to obtain an initial header image feature vector, and reduce its dimensionality to obtain the header image feature vector;
for the title feature, perform feature vectorization by a word embedding method to obtain a title feature vector;
perform feature migration on the text feature through a neural network model to obtain a text feature vector;
map the discrete class features to a dense vector space to obtain discrete class feature vectors;
and discretize the continuous class features and vectorize the discretized features to obtain continuous class feature vectors.
In one implementation, the CTR prediction model adopts a distillation network structure, in which the information captured by the teacher network is migrated to the student network under a distillation-loss minimization constraint, and the student network is used for CTR scoring.
Referring to fig. 8, a schematic structural diagram of the training device for a prediction model based on multi-modal feature fusion according to an embodiment of the present application is shown. The training device for a prediction model based on multi-modal feature fusion includes:
a feature determining unit 801, configured to determine and acquire the input features of the prediction model: original features, fusion features and posterior features;
a training unit 802, configured to train the distillation network structure based on the original features, the fusion features and the posterior features to obtain a click-through rate (CTR) prediction model;
wherein the feature determining unit 801 further includes a feature fusion processing subunit 8011, configured to determine at least two display features, determine the attention weight of each display feature according to an attention mechanism, and fuse the at least two display features according to their attention weights to obtain the fusion feature.
In one implementation, the CTR prediction model adopts a distillation network structure, in which the information captured by the teacher network is migrated to the student network under a distillation-loss minimization constraint, and the student network is used for CTR scoring.
In one implementation, the feature fusion processing subunit 8011 is specifically configured to: vectorize the display features to obtain display feature vectors; determine at least one of the search feature, user-side features and content features as the query features, and vectorize the query features to obtain a query vector; and establish, according to the attention mechanism, the attention weight distribution of the query vector over the display feature vectors, so as to determine the attention weight of each display feature.
In one implementation, the display features include a header image feature, a title feature, a text feature, a discrete class feature or a continuous class feature;
and the feature fusion processing subunit 8011 is specifically configured to:
extract features from the header image feature through a convolutional neural network model to obtain an initial header image feature vector, and reduce its dimensionality to obtain the header image feature vector;
for the title feature, perform feature vectorization by a word embedding method to obtain a title feature vector;
perform feature migration on the text feature through a neural network model to obtain a text feature vector;
map the discrete class features to a dense vector space to obtain discrete class feature vectors;
and discretize the continuous class features and vectorize the discretized features to obtain continuous class feature vectors.
Embodiments of the present application also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when executed.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
(1) determining search key information received by a search box;
(2) sending the search key information to a server, so that the server recalls multiple items of target information based on the search key information and scores them based on a click-through rate (CTR) prediction model;
wherein the prediction model is obtained by training based on original features, fusion features and posterior features, and the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature;
(3) receiving and displaying the multiple items of target information, which are ranked based on their CTR scores.
Or:
(1) receiving search key information sent by a terminal;
(2) recalling multiple items of target information based on the search key information, and scoring them based on a click-through rate (CTR) prediction model;
wherein the prediction model is obtained by training based on original features, fusion features and posterior features, and the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature;
(3) returning the multiple items of target information to the terminal, ranked based on their CTR scores.
Or:
(1) determining and acquiring the input features of a prediction model: original features, fusion features and posterior features;
(2) training a distillation network structure based on the original features, the fusion features and the posterior features to obtain a click-through rate (CTR) prediction model;
wherein the fusion features are obtained by determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
Embodiments of the present application also provide an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the above processor may be configured to perform the following steps by means of a computer program:
(1) determining search key information received by a search box;
(2) sending the search key information to a server, so that the server recalls multiple items of target information based on the search key information and scores them based on a click-through rate (CTR) prediction model;
wherein the prediction model is obtained by training based on original features, fusion features and posterior features, and the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature;
(3) receiving and displaying the multiple items of target information, which are ranked based on their CTR scores.
Or:
(1) receiving search key information sent by a terminal;
(2) recalling multiple items of target information based on the search key information, and scoring them based on a click-through rate (CTR) prediction model;
wherein the prediction model is obtained by training based on original features, fusion features and posterior features, and the fusion features are obtained by: determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion feature;
(3) returning the multiple items of target information to the terminal, ranked based on their CTR scores.
Or:
(1) determining and acquiring the input features of a prediction model: original features, fusion features and posterior features;
(2) training a distillation network structure based on the original features, the fusion features and the posterior features to obtain a click-through rate (CTR) prediction model;
wherein the fusion features are obtained by determining at least two display features, determining the attention weight of each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights.
Alternatively, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, which are not repeated here.
The numbering of the foregoing embodiments of the present application is for description only and does not imply that any embodiment is better or worse than another.
In the foregoing embodiments of the present application, each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function, and other divisions are possible in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not carried out. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through certain interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
The foregoing descriptions are merely preferred embodiments of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (10)

1. A search method based on multi-modal feature fusion, characterized by comprising:
determining search key information received via a search box;
sending the search key information to a server, so that the server recalls multiple items of target information based on the search key information and performs CTR scoring on the multiple items of target information based on a click-through rate (CTR) prediction model,
wherein the CTR prediction model is trained on original features, fusion features, and posterior features, and the fusion features are obtained by: determining at least two display features, determining an attention weight for each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion features; and
receiving and displaying the multiple items of target information, wherein the multiple items of target information are ranked based on their CTR scores.
2. A search method based on multi-modal feature fusion, characterized by comprising:
receiving search key information sent by a terminal;
recalling multiple items of target information based on the search key information, and performing CTR scoring on the multiple items of target information based on a click-through rate (CTR) prediction model,
wherein the CTR prediction model is trained on original features, fusion features, and posterior features, and the fusion features are obtained by: determining at least two display features, determining an attention weight for each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion features; and
returning the multiple items of target information to the terminal, wherein the multiple items of target information are ranked based on their CTR scores.
3. The method according to claim 2, wherein determining the attention weight of each display feature according to the attention mechanism comprises:
vectorizing the display features to obtain display feature vectors;
determining at least one of a search feature, a user-side feature, and a content feature as a query feature, and vectorizing the query feature to obtain a query vector; and
establishing, according to the attention mechanism, an attention weight distribution of the query vector over the display feature vectors, so as to determine the attention weight of each display feature.
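By way of a non-limiting sketch of this attention-weighting step in Python (the scaled dot-product scoring and the dimensions are illustrative assumptions, not mandated by the claim):

```python
import numpy as np

def attention_fuse(query_vec, display_vecs):
    """Weight and fuse display feature vectors with a query vector.

    query_vec:    (d,) query vector built from search/user-side/content features.
    display_vecs: (n, d) matrix, one row per display feature vector.
    """
    # Score each display feature vector against the query (scaled dot product).
    scores = display_vecs @ query_vec / np.sqrt(query_vec.shape[0])
    # Softmax turns the scores into an attention weight distribution.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The fusion feature is the attention-weighted sum of the display vectors.
    fused = weights @ display_vecs
    return fused, weights
```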
4. The method according to claim 3, wherein the display features comprise a header image feature, a title feature, a text feature, a discrete feature, or a continuous feature; and
vectorizing the display features to obtain the display feature vectors comprises:
performing feature extraction on the header image feature through a convolutional neural network model to obtain an initial header image feature vector, and performing dimensionality reduction on the initial header image feature vector to obtain a header image feature vector;
for the title feature, performing feature vectorization by a word embedding method to obtain a title feature vector;
performing feature transfer on the text feature through a neural network model to obtain a text feature vector;
mapping the discrete feature into a dense vector space to obtain a discrete feature vector; and
discretizing the continuous feature and vectorizing the discretized feature to obtain a continuous feature vector.
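A rough Python sketch of these per-modality vectorization steps follows (the ResNet backbone, embedding sizes, and bucket boundaries are arbitrary assumptions for illustration, and the text-feature transfer step is omitted):

```python
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models

# Header image: extract features with a CNN, then reduce dimensionality.
cnn = models.resnet18(weights=None)
cnn.fc = nn.Identity()                  # keep the 512-d backbone output
reduce_dim = nn.Linear(512, 64)         # dimensionality-reduction step

def header_image_vector(image):         # image: (3, 224, 224) float tensor
    initial = cnn(image.unsqueeze(0))   # initial header image feature vector
    return reduce_dim(initial)          # reduced header image feature vector

# Title: word-embedding lookup, averaged into a title feature vector.
title_emb = nn.Embedding(num_embeddings=50_000, embedding_dim=64)

def title_vector(token_ids):            # token_ids: LongTensor of word ids
    return title_emb(token_ids).mean(dim=0)

# Discrete feature: map category ids into a dense vector space.
discrete_emb = nn.Embedding(num_embeddings=1_000, embedding_dim=64)

# Continuous feature: discretize into buckets, then embed the bucket id.
buckets = np.array([0.0, 1.0, 5.0, 10.0])   # arbitrary bucket boundaries
bucket_emb = nn.Embedding(num_embeddings=len(buckets) + 1, embedding_dim=64)

def continuous_vector(value):
    bucket_id = int(np.digitize(value, buckets))
    return bucket_emb(torch.tensor(bucket_id))
```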
5. A training method for a prediction model based on multi-modal feature fusion, characterized by comprising:
determining and acquiring the input features of the prediction model: original features, fusion features, and posterior features; and
training a distillation network structure based on the original features, the fusion features, and the posterior features to obtain a click-through rate (CTR) prediction model,
wherein the fusion features are obtained by: determining at least two display features, determining an attention weight for each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion features.
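One plausible reading of this distillation setup, sketched below in Python, lets a teacher tower consume the posterior features (which are unavailable at serving time) while a student tower learns from the teacher; the tower sizes, dimensions, and loss weighting are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """Small MLP producing a CTR logit from a feature vector."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Hypothetical dimensions for original, fusion, and posterior features.
D_ORIG, D_FUSE, D_POST = 32, 16, 8
teacher = Tower(D_ORIG + D_FUSE + D_POST)  # teacher sees all three inputs
student = Tower(D_ORIG + D_FUSE)           # student serves without posterior features

def distill_step(orig, fuse, post, clicks, opt):
    # clicks: float tensor of 0/1 click labels, one per example.
    t_logit = teacher(torch.cat([orig, fuse, post], dim=-1))
    s_logit = student(torch.cat([orig, fuse], dim=-1))
    # Hard losses against click labels plus a soft loss toward the teacher.
    hard_s = F.binary_cross_entropy_with_logits(s_logit, clicks)
    hard_t = F.binary_cross_entropy_with_logits(t_logit, clicks)
    soft = F.mse_loss(torch.sigmoid(s_logit), torch.sigmoid(t_logit).detach())
    loss = hard_s + hard_t + soft
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

A single optimizer over both towers, e.g. torch.optim.Adam(list(teacher.parameters()) + list(student.parameters())), would drive training; only the student tower, which needs no posterior features, would be used for online CTR scoring.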
6. A search apparatus based on multi-modal feature fusion, applied to a terminal, characterized by comprising:
a search key information determining unit, configured to determine search key information received via a search box;
a transmission unit, configured to send the search key information to a server, so that the server recalls multiple items of target information based on the search key information and performs CTR scoring on the multiple items of target information based on a click-through rate (CTR) prediction model, and further configured to receive the multiple items of target information returned by the server,
wherein the CTR prediction model is trained on original features, fusion features, and posterior features, and the fusion features are obtained by: determining at least two display features, determining an attention weight for each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion features; and
a display unit, configured to display the multiple items of target information, wherein the multiple items of target information are ranked based on their CTR scores.
7. A search apparatus based on multi-modal feature fusion, applied to a server, characterized by comprising:
a transmission unit, configured to receive search key information sent by a terminal and to return multiple items of target information to the terminal; and
a recommendation unit comprising a recall subunit, a click-through rate (CTR) prediction subunit, and a mechanism-policy subunit, wherein the recall subunit is configured to recall the multiple items of target information based on the search key information, and the CTR prediction subunit is configured to perform CTR scoring on the multiple items of target information based on a CTR prediction model, so that the terminal ranks and displays the multiple items of target information based on the CTR scores,
wherein the CTR prediction model is trained on original features, fusion features, and posterior features, and the fusion features are obtained by: determining at least two display features, determining an attention weight for each display feature according to an attention mechanism, and fusing the at least two display features according to their attention weights to obtain the fusion features.
8. A training apparatus for a prediction model based on multi-modal feature fusion, characterized by comprising:
a feature determining unit, configured to determine and acquire the input features of the prediction model: original features, fusion features, and posterior features; and
a training unit, configured to train a distillation network structure based on the original features, the fusion features, and the posterior features to obtain a click-through rate (CTR) prediction model,
wherein the feature determining unit further comprises a feature fusion processing subunit configured to determine at least two display features, determine an attention weight for each display feature according to an attention mechanism, and fuse the at least two display features according to their attention weights to obtain the fusion features.
9. A storage medium having a computer program stored therein, characterized in that the computer program is arranged to perform, when run, the method according to any one of claims 1 to 5.
10. An electronic device comprising a memory and a processor, characterized in that the memory stores a computer program and the processor is arranged to run the computer program to perform the method according to any one of claims 1 to 5.
CN202310011323.XA 2023-01-05 2023-01-05 Searching method, device, medium and equipment based on multi-mode feature fusion Pending CN116401420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310011323.XA CN116401420A (en) 2023-01-05 2023-01-05 Searching method, device, medium and equipment based on multi-mode feature fusion

Publications (1)

Publication Number Publication Date
CN116401420A true CN116401420A (en) 2023-07-07

Family

ID=87012936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310011323.XA Pending CN116401420A (en) 2023-01-05 2023-01-05 Searching method, device, medium and equipment based on multi-mode feature fusion

Country Status (1)

Country Link
CN (1) CN116401420A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132591A (en) * 2023-10-24 2023-11-28 杭州宇谷科技股份有限公司 Battery data processing method and system based on multi-mode information
CN117132591B (en) * 2023-10-24 2024-02-06 杭州宇谷科技股份有限公司 Battery data processing method and system based on multi-mode information
CN118415601A (en) * 2024-07-04 2024-08-02 荣耀终端有限公司 Pulse wave velocity measurement method and electronic equipment

Similar Documents

Publication Publication Date Title
CN111553754B (en) Updating method and device of behavior prediction system
CN110737783B (en) Method and device for recommending multimedia content and computing equipment
EP2202646B1 (en) Dynamic presentation of targeted information in a mixed media reality recognition system
CN116401420A (en) Searching method, device, medium and equipment based on multi-mode feature fusion
CN108885624B (en) Information recommendation system and method
CN107924401A (en) Video recommendations based on video title
CN106126582A (en) Recommend method and device
CN110390033A (en) Training method, device, electronic equipment and the storage medium of image classification model
CN111310011B (en) Information pushing method and device, electronic equipment and storage medium
CN113742567B (en) Recommendation method and device for multimedia resources, electronic equipment and storage medium
CN111309940A (en) Information display method, system, device, electronic equipment and storage medium
CN111597446B (en) Content pushing method and device based on artificial intelligence, server and storage medium
CN110532351A (en) Recommend word methods of exhibiting, device, equipment and computer readable storage medium
CN102934113A (en) Information provision system, information provision method, information provision device, program, and information recording medium
CN113569129A (en) Click rate prediction model processing method, content recommendation method, device and equipment
CN111831924A (en) Content recommendation method, device, equipment and readable storage medium
CN115659008B (en) Information pushing system, method, electronic equipment and medium for big data information feedback
CN111711869A (en) Label data processing method and device and computer readable storage medium
CN111259257A (en) Information display method, system, device, electronic equipment and storage medium
CN109961351A (en) Information recommendation method, device, storage medium and computer equipment
KR20050050016A (en) On-line advertising system and method
CN112862567A (en) Exhibit recommendation method and system for online exhibition
CN114862480A (en) Advertisement putting orientation method and its device, equipment, medium and product
CN113641855A (en) Video recommendation method, device, equipment and storage medium
CN116975426A (en) Service data processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination