CN112861963A - Method, device and storage medium for training entity feature extraction model


Publication number
CN112861963A
CN112861963A (application CN202110159018.6A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110159018.6A
Other languages
Chinese (zh)
Inventor
张卓
王立平
齐裕
程佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202110159018.6A priority Critical patent/CN112861963A/en
Publication of CN112861963A publication Critical patent/CN112861963A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/901 Indexing; Data structures therefor; Storage structures
    • G06F 16/9024 Graphs; Linked lists
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06F 18/24 Classification techniques


Abstract

The application discloses a method, an apparatus, and a storage medium for training an entity feature extraction model, and belongs to the field of internet technologies. The method comprises the following steps: determining a target entity display scene to which the entity feature extraction model to be trained belongs; determining a target classification attribute corresponding to the target entity display scene based on a pre-stored correspondence between entity display scenes and classification attributes, and classifying the entities based on the target classification attribute, where entities of the same class share the same attribute value of the target classification attribute; determining a target sample entity, determining a positive sample entity corresponding to the target sample entity, and determining, among the same-class entities of the target sample entity, a negative sample entity corresponding to the target sample entity; and training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity. By adopting the method and the apparatus, training of the entity feature extraction model is improved and a better training effect is achieved.

Description

Method, device and storage medium for training entity feature extraction model
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, and a storage medium for training an entity feature extraction model.
Background
With the rapid development of internet technology, internet applications permeate people's daily life, work, and study. In an internet application, the server generally has an information push function: the server pushes information to a terminal so that the terminal presents the corresponding information to a user. The pushed information may be event notification information, activity information, entity promotion information, and the like. For entity promotion information, an entity may be a user account, a merchant account, and so on. For example, the server may send the display information of other user accounts to a user account as friend recommendations, or send the display information of a merchant account to a user account as consumption recommendations.
When the server promotes entities to a target entity (a user account), the server may first extract features of each entity based on an entity feature extraction model to obtain feature information. Then, based on the feature information of the target entity and the feature information of other entities, the server searches for entities whose feature information has a high matching degree with that of the target entity and uses them as entities to be displayed, thereby promoting them. Here, the matching degree of the feature information reflects the degree of correlation between entities.
The premise of this promotion process is that the entity feature extraction model must be trained, and the training process requires a large number of sample entities. To obtain sample entities, graph data corresponding to the entities may first be obtained. The graph data includes a large number of nodes, and connecting edges exist between some of them. A node represents an entity, which may be a user account or a merchant account, and a connecting edge represents a specified association between entities; for example, when a user account accesses a merchant account, a connecting edge is set between the nodes corresponding to the user account and the merchant account. After the graph data is obtained, a first node corresponding to a user account is selected in the graph data, and then a second node having a connecting edge with the first node is selected, where the second node corresponds to a merchant account. The user account and the merchant account are used as sample entities. Since their nodes are joined by a connecting edge, the user account has an access relation with the merchant account, and the matching degree of their feature information is high, so the entity feature extraction model can be trained based on these sample entities.
In the course of implementing the present application, the inventors found that the related art has at least the following problems:
in the training process, the sample entities selected based on the graph data are all positive samples, that is, a strong correlation exists between the two sample entities, but negative samples with weak correlation are absent. This impairs the training effect on the entity feature extraction model, so that the matching degree of the extracted feature information cannot accurately reflect the correlation between entities.
Disclosure of Invention
The embodiment of the application provides a method, a device and a storage medium for training an entity feature extraction model, which can solve the problems that the training effect of the entity feature extraction model is influenced, and the matching degree of extracted feature information cannot accurately reflect the correlation degree between entities. The technical scheme is as follows:
In a first aspect, a method for training an entity feature extraction model is provided, the method including:
determining a target entity display scene to which an entity feature extraction model to be trained belongs;
determining a target classification attribute corresponding to a target entity display scene based on a corresponding relation between a pre-stored entity display scene and a classification attribute, and classifying each entity based on the target classification attribute, wherein the attribute values of the target classification attributes of similar entities are the same;
determining a target sample entity, determining a positive sample entity corresponding to the target sample entity, and determining a negative sample entity corresponding to the target sample entity in homogeneous entities of the target sample entity;
training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity.
Optionally, the determining a positive sample entity corresponding to the target sample entity includes:
acquiring graph data corresponding to the target sample entity;
and determining, in the graph data, a second node having a connecting edge with the first node corresponding to the target sample entity, and determining that the entity corresponding to the second node is a positive sample entity corresponding to the target sample entity.
Optionally, the determining, in the same type entity of the target sample entity, a negative sample entity corresponding to the target sample entity includes:
and determining, among the same-class entities of the target sample entity, a preset number of entities whose corresponding third nodes in the graph data have no connecting edge with the first node, as negative sample entities corresponding to the target sample entity.
Optionally, the determining, among the same-class entities of the target sample entity, a preset number of entities whose corresponding third nodes have no connecting edge with the first node, as negative sample entities corresponding to the target sample entity, includes:
randomly selecting entities from the same-class entities of the target sample entity; each time an entity is selected, determining in the graph data whether a connecting edge exists between the third node corresponding to the currently selected entity and the first node; if no connecting edge exists between the third node and the first node, determining that the currently selected entity is a negative sample entity corresponding to the target sample entity; and ending the selection when the number of determined negative sample entities corresponding to the target sample entity reaches the preset number.
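The positive and negative sample selection described above can be sketched as follows. This is a minimal illustration with a hypothetical in-memory graph and bucket; the node names, adjacency layout, and helper names are illustrative, not taken from the patent.

```python
import random

# Toy adjacency structure for the graph data; an edge means the two
# merchants were accessed by a user in one session.
graph = {
    "merchant_a": {"merchant_b"},
    "merchant_b": {"merchant_a"},
    "merchant_c": set(),
    "merchant_d": set(),
}
# One bucket of same-class entities (e.g. all Beijing merchants).
bucket = ["merchant_a", "merchant_b", "merchant_c", "merchant_d"]

def positive_samples(target):
    """Entities whose nodes share a connecting edge with the target's node."""
    return sorted(graph[target])

def negative_samples(target, same_class, k):
    """Randomly pick up to k same-class entities with no edge to the target."""
    negatives = []
    candidates = [e for e in same_class if e != target]
    while candidates and len(negatives) < k:
        choice = random.choice(candidates)
        candidates.remove(choice)
        if choice not in graph[target]:  # no connecting edge: weak correlation
            negatives.append(choice)
    return negatives
```

With this toy graph, `positive_samples("merchant_a")` returns the session neighbour `merchant_b`, while `negative_samples` skips any randomly drawn candidate that happens to have an edge to the target, matching the "select, check edge, keep or discard" loop above.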
Optionally, the training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity includes:
obtaining attribute values of feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity;
respectively inputting attribute values of the feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity into the entity feature extraction model to obtain first feature information corresponding to the target sample entity, second feature information corresponding to the positive sample entity and third feature information corresponding to the negative sample entity;
calculating a first similarity between the second feature information and the first feature information, and calculating a second similarity between the third feature information and the first feature information;
and training the entity feature extraction model based on the first similarity and the second similarity.
Optionally, the training the entity feature extraction model based on the first similarity and the second similarity includes:
inputting the difference between the first similarity and the second similarity into a loss function to obtain an adjustment value for each parameter to be adjusted in the entity feature extraction model;
and adjusting the value of each parameter to be adjusted in the entity feature extraction model based on its adjustment value.
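The training steps above (feature extraction, the two similarities, and adjusting parameters from the similarity difference) can be sketched as one toy training loop. This is an illustrative stand-in, not the patent's model: the "model" is a small random matrix, the loss is a hinge on the similarity difference (the patent does not name a specific loss function), and a numerical gradient stands in for backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))  # toy parameters of the "model"

def extract(attrs, weights):
    """Map an entity's attribute-value vector to unit-norm feature information."""
    v = weights.T @ attrs
    return v / np.linalg.norm(v)

def loss_fn(weights, target, pos, neg, margin=0.5):
    """Hinge loss on the difference between the first and second similarity."""
    f_t = extract(target, weights)
    first_sim = f_t @ extract(pos, weights)    # target vs. positive sample
    second_sim = f_t @ extract(neg, weights)   # target vs. negative sample
    return float(max(0.0, margin - (first_sim - second_sim)))

def train_step(weights, target, pos, neg, lr=0.2, eps=1e-5):
    """Estimate per-parameter adjustment values numerically and apply them."""
    base = loss_fn(weights, target, pos, neg)
    grad = np.zeros_like(weights)
    for idx in np.ndindex(weights.shape):
        bumped = weights.copy()
        bumped[idx] += eps
        grad[idx] = (loss_fn(bumped, target, pos, neg) - base) / eps
    return weights - lr * grad, base

target = np.array([1.0, 0.2, 0.0, 0.5])   # attribute values (made up)
pos    = np.array([0.9, 0.3, 0.1, 0.4])   # positive: similar attributes
neg    = np.array([0.0, 1.0, 0.8, 0.0])   # negative: dissimilar attributes

losses = []
for _ in range(50):
    W, loss = train_step(W, target, pos, neg)
    losses.append(loss)
```

The hinge is driven toward zero, i.e. toward making the first similarity exceed the second by the margin, which is exactly the separation between strong and weak correlation that the negative samples are introduced to enforce.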
Optionally, after the training of the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity, the method further includes:
and respectively inputting the attribute values of the feature reference attributes of a plurality of entities in the database into the trained entity feature extraction model corresponding to the target entity display scene to obtain the feature information corresponding to each entity.
Optionally, the method further includes:
receiving an entity display request sent by a first entity, wherein an entity display scene corresponding to the entity display request is a target entity display scene, and the first entity is a user account;
acquiring fourth characteristic information corresponding to the first entity;
acquiring a plurality of second entities, wherein the second entities are merchant accounts;
acquiring fifth characteristic information corresponding to each second entity;
selecting a second entity, of which the similarity of the corresponding fifth characteristic information and the fourth characteristic information meets a preset condition, from the plurality of second entities as an entity to be displayed;
and sending the display information of the entity to be displayed to the first entity.
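The serving flow above can be sketched in a few lines. This is a hypothetical example: the cosine similarity and the 0.8 threshold are illustrative choices for the "preset condition", and the feature vectors are made up.

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def entities_to_display(user_features, merchant_features, threshold=0.8):
    """Second entities whose fifth feature information is similar enough to
    the first entity's fourth feature information."""
    return [merchant for merchant, features in merchant_features.items()
            if cosine(user_features, features) >= threshold]

user = [0.6, 0.8]                      # "fourth feature information"
merchants = {
    "merchant_a": [0.5, 0.9],          # similar direction: displayed
    "merchant_b": [-0.7, 0.1],         # dissimilar: filtered out
}
```

In production the per-entity feature vectors would come precomputed from the trained model (as described in the step that stores feature information for each entity in the database), and only the similarity comparison would run at request time.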
In a second aspect, an apparatus for training an entity feature extraction model is provided, the apparatus comprising:
the scene determining module is used for determining a target entity display scene to which the entity feature extraction model to be trained belongs;
the classification module is used for determining a target classification attribute corresponding to the target entity display scene based on a pre-stored corresponding relation between the entity display scene and the classification attribute, and classifying the entities based on the target classification attribute, wherein the attribute values of the target classification attributes of the similar entities are the same;
the system comprises a sample determining module, a positive sample entity and a negative sample entity, wherein the sample determining module is used for determining a target sample entity, determining the positive sample entity corresponding to the target sample entity, and determining the negative sample entity corresponding to the target sample entity in the homogeneous entity of the target sample entity;
a training module for training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity.
Optionally, the sample determining module is configured to:
acquiring graph data corresponding to the target sample entity;
and determining, in the graph data, a second node having a connecting edge with the first node corresponding to the target sample entity, and determining that the entity corresponding to the second node is a positive sample entity corresponding to the target sample entity.
Optionally, the sample determining module is configured to:
and determining, among the same-class entities of the target sample entity, a preset number of entities whose corresponding third nodes in the graph data have no connecting edge with the first node, as negative sample entities corresponding to the target sample entity.
Optionally, the sample determining module is configured to:
and randomly selecting entities from the same-class entities of the target sample entity; each time an entity is selected, determining in the graph data whether a connecting edge exists between the third node corresponding to the currently selected entity and the first node; if no connecting edge exists between the third node and the first node, determining that the currently selected entity is a negative sample entity corresponding to the target sample entity; and ending the selection when the number of determined negative sample entities corresponding to the target sample entity reaches the preset number.
Optionally, the training module is configured to:
obtaining attribute values of feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity;
respectively inputting attribute values of the feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity into the entity feature extraction model to obtain first feature information corresponding to the target sample entity, second feature information corresponding to the positive sample entity and third feature information corresponding to the negative sample entity;
calculating a first similarity between the second feature information and the first feature information, and calculating a second similarity between the third feature information and the first feature information;
and training the entity feature extraction model based on the first similarity and the second similarity.
Optionally, the training module is configured to:
inputting the difference between the first similarity and the second similarity into a loss function to obtain an adjustment value for each parameter to be adjusted in the entity feature extraction model;
and adjusting the value of each parameter to be adjusted in the entity feature extraction model based on its adjustment value.
Optionally, the apparatus further includes an extraction module, configured to:
and respectively inputting the attribute values of the feature reference attributes of a plurality of entities in the database into the trained entity feature extraction model corresponding to the target entity display scene to obtain the feature information corresponding to each entity.
Optionally, the apparatus further comprises a display module, configured to:
receiving an entity display request sent by a first entity, wherein an entity display scene corresponding to the entity display request is a target entity display scene, and the first entity is a user account;
acquiring fourth characteristic information corresponding to the first entity;
acquiring a plurality of second entities, wherein the second entities are merchant accounts;
acquiring fifth characteristic information corresponding to each second entity;
selecting a second entity, of which the similarity of the corresponding fifth characteristic information and the fourth characteristic information meets a preset condition, from the plurality of second entities as an entity to be displayed;
and sending the display information of the entity to be displayed to the first entity.
In a third aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the operations performed by the method for training an entity feature extraction model according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the method for training an entity feature extraction model according to the first aspect.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
in the embodiment of the application, the classification attributes are determined according to the entity display scene to which the entity feature extraction model belongs, then the entities are classified, then the positive sample entities are selected according to the target sample entities, the negative sample entities are selected from the similar entities, and then the entity feature extraction model is trained. Therefore, when the entity feature extraction model is trained, the positive sample entity and the negative sample entity are adopted, the negative sample entity and the target sample entity belong to the same classification, and weak correlation exists, so that the entity feature extraction model is favorably trained, and compared with the method only adopting the target sample entity and the corresponding positive sample entity, the method has a better training effect, and the matching degree of the extracted feature information can more accurately reflect the correlation degree between the entities.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a flowchart of a method for training an entity feature extraction model according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a bucketing process provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a process for setting an entity identifier according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a process for determining negative sample entities according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of a method for training an entity feature extraction model according to an embodiment of the present disclosure;
FIG. 6 is a flow chart of a method for displaying information provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for training an entity feature extraction model according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a method for training an entity feature extraction model, and an execution subject of the method can be a server. The server may be a background server of an application program, the application program may be an application program with an information pushing function, and the application program may be a consumer application program and the like. The server may be a single server or a server group, and if the server is a single server, the server may be responsible for all processing in the following scheme, and if the server is a server group, different servers in the server group may be respectively responsible for different processing in the following scheme, and the specific processing allocation condition may be arbitrarily set by a technician according to actual needs, and is not described herein again.
The server may include components such as a processor, memory, and communication components. The processor is respectively connected with the memory and the communication component.
The processor may be a Central Processing Unit (CPU). The processor may be configured to determine an entity feature extraction model to be trained, may be configured to classify a large number of entities, may be configured to select a target sample entity, and select a positive sample entity and a negative sample entity corresponding to the target sample entity, and is configured to train the entity feature extraction model to be trained based on the selected sample entity, and so on.
The Memory may include a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic disk, an optical data storage device, and the like. The memory can be used for storing data, generated intermediate data, generated result data and the like which need to be prestored in the training and using processes of the entity feature extraction model, such as a large amount of related attribute information of entities, model parameters before and after the adjustment of the entity feature extraction model, target sample entities, positive sample entities, negative sample entities and the like.
The communication means may be a wired network connector, a WiFi (Wireless Fidelity) module, a bluetooth module, a cellular network communication module, etc. The communication component may be used for data transmission with other devices, and the other devices may be other servers, terminals, and the like. For example, the communication component may receive a search request sent by the terminal.
The entities in the embodiments of the application may be user accounts and merchant accounts. The graph data may be established only for merchant accounts, so that each node in the graph data corresponds to one merchant account, and a connecting edge indicates that the two merchant accounts corresponding to the two connected nodes were accessed by a user within one session. The graph data may also be established for both merchant accounts and user accounts, so that the graph data includes nodes corresponding to merchant accounts and nodes corresponding to user accounts; a connecting edge between two merchant-account nodes (merchant nodes, for short) has the same meaning as above, and a connecting edge between a merchant node and a user-account node (user node, for short) indicates that the user account has accessed the merchant account.
Of course, there may be other possibilities for the entity, such as a commodity, and the corresponding graph data may also include a node corresponding to the commodity. In this embodiment, the entity is taken as an example of a user account and a merchant account, and the node in the graph data includes a merchant node and a user node, which are taken as examples, to perform detailed description of the scheme, and other cases are similar to these, and this embodiment is not described again.
It should be noted that the "classification" mentioned in the embodiments of the present application may also be referred to as "bucketing".
Fig. 1 is a flowchart of a method for training an entity feature extraction model according to an embodiment of the present application, where the method includes:
101, determining a target entity display scene to which an entity feature extraction model to be trained belongs.
The entity feature extraction model to be trained may be an initial entity feature extraction model, that is, an entity feature extraction model that has not been trained yet, or may be an entity feature extraction model that has been trained several times.
An entity display scene, which may also be called a business scenario, is an application scene in which entities are displayed within the application program. For example, a comprehensive consumer application may have merchant detail page scenes, local life scenes, search advertisement scenes, and the like. The merchant detail page scene is a scene in which entities are displayed in the display window at the lower part of a merchant detail page. The local life scene is a scene in which entities are displayed in the "guess you like" window at the lower part of the application's home page. The search advertisement scene is a scene in which entities are shown in a search result window. The entities shown may be merchant accounts, but may also be other possible entities. Only some entity display scenes are listed above; there are many other possibilities, which the embodiments of the present application do not describe one by one.
In implementation, technicians may respectively establish entity feature extraction models for different entity display scenes, and the entity feature extraction models established for different entity display scenes may have different structures and employ different algorithms, or may have the same structures and employ the same algorithms. Thus, an initial entity feature extraction model corresponding to each entity display scene is obtained, and then the initial entity feature extraction model of each entity display scene can be trained. A technician may select an entity feature extraction model to be trained, and the server may determine an entity display scene corresponding to the entity feature extraction model, that is, a target entity display scene.
And 102, determining a target classification attribute corresponding to the target entity display scene based on the corresponding relationship between the entity display scene and the classification attribute which are stored in advance.
The classification attribute is an attribute referred to when classifying the entity, such as an attribute of a city where the entity is located, an attribute of a category, and the like. The attribute value of the city attribute can be Beijing, Tianjin, Shanghai, Nanjing, etc., and the attribute value of the category attribute can be gourmet, take-out, hotel, movie, etc.
In implementation, a technician may set classification attributes for the various entity display scenes, taking into account the display requirements and display characteristics of each scene. For example, some entity display scenes are suited to pushing by region, and others to pushing by category. The classification attribute corresponding to the local life scene may be the city attribute, because the demands of a user browsing the "guess you like" window may be various, as long as the entity is in the same city. The classification attribute corresponding to the search advertisement scene may be the category attribute, because a user who searches has a clear requirement and certainly wants to find entities of a certain category. The classification attribute corresponding to the merchant detail page scene may also be the category attribute, because a user who is browsing the page of a certain merchant may want to browse entities of the same category.
Further, a correspondence table between the entity display scenario and the classification attribute may be established, and stored in the database as shown in table 1.
TABLE 1
Entity display scenario    | Classification attribute
Local life scene           | City attribute
Search advertisement scene | Category attribute
Merchant detail page scene | Category attribute
……                         | ……
After the target entity display scene to which the entity feature extraction model to be trained belongs is determined, the target classification attribute corresponding to the target entity display scene can be searched in the corresponding relation table.
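The lookup in Table 1 amounts to a simple mapping from scene to attribute. A minimal sketch, with illustrative scene and attribute names taken from the table above:

```python
# Hypothetical correspondence table between entity display scenes and
# classification attributes, mirroring Table 1; names are illustrative.
SCENE_TO_ATTRIBUTE = {
    "local life scene": "city attribute",
    "search advertisement scene": "category attribute",
    "merchant detail page scene": "category attribute",
}

def target_classification_attribute(scene):
    # Look up the target classification attribute for a display scene.
    return SCENE_TO_ATTRIBUTE[scene]

attr = target_classification_attribute("local life scene")
```

In practice the correspondence would be stored in a database table rather than an in-memory dictionary.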
And 103, classifying the entities based on the target classification attribute.
Here, entities of the same class have the same attribute value for the target classification attribute.
In practice, the classification referred to here may also be called bucketing. The server may retrieve, from the database, the attribute values of the target classification attribute of the entities corresponding to the graph data, and then bucket the entities based on the retrieved values, placing entities with the same attribute value into the same bucket, as shown in fig. 2. For example, if the target classification attribute is the city attribute, the server may retrieve the attribute values of the city attribute of each merchant account, such as Beijing, Tianjin, Shanghai, and so on, and allocate the merchant accounts whose attribute value is Beijing to one bucket, those whose value is Tianjin to another bucket, those whose value is Shanghai to another bucket, and so on. If the entities include both user accounts and merchant accounts, the two are bucketed separately, i.e., a user account and a merchant account are never placed in the same bucket.
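The bucketing step can be sketched as grouping entities by the attribute value of the target classification attribute. A minimal sketch, assuming a `city` field as the classification attribute (the merchant data below is illustrative):

```python
from collections import defaultdict

def bucket_entities(entities):
    """Group entities into buckets keyed by the attribute value of the
    target classification attribute (here, the city attribute)."""
    buckets = defaultdict(list)
    for entity in entities:
        buckets[entity["city"]].append(entity["name"])
    return dict(buckets)

merchants = [
    {"name": "m1", "city": "Beijing"},
    {"name": "m2", "city": "Tianjin"},
    {"name": "m3", "city": "Beijing"},
]
buckets = bucket_entities(merchants)
```

User accounts and merchant accounts would be passed through this grouping separately, so that the two entity types never share a bucket.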
A bucket identifier (bucket_id) may be set for each resulting bucket, and an intra-bucket number (index_in_bucket) may be set for each entity in the bucket. For example, a 64-bit binary number may be used, where (counting from the low bit) bits 0 to 47 encode the number in the bucket and bits 48 to 63 encode the bucket identifier, as shown in fig. 3. Each such binary code maps to a 64-bit LONG value, which may be used as the entity identifier.
In addition, the server may record the number of entities (bucket_size) contained in each bucket and store it in correspondence with the bucket identifier, for example in a key-value pair format such as <bucket_id, bucket_size>.
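The 64-bit encoding described above can be expressed with two shift operations. A minimal sketch of packing and unpacking an entity identifier, following the bit layout in the text (bits 0-47 for index_in_bucket, bits 48-63 for bucket_id):

```python
BUCKET_BITS = 48  # bits 0-47 hold index_in_bucket, bits 48-63 hold bucket_id

def make_entity_id(bucket_id, index_in_bucket):
    # Pack bucket_id into the high 16 bits and index_in_bucket into the low 48.
    assert 0 <= bucket_id < (1 << 16)
    assert 0 <= index_in_bucket < (1 << BUCKET_BITS)
    return (bucket_id << BUCKET_BITS) | index_in_bucket

def split_entity_id(entity_id):
    # Recover (bucket_id, index_in_bucket) from a 64-bit entity identifier.
    return entity_id >> BUCKET_BITS, entity_id & ((1 << BUCKET_BITS) - 1)

entity_id = make_entity_id(3, 1024)
```

The resulting integer fits in a 64-bit LONG value, so it can serve directly as the entity identifier.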
And 104, determining a target sample entity.
In practice, one or more entities may be randomly selected as sample entities from a large number of entities, or selected as sample entities based on certain rules or conditions. Each sample entity may be referred to as a target sample entity for which subsequent steps of processing are performed.
And 105, determining a positive sample entity corresponding to the target sample entity.
A positive sample entity is an entity with a strong correlation to the target sample entity; such a strong correlation generally manifests as a connecting edge between the positive sample entity and the target sample entity in the graph data.
In an implementation, the server may determine, in the graph data, a second node having a connecting edge to the first node corresponding to the target sample entity, and take the entity corresponding to the second node as a positive sample entity corresponding to the target sample entity. If the target sample entity is a user account, the found positive sample entity may be a merchant account that the user account has accessed; if the target sample entity is a merchant account, the positive sample entity may be another merchant account that some user account accessed in the same session as the target merchant account. In either case, a strong correlation exists between the positive sample entity and the target sample entity. One or more positive sample entities may be selected for one target sample entity, and the number selected may be preset.
A session is explained as follows: when a user opens the application program and the application interacts with the server, a session between the two is first established; the session is terminated after the user has been inactive for a long time, and a new session is established when the user operates the application again.
And 106, determining a negative sample entity corresponding to the target sample entity in the homogeneous entity of the target sample entity.
A negative sample entity is an entity with a weak correlation to the target sample entity. Weak correlation does not mean no correlation: it means the negative sample entity and the target sample entity have some correlation, but not a strong one. In the embodiment of the present application, weak correlation may be taken to mean that the two belong to the same class but have no connecting edge between them in the graph data; belonging to the same class indicates some correlation, while the absence of a connecting edge indicates the correlation is not strong.
In implementation, the server determines, among the entities of the same class as the target sample entity, a preset number of entities whose corresponding third nodes have no connecting edge to the first node in the graph data, as the negative sample entities corresponding to the target sample entity. A preset number of negative sample entities may be selected for one target sample entity, and this number may be the same as or different from the number of positive sample entities.
There are various ways to select negative sample entities among the entities of the same class as the target sample entity; one practical way is as follows:
randomly select entities from the same class as the target sample entity. Each time an entity is selected, determine whether a connecting edge exists in the graph data between the third node corresponding to the currently selected entity and the first node. If no connecting edge exists, take the currently selected entity as a negative sample entity corresponding to the target sample entity. Stop selecting once the number of negative sample entities reaches the preset number.
Based on the 64-bit encoding example above, the method of selecting negative sample entities is further described, as shown in fig. 4. After a target sample entity is determined, its entity identifier may be obtained and the bucket identifier extracted from it. The entity count bucket_size corresponding to that bucket identifier is then queried through <bucket_id, bucket_size>, and random numbers in the range 0 to bucket_size-1 are generated one by one. Each random number is used as an index_in_bucket and combined with the bucket_id to form a candidate entity identifier. If the candidate identifier equals the entity identifier of the target sample entity, it is discarded and the next random number is drawn. Otherwise, it is determined whether a connecting edge exists in the graph data between the third node corresponding to the candidate identifier and the first node: if an edge exists, the candidate is discarded and the next random number is drawn; if no edge exists, the entity corresponding to the candidate identifier is taken as a negative sample entity, and the next random number is processed. The process stops when the number of negative sample entities reaches the preset number.
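The sampling loop above can be sketched as follows. This is a minimal sketch, assuming the 64-bit layout described earlier; the `has_edge` callback stands in for the graph-data edge lookup and is hypothetical, and the sketch omits the real-world safeguard against a bucket with too few eligible candidates:

```python
import random

BUCKET_BITS = 48  # bits 0-47 hold index_in_bucket, bits 48-63 hold bucket_id

def sample_negatives(target_id, bucket_sizes, has_edge, preset_number, rng=None):
    """Draw `preset_number` negative-sample entity ids from the target's own
    bucket, skipping the target itself and any candidate that has a
    connecting edge to it in the graph data."""
    rng = rng or random.Random()
    bucket_id = target_id >> BUCKET_BITS
    size = bucket_sizes[bucket_id]  # looked up via the <bucket_id, bucket_size> pair
    negatives = set()
    while len(negatives) < preset_number:
        index_in_bucket = rng.randrange(size)  # random number in 0..size-1
        candidate = (bucket_id << BUCKET_BITS) | index_in_bucket
        if candidate == target_id or has_edge(target_id, candidate):
            continue  # discard and draw the next random number
        negatives.add(candidate)
    return negatives

# Toy run: bucket 0 holds 10 entities, no edges exist, target id is 5.
negatives = sample_negatives(5, {0: 10}, lambda a, b: False, 3)
```

Sampling only within the target's bucket is what guarantees that every negative sample belongs to the same class as the target.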
And 107, training the entity feature extraction model based on the target sample entity, the positive sample entity and the negative sample entity.
In implementation, a target sample entity, a positive sample entity, and a negative sample entity may form one group of samples, and the entity feature extraction model may be trained on each group. For example, for target sample entity A with two positive samples B1 and B2 and two negative samples C1 and C2, the groups (A, B1, C1), (A, B1, C2), (A, B2, C1) and (A, B2, C2) may be formed, giving 4 groups of samples in total and allowing 4 rounds of training.
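The grouping above is the Cartesian product of the positive and negative samples paired with the target. A minimal sketch:

```python
from itertools import product

def make_sample_groups(target, positives, negatives):
    """Combine the target sample entity with every (positive, negative)
    pair to form the training sample groups."""
    return [(target, p, n) for p, n in product(positives, negatives)]

# The A/B1/B2/C1/C2 example from the text yields 4 groups.
groups = make_sample_groups("A", ["B1", "B2"], ["C1", "C2"])
```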
Referring to the flow shown in fig. 5, a specific training process is described below, and the training process may include the following steps:
and 501, acquiring attribute values of the characteristic reference attributes of the target sample entity, the positive sample entity and the negative sample entity.
A feature reference attribute is an attribute used for extracting features, one that clearly reflects the characteristics of an entity, such as the category, dishes, and per-capita price of a merchant account, or the favorite category, age, and gender of a user account; attribute values of the category attribute may be western food, hot pot, barbecue, and the like. The feature reference attributes may be preset by technicians.
502, respectively inputting the attribute values of the feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity into the entity feature extraction model to obtain first feature information corresponding to the target sample entity, second feature information corresponding to the positive sample entity and third feature information corresponding to the negative sample entity.
The feature information may be a feature vector.
A first similarity between the second feature information and the first feature information is calculated 503, and a second similarity between the third feature information and the first feature information is calculated.
The similarity between different feature information may be a vector distance, and specifically may be a euclidean distance.
And 504, training the entity feature extraction model based on the first similarity and the second similarity.
In implementation, the server first inputs the difference between the first similarity and the second similarity into a loss function to obtain an adjustment value for each parameter to be adjusted in the entity feature extraction model; the larger the difference between the first similarity and the second similarity, the better. The server then adjusts each parameter to be adjusted based on its adjustment value.
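Steps 503-504 resemble a margin-based triplet objective. A minimal sketch under two stated assumptions that go beyond the text: similarity is taken as the negative Euclidean distance (so that smaller distance means higher similarity), and the loss is a hinge on the similarity difference with a hypothetical `margin` parameter:

```python
import numpy as np

def similarity(u, v):
    # Negative Euclidean distance: closer feature vectors give higher similarity.
    return -np.linalg.norm(u - v)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Penalize the model unless the first similarity (anchor vs. positive)
    exceeds the second similarity (anchor vs. negative) by at least `margin` --
    i.e. the larger the difference, the better."""
    first = similarity(anchor, positive)
    second = similarity(anchor, negative)
    return max(0.0, margin - (first - second))

anchor = np.zeros(2)                # first feature information
positive = np.array([0.1, 0.0])     # second feature information, near the anchor
negative = np.array([5.0, 0.0])     # third feature information, far from the anchor
easy_loss = triplet_loss(anchor, positive, negative)  # well-separated triplet
hard_loss = triplet_loss(anchor, negative, positive)  # roles swapped, violates margin
```

In the actual training step the adjustment values for each parameter would be obtained by backpropagating this loss, which the sketch does not model.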
Over the whole model training process, the above training procedure may be executed repeatedly on thousands of groups of samples until the model meets the training end condition, for example the number of training iterations reaching, or the model accuracy exceeding, a preset threshold.
The embodiment of the present application further provides a method for displaying information, referring to fig. 6, the method may include the following steps:
601, respectively inputting attribute values of feature reference attributes of a plurality of entities in the database into the trained entity feature extraction model corresponding to the target entity display scene to obtain feature information corresponding to each entity.
In this embodiment, the server may periodically execute step 601 to update the feature information of the entities in the entire database, or may execute step 601 to update the feature information of an entity whenever the attribute value of any feature reference attribute of that entity is updated.
And 602, receiving an entity display request sent by a first entity.
The entity display scene corresponding to the entity display request is a target entity display scene, and the first entity is a user account.
In implementation, many user operations on the application program may trigger it to send an entity display request to the server. For example, when a user enters or refreshes the home page, the application may be triggered to send an entity display request for the "guess you like" window; when a user issues a search request through the search interface, the search request may itself be regarded as an entity display request; and when a user opens a merchant detail page, the application may be triggered to send an entity display request for the "merchant detail page" to the server.
603, obtaining fourth characteristic information corresponding to the first entity.
604, a plurality of second entities is obtained.
Wherein the second entity is a merchant account.
605, acquiring fifth feature information corresponding to each second entity.
Here, the above feature information may be a feature vector.
And 606, selecting the second entity with the similarity of the corresponding fifth characteristic information and the fourth characteristic information meeting the preset condition from the plurality of second entities as the entity to be displayed.
The similarity between the feature information may be a vector distance, and specifically may be an euclidean distance.
In implementation, the similarity between each piece of fifth feature information and the fourth feature information may be calculated, and the second entity corresponding to the fifth feature information with the highest similarity, or the second entities whose similarity is greater than a preset threshold, may be taken as the entities to be displayed.
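The threshold-based selection in step 606 can be sketched as follows, again taking similarity as the negative Euclidean distance (an assumption, consistent with the sketch of the training loss; the feature vectors below are illustrative):

```python
import numpy as np

def select_entities_to_display(query_vec, candidate_vecs, threshold):
    """Keep the indices of second entities whose similarity (negative
    Euclidean distance) to the first entity's feature vector exceeds
    the preset threshold."""
    distances = np.linalg.norm(candidate_vecs - query_vec, axis=1)
    return [i for i, d in enumerate(distances) if -d > threshold]

query = np.zeros(2)                 # fourth feature information (first entity)
candidates = np.array([[0.1, 0.0],  # fifth feature information (second entities)
                       [3.0, 0.0],
                       [0.2, 0.0]])
selected = select_entities_to_display(query, candidates, threshold=-0.5)
```

Taking only the single most similar candidate instead of thresholding would amount to an `argmin` over the distance array.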
607, sending the display information of the entity to be displayed to the first entity.
The entity to be displayed may be a merchant account, and the display information may include the cover (a picture or video) of the entity to be displayed, the merchant name, its score, its distance from the first entity, its sales volume, and the like.
In implementation, the server sends the display information of the entity to be displayed to the terminal on which the first entity is logged in, and the terminal may display it in the corresponding window of the application program.
In the embodiment of the application, the classification attribute is determined according to the entity display scene to which the entity feature extraction model belongs, the entities are classified accordingly, positive sample entities and negative sample entities are then selected for the target sample entity (the negative sample entities from among entities of the same class), and the entity feature extraction model is trained. Because the negative sample entity belongs to the same class as the target sample entity and is weakly correlated with it, training with both positive and negative sample entities is conducive to training the model: compared with using only the target sample entity and its positive sample entities, it yields a better training effect, and the matching degree of the extracted feature information can more accurately reflect the degree of correlation between entities.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
An embodiment of the present application provides an apparatus for training an entity feature extraction model, where the apparatus may be the server described above, and as shown in fig. 7, the apparatus includes:
a scene determining module 710, configured to determine a target entity display scene to which the entity feature extraction model to be trained belongs;
a classification module 720, configured to determine a target classification attribute corresponding to a target entity display scene based on a pre-stored correspondence between the entity display scene and the classification attribute, and classify each entity based on the target classification attribute, where attribute values of the target classification attributes of similar entities are the same;
a sample determining module 730, configured to determine a target sample entity, determine a positive sample entity corresponding to the target sample entity, and determine a negative sample entity corresponding to the target sample entity in a homogeneous entity of the target sample entity;
a training module 740, configured to train the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity.
Optionally, the sample determining module 730 is configured to:
acquiring graph data corresponding to the target sample entity;
and in the graph data, determining a second node with a connecting edge between first nodes corresponding to the target sample entity, and determining that an entity corresponding to the second node is a positive sample entity corresponding to the target sample entity.
Optionally, the sample determining module 730 is configured to:
and determining a preset number of entities without connecting edges between a third node and the first node corresponding to the graph data in the same type entities of the target sample entities as negative sample entities corresponding to the target sample entities.
Optionally, the sample determining module 730 is configured to:
and randomly selecting entities from the similar entities of the target sample entity, determining whether a connecting edge exists between a third node corresponding to the currently selected entity and the first node or not in the graph data every time one entity is selected, if the connecting edge does not exist between the third node and the first node, determining that the currently selected entity is a negative sample entity corresponding to the target sample entity, and finishing selecting the entities until the determined negative sample entity corresponding to the target sample entity reaches a preset number.
Optionally, the training module 740 is configured to:
obtaining attribute values of feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity;
respectively inputting attribute values of the feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity into the entity feature extraction model to obtain first feature information corresponding to the target sample entity, second feature information corresponding to the positive sample entity and third feature information corresponding to the negative sample entity;
calculating a first similarity between the second feature information and the first feature information, and calculating a second similarity between the third feature information and the first feature information;
and training the entity feature extraction model based on the first similarity and the second similarity.
Optionally, the training module 740 is configured to:
inputting the difference value of the first similarity and the second similarity into a loss function to obtain an adjusting value of each parameter to be adjusted in the entity characteristic extraction model;
and carrying out numerical value adjustment on each parameter to be adjusted in the entity characteristic extraction model based on the adjustment value of each parameter to be adjusted.
Optionally, the apparatus further includes an extraction module, configured to:
and respectively inputting the attribute values of the feature reference attributes of a plurality of entities in the database into the trained entity feature extraction model corresponding to the target entity display scene to obtain the feature information corresponding to each entity.
Optionally, the apparatus further comprises a display module, configured to:
receiving an entity display request sent by a first entity, wherein an entity display scene corresponding to the entity display request is a target entity display scene, and the first entity is a user account;
acquiring fourth characteristic information corresponding to the first entity;
acquiring a plurality of second entities, wherein the second entities are merchant accounts;
acquiring fifth characteristic information corresponding to each second entity;
selecting a second entity, of which the similarity of the corresponding fifth characteristic information and the fourth characteristic information meets a preset condition, from the plurality of second entities as an entity to be displayed;
and sending the display information of the entity to be displayed to the first entity.
In the embodiment of the application, the classification attribute is determined according to the entity display scene to which the entity feature extraction model belongs, the entities are classified accordingly, positive sample entities and negative sample entities are then selected for the target sample entity (the negative sample entities from among entities of the same class), and the entity feature extraction model is trained. Because the negative sample entity belongs to the same class as the target sample entity and is weakly correlated with it, training with both positive and negative sample entities is conducive to training the model: compared with using only the target sample entity and its positive sample entities, it yields a better training effect, and the matching degree of the extracted feature information can more accurately reflect the degree of correlation between entities.
It should be noted that: in the apparatus for training an entity feature extraction model provided in the foregoing embodiment, when the entity feature extraction model is trained, only the division of the functional modules is used for illustration, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the apparatus for training the entity feature extraction model and the method for training the entity feature extraction model provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application. The server 800 may vary considerably in configuration and performance, and may include one or more processors 801 and one or more memories 802, where the memory 802 stores at least one instruction that is loaded and executed by the processor 801 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions executable by a processor in a terminal to perform the method of training an entity feature extraction model in the above embodiments, is also provided. The computer-readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of training an entity feature extraction model, the method comprising:
determining a target entity display scene to which an entity feature extraction model to be trained belongs;
determining a target classification attribute corresponding to a target entity display scene based on a corresponding relation between a pre-stored entity display scene and a classification attribute, and classifying each entity based on the target classification attribute, wherein the attribute values of the target classification attributes of similar entities are the same;
determining a target sample entity, determining a positive sample entity corresponding to the target sample entity, and determining a negative sample entity corresponding to the target sample entity in homogeneous entities of the target sample entity;
training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity.
2. The method of claim 1, wherein the determining the positive sample entity corresponding to the target sample entity comprises:
acquiring graph data corresponding to the target sample entity;
and in the graph data, determining a second node with a connecting edge between first nodes corresponding to the target sample entity, and determining that an entity corresponding to the second node is a positive sample entity corresponding to the target sample entity.
3. The method according to claim 2, wherein the determining, among the homogeneous entities of the target sample entities, a negative sample entity corresponding to the target sample entity includes:
and determining a preset number of entities without connecting edges between a third node and the first node corresponding to the graph data in the same type entities of the target sample entities as negative sample entities corresponding to the target sample entities.
4. The method according to claim 3, wherein the determining, as the negative sample entity corresponding to the target sample entity, a preset number of entities without a connecting edge between the corresponding third node and the first node, in the homogeneous entity of the target sample entity includes:
and randomly selecting entities from the similar entities of the target sample entity, determining whether a connecting edge exists between a third node corresponding to the currently selected entity and the first node or not in the graph data every time one entity is selected, if the connecting edge does not exist between the third node and the first node, determining that the currently selected entity is a negative sample entity corresponding to the target sample entity, and finishing selecting the entities until the determined negative sample entity corresponding to the target sample entity reaches a preset number.
5. The method of claim 1, wherein training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity comprises:
obtaining attribute values of feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity;
respectively inputting attribute values of the feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity into the entity feature extraction model to obtain first feature information corresponding to the target sample entity, second feature information corresponding to the positive sample entity and third feature information corresponding to the negative sample entity;
calculating a first similarity between the second feature information and the first feature information, and calculating a second similarity between the third feature information and the first feature information;
and training the entity feature extraction model based on the first similarity and the second similarity.
6. The method of claim 5, wherein training the entity feature extraction model based on the first similarity and the second similarity comprises:
inputting the difference value of the first similarity and the second similarity into a loss function to obtain an adjusting value of each parameter to be adjusted in the entity characteristic extraction model;
and carrying out numerical value adjustment on each parameter to be adjusted in the entity characteristic extraction model based on the adjustment value of each parameter to be adjusted.
7. The method of any one of claims 1-6, wherein after training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity, further comprising:
and respectively inputting the attribute values of the feature reference attributes of a plurality of entities in the database into the trained entity feature extraction model corresponding to the target entity display scene to obtain the feature information corresponding to each entity.
8. The method of claim 7, further comprising:
receiving an entity display request sent by a first entity, wherein an entity display scene corresponding to the entity display request is a target entity display scene, and the first entity is a user account;
acquiring fourth characteristic information corresponding to the first entity;
acquiring a plurality of second entities, wherein the second entities are merchant accounts;
acquiring fifth characteristic information corresponding to each second entity;
selecting a second entity, of which the similarity of the corresponding fifth characteristic information and the fourth characteristic information meets a preset condition, from the plurality of second entities as an entity to be displayed;
and sending the display information of the entity to be displayed to the first entity.
9. An apparatus for training a feature extraction model of an entity, the apparatus comprising:
the scene determining module is used for determining a target entity display scene to which the entity feature extraction model to be trained belongs;
the classification module is used for determining a target classification attribute corresponding to the target entity display scene based on a pre-stored corresponding relation between the entity display scene and the classification attribute, and classifying the entities based on the target classification attribute, wherein the attribute values of the target classification attributes of the similar entities are the same;
the system comprises a sample determining module, a positive sample entity and a negative sample entity, wherein the sample determining module is used for determining a target sample entity, determining the positive sample entity corresponding to the target sample entity, and determining the negative sample entity corresponding to the target sample entity in the homogeneous entity of the target sample entity;
a training module for training the entity feature extraction model based on the target sample entity, the positive sample entity, and the negative sample entity.
10. The apparatus of claim 9, wherein the sample determination module is configured to:
acquiring graph data corresponding to the target sample entity;
and in the graph data, determining a second node having a connecting edge to the first node corresponding to the target sample entity, and determining the entity corresponding to the second node as a positive sample entity corresponding to the target sample entity.
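Claim 10's positive sampling amounts to reading off the graph neighbors of the target's node. A minimal sketch, assuming the graph data is an undirected edge list; the helper names are illustrative, not from the patent.

```python
def build_adjacency(edges):
    """Adjacency map built from an undirected edge list of the graph data."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def positive_samples(adjacency, target):
    # Every node sharing a connecting edge with the target's node
    # yields a positive sample entity for that target.
    return sorted(adjacency.get(target, set()))
```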
11. The apparatus of claim 10, wherein the sample determination module is configured to:
and determining, from among the same-class entities of the target sample entity, a preset number of entities whose corresponding third nodes have no connecting edge to the first node in the graph data, as negative sample entities corresponding to the target sample entity.
12. The apparatus of claim 11, wherein the sample determination module is configured to:
and randomly selecting entities from the same-class entities of the target sample entity; each time an entity is selected, determining, in the graph data, whether a connecting edge exists between a third node corresponding to the currently selected entity and the first node; if no connecting edge exists, determining the currently selected entity as a negative sample entity corresponding to the target sample entity; and stopping the selection when the number of determined negative sample entities corresponding to the target sample entity reaches the preset number.
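The procedure in claim 12 is a rejection-sampling loop: draw same-class candidates at random and keep only those with no edge to the target. A hedged sketch, assuming the adjacency-map representation used above; the seeding and candidate bookkeeping are illustrative details, not from the patent.

```python
import random

def sample_negatives(adjacency, target, same_class_entities, preset_number, seed=0):
    """Randomly pick same-class entities and keep only those whose node
    has no connecting edge to the target's node, stopping once
    `preset_number` negative samples have been collected."""
    rng = random.Random(seed)
    neighbors = adjacency.get(target, set())
    candidates = [e for e in same_class_entities if e != target]
    negatives = []
    while len(negatives) < preset_number and candidates:
        entity = candidates.pop(rng.randrange(len(candidates)))
        if entity not in neighbors:  # no connecting edge -> valid negative
            negatives.append(entity)
    return negatives
```

Note the loop also terminates when candidates run out, so fewer than `preset_number` negatives may be returned for densely connected targets.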
13. The apparatus of claim 9, wherein the training module is configured to:
obtaining attribute values of feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity;
respectively inputting attribute values of the feature reference attributes of the target sample entity, the positive sample entity and the negative sample entity into the entity feature extraction model to obtain first feature information corresponding to the target sample entity, second feature information corresponding to the positive sample entity and third feature information corresponding to the negative sample entity;
calculating a first similarity between the second feature information and the first feature information, and calculating a second similarity between the third feature information and the first feature information;
and training the entity feature extraction model based on the first similarity and the second similarity.
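The two similarities in claim 13 are the ingredients of a standard triplet-style objective: pull the positive's embedding toward the target's, push the negative's away. The patent does not name the loss, so a margin formulation with cosine similarity is assumed here as one plausible instantiation; the margin value is arbitrary.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Loss built from the two similarities of claim 13: zero once the
    first similarity (positive vs. target) exceeds the second similarity
    (negative vs. target) by at least `margin`."""
    first_similarity = cosine(anchor, positive)    # second vs. first feature info
    second_similarity = cosine(anchor, negative)   # third vs. first feature info
    return max(0.0, margin + second_similarity - first_similarity)
```

Gradients of this loss with respect to the model parameters would then drive the entity feature extraction model's training step.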
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to perform the operations performed by the method of training an entity feature extraction model according to any one of claims 1 to 8.
15. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform the operations performed by the method of training an entity feature extraction model according to any one of claims 1 to 8.
CN202110159018.6A 2021-02-04 2021-02-04 Method, device and storage medium for training entity feature extraction model Withdrawn CN112861963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110159018.6A CN112861963A (en) 2021-02-04 2021-02-04 Method, device and storage medium for training entity feature extraction model

Publications (1)

Publication Number Publication Date
CN112861963A true CN112861963A (en) 2021-05-28

Family

ID=75988991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110159018.6A Withdrawn CN112861963A (en) 2021-02-04 2021-02-04 Method, device and storage medium for training entity feature extraction model

Country Status (1)

Country Link
CN (1) CN112861963A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144974A (en) * 2019-12-04 2020-05-12 北京三快在线科技有限公司 Information display method and device
CN111339443A (en) * 2020-03-09 2020-06-26 腾讯科技(深圳)有限公司 User label determination method and device, computer equipment and storage medium
CN111368205A (en) * 2020-03-09 2020-07-03 腾讯科技(深圳)有限公司 Data recommendation method and device, computer equipment and storage medium
CN111368934A (en) * 2020-03-17 2020-07-03 腾讯科技(深圳)有限公司 Image recognition model training method, image recognition method and related device
CN111831855A (en) * 2020-07-20 2020-10-27 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and medium for matching videos
CN112069398A (en) * 2020-08-24 2020-12-11 腾讯科技(深圳)有限公司 Information pushing method and device based on graph network
CN112232384A (en) * 2020-09-27 2021-01-15 北京迈格威科技有限公司 Model training method, image feature extraction method, target detection method and device
CN112307256A (en) * 2020-10-28 2021-02-02 有半岛(北京)信息科技有限公司 Cross-domain recommendation and model training method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269262A (en) * 2021-06-02 2021-08-17 腾讯音乐娱乐科技(深圳)有限公司 Method, apparatus and storage medium for training matching degree detection model
CN113269262B (en) * 2021-06-02 2024-06-14 腾讯音乐娱乐科技(深圳)有限公司 Method, apparatus and storage medium for training matching degree detection model
CN113505256A (en) * 2021-07-02 2021-10-15 北京达佳互联信息技术有限公司 Feature extraction network training method, image processing method and device
CN113342909A (en) * 2021-08-06 2021-09-03 中科雨辰科技有限公司 Data processing system for identifying identical solid models

Similar Documents

Publication Publication Date Title
WO2020048084A1 (en) Resource recommendation method and apparatus, computer device, and computer-readable storage medium
CN112861963A (en) Method, device and storage medium for training entity feature extraction model
CN110909182B (en) Multimedia resource searching method, device, computer equipment and storage medium
WO2019237541A1 (en) Method and apparatus for determining contact label, and terminal device and medium
CN107870984A (en) The method and apparatus for identifying the intention of search term
CN104915354B (en) Multimedia file pushing method and device
WO2005116873A1 (en) Contents search system for providing reliable contents through network and method thereof
CN109582847B (en) Information processing method and device and storage medium
CN110727857A (en) Method and device for identifying key features of potential users aiming at business objects
CN108717407A (en) Entity vector determines method and device, information retrieval method and device
US11470032B2 (en) Method for recommending groups and related electronic device
CN107977678A (en) Method and apparatus for output information
CN109947944A (en) Short message display method, device and storage medium
CN110110206B (en) Method, device, computing equipment and storage medium for mining and recommending relationships among articles
CN114528474A (en) Method and device for determining recommended object, electronic equipment and storage medium
CN111241401B (en) Search request processing method and device
CN112770126A (en) Live broadcast room pushing method and device, server and storage medium
CN112287208B (en) User portrait generation method, device, electronic equipment and storage medium
CN109344327B (en) Method and apparatus for generating information
CN111401969A (en) Method, device, server and storage medium for improving user retention rate
CN110210884B (en) Method, device, computer equipment and storage medium for determining user characteristic data
CN115878874A (en) Multimodal retrieval method, device and storage medium
CN115858815A (en) Method for determining mapping information, advertisement recommendation method, device, equipment and medium
CN108306812B (en) Data processing method and server
CN113010664B (en) Data processing method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210528