CN117633360A - Multi-scene-based content recommendation method and device, electronic equipment and storage medium - Google Patents
Multi-scene-based content recommendation method and device, electronic equipment and storage medium
- Publication number
- CN117633360A (application CN202311684700.2A)
- Authority
- CN
- China
- Prior art keywords
- scene
- data
- user
- content
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06N3/0464—Convolutional networks [CNN, ConvNet]
Abstract
The disclosure provides a multi-scene-based content recommendation method and apparatus, an electronic device and a storage medium, relating to the field of artificial intelligence and in particular to the field of big data. The specific implementation scheme is as follows: acquiring portrait data and scene sequence data of a user, wherein the portrait data represents the historical behavior of the user, the scene sequence data represents a plurality of scenes experienced by the user within a preset time period, and the historical behavior represents the behavior of the user on recommended historical content in a scene within the preset time period; and determining recommended content from preset candidate content according to the portrait data and the scene sequence data, and recommending the recommended content to the user. By combining portrait data from different scenes to determine the content that should currently be recommended to the user, the accuracy of content recommendation is improved.
Description
Technical Field
The disclosure relates to the field of big data in the field of artificial intelligence, and in particular to a multi-scene-based content recommendation method and apparatus, an electronic device and a storage medium.
Background
With the rapid development of the mobile internet, artificial intelligence products need to meet users' refined requirements in different scenes and recommend different content to users. However, users and their needs differ greatly across scenes, and so does the content recommended to them.
If the content recommended to a user is not content the user is interested in, the user's experience with the artificial intelligence product is directly affected, so accurately recommending content to users is an important challenge currently faced.
Disclosure of Invention
The disclosure provides a content recommendation method and device based on multiple scenes, electronic equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided a content recommendation method based on multiple scenes, including:
acquiring portrait data and scene sequence data of a user; the portrait data represents the historical behavior of the user, and the scene sequence data represents a plurality of scenes experienced by the user within a preset time period; the historical behavior represents the behavior of the user on recommended historical content in a scene within the preset time period;
and determining recommended content from preset candidate content according to the portrait data and the scene sequence data, and recommending the recommended content to the user.
According to a second aspect of the present disclosure, there is provided a multi-scene based content recommendation apparatus, comprising:
the acquisition unit is used for acquiring portrait data and scene sequence data of a user; the portrait data represents the historical behavior of the user, and the scene sequence data represents a plurality of scenes experienced by the user within a preset time period; the historical behavior represents the behavior of the user on recommended historical content in a scene within the preset time period;
and the recommending unit is used for determining recommended content from preset candidate content according to the portrait data and the scene sequence data and recommending the recommended content to the user.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program which, when executed by a processor, implements the method of the first aspect.
According to the technology disclosed by the disclosure, the accuracy of content recommendation is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a content recommendation method based on multiple scenes according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram of a content recommendation method based on multiple scenarios provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow diagram of a content recommendation method based on multiple scenarios provided in accordance with an embodiment of the present disclosure;
fig. 4 is a schematic diagram of connection relationships of a network structure provided according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a content recommendation device based on multiple scenes provided according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of a multi-scenario based content recommendation device provided in accordance with an embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device for implementing a multi-scenario based content recommendation method of an embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device for implementing a multi-scenario based content recommendation method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the rapid development of the mobile internet, artificial intelligence products need to meet users' refined requirements in different scenes. For example, an artificial intelligence product may be an application such as map software, and the scenes may include a local scene, a remote scene, a traveling scene, a non-traveling scene and the like; when a user is in different scenes, different content needs to be recommended for each scene. However, content, users and requirements differ greatly across scenes, and making accurate recommendations for users is a core challenge for research and development personnel.
Currently, when content is recommended for users in different scenes, a dedicated model is usually trained separately for each scene. That is, data samples are collected independently for each scene and a separate recommendation model is trained, for example for different cities, different terminals and different crowds. However, this approach causes each recommendation model to ignore the general and hidden information shared across scenes, which affects the accuracy of content recommendation. In addition, each scene requires maintaining an independent model, which greatly increases system resource consumption and development cost.
The disclosure provides a content recommendation method, device, electronic equipment and storage medium based on multiple scenes, which are applied to the field of big data in the field of artificial intelligence so as to improve the accuracy of content recommendation.
Note that the model in this embodiment is not specific to a particular user and does not reflect the personal information of any particular user. It should also be noted that the data in this embodiment come from public data sets.
In the technical solution of the disclosure, the collection, storage, use, processing, transmission, provision and disclosure of the user's personal information comply with the relevant laws and regulations and do not violate public order and good customs.
In order for the reader to more fully understand the principles of the implementations of the present disclosure, the embodiments are further described below in conjunction with Figs. 1-8.
Fig. 1 is a flowchart of a multi-scenario-based content recommendation method according to an embodiment of the present disclosure, which may be performed by a multi-scenario-based content recommendation apparatus. As shown in fig. 1, the method comprises the steps of:
s101, obtaining portrait data and scene sequence data of a user; the portrait data represents the historical behavior of the user, and the scene sequence data represents a plurality of scenes experienced by the user in a preset time period; the historical behavior characterizes the behavior of the user on the recommended historical content in a scene within a preset time period.
For example, when the user uses an application such as map software, the product may recommend content to the user: for instance, content such as food, parking lots or gas stations near a destination the user frequently goes to, or a travel mode to that destination. The recommended content can be displayed on the interface as a preset content identifier; after receiving the recommended content, the user can choose to close the content identifier, i.e., not click on the recommended content, or click the content identifier to view details of the recommended content.
The user may be in different scenes at different times; for example, the scenes may include a local scene, a remote scene, a traveling scene, a non-traveling scene and so on. The local scene means that the user's current location is their usual place of residence; the remote scene means that it is not; the traveling scene means that the user is currently on the way; the non-traveling scene means that the user is not. The content recommended to the user may differ between scenes: for example, a local store may be recommended when the user is in the local scene, and a remote store when the user is in the remote scene. Recommended content in different scenes may also be consistent: for example, if the user loves good food, food-related stores can be recommended whether the user is in the local scene or a remote one. That is, there may be both consistency and variability between the content recommended for users in different scenes.
When content is recommended for a user, portrait data and scene sequence data of the user over a period of time can be acquired. The portrait data represents the historical behavior of the user, i.e., the behavior of the user on recommended historical content in a scene within a preset time period. For example, the user may act on recommended historical content by directly closing it or by clicking on it. The preset time period may be a period of preset length before the current time, for example one month, i.e., the user's historical behavior within the last month is obtained. The scene sequence data characterizes a plurality of scenes the user has experienced within the preset time period and may be represented by the set of scenes experienced, arranged in the order in which the user experienced them; the last scene in the scene sequence data may be the scene the user is currently in. For example, if within one month the user first stays in local city A, then goes to remote city B and then to remote city C, the scene sequence data may be determined to be [local, remote, remote].
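The ordering described above (scenes arranged by when the user experienced them, with the current scene last) can be sketched as follows. This is a minimal illustration, not part of the patent; the record layout of timestamped scene visits is an assumption.

```python
from datetime import datetime

def build_scene_sequence(visits):
    """Order timestamped (time, scene) records into scene sequence data.

    The scenes are arranged by the time the user experienced them, so the
    last element is the user's current scene.
    """
    return [scene for _, scene in sorted(visits, key=lambda v: v[0])]

# One month of travel: local city A, then remote cities B and C.
visits = [
    (datetime(2023, 11, 20), "remote"),   # city C
    (datetime(2023, 11, 1), "local"),     # city A
    (datetime(2023, 11, 10), "remote"),   # city B
]
print(build_scene_sequence(visits))       # ['local', 'remote', 'remote']
```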
Before the portrait data and scene sequence data of the user are acquired, prompt information can be sent to the user asking whether the user allows acquisition of the portrait data and the scene sequence data; for example, a prompt window may be displayed on the interface with text asking whether data acquisition is allowed. If the user issues an instruction allowing acquisition, the portrait data and scene sequence data can be acquired; if the user issues an instruction disallowing acquisition, they cannot.
The acquired portrait data may correspond to the scenes in the scene sequence data, and each piece of portrait data may be labeled with its corresponding scene; for example, portrait data generated in the local scene may be marked with "0", and portrait data generated in a remote scene may be marked with "1". In this embodiment, basic information of the user, such as sex and age, may also be acquired. The portrait data and the scene sequence data may be acquired once each time the user opens the application, or may be acquired in real time or periodically.
S102, determining recommended content from preset candidate content according to the portrait data and the scene sequence data, and recommending the recommended content to a user.
Illustratively, a plurality of candidate contents are preset, i.e., content that may be recommended to the user. For example, historical content that was pushed to the user may be used as candidate content, or content other than the historical content may be obtained from big data as candidate content; for instance, all shops and all parking lots in the area where the user is currently located may be determined as candidate content.
After the portrait data and the scene sequence data are obtained, they are processed, and one item is selected from the candidate content as the recommended content according to the processing result. For example, content that the user may require in different scenes may be determined from the portrait data; such content may be historical content that the user clicked to view. Determining the historical content the user clicked to view in different scenes reveals the differences in the user's needs across scenes. According to the content the user may require in different scenes, the content the user may require in the current scene is determined from the preset candidate content, determined as the recommended content, and pushed to the user.
For example, the historical content that the user has opened the most times across the different scenes may be determined as the recommended content. Alternatively, the attribute information of that most-opened historical content may be determined, and content with that attribute information may be selected from the preset candidate content as the recommended content. The attribute information may represent a content category, which may include, for example, food, parking lots and travel modes. If the content the user has opened the most times in all scenes relates to parking lots, parking-lot information can be recommended to the user regardless of the user's current scene. In this embodiment, the content the user may require is determined per scene, yielding the differences in the user's needs across scenes. Combining the content the user may require in different scenes effectively captures the commonality of the user's needs across scenes, effectively improves the accuracy of content recommendation, and improves the user experience.
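The category-based selection described above can be sketched as follows. This is a minimal sketch under assumed record layouts for the history and candidate lists (the patent does not fix a data format).

```python
from collections import Counter

def recommend_by_category(history, candidates):
    """Pick from `candidates` the first item whose category matches the
    category the user opened most often across all scenes.

    history: list of (category, clicked) pairs aggregated over all scenes.
    candidates: list of (content_id, category) pairs of preset candidates.
    """
    opened = Counter(cat for cat, clicked in history if clicked)
    if not opened:
        return None                      # no clicks: nothing to infer from
    top_category = opened.most_common(1)[0][0]
    for content_id, category in candidates:
        if category == top_category:
            return content_id
    return None

# The user opened parking-lot content twice and food content once, so a
# parking-lot candidate is recommended regardless of the current scene.
history = [("parking", True), ("food", False), ("parking", True), ("food", True)]
candidates = [("c1", "food"), ("c2", "parking")]
print(recommend_by_category(history, candidates))   # c2
```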
In the embodiment of the disclosure, when the user is in a certain scene, portrait data and scene sequence data of the user over a period of time can be obtained, and the portrait data from a plurality of scenes is analyzed and processed rather than only the portrait data of the current scene. The commonality and diversity of the user's needs across scenes are determined, accurately meeting the user's real needs in different scenes. This solves the content recommendation errors caused in the prior art by analyzing the user's needs through a single scene, improves the accuracy of content recommendation, and further improves the user experience.
Fig. 2 is a flow chart of a content recommendation method based on multiple scenes according to an embodiment of the present disclosure.
In this embodiment, determining the recommended content from the preset candidate content based on the portrait data and the scene sequence data may be refined as: determining feature vectors corresponding to the user in a plurality of scenes according to the portrait data and the scene sequence data, where each feature vector represents one of the plurality of scenes and the data in the portrait data corresponding to that scene; and determining the recommended content from the preset candidate content according to the user's feature vectors in all scenes.
As shown in fig. 2, the method comprises the steps of:
s201, obtaining portrait data and scene sequence data of a user; the portrait data represents the historical behavior of the user, and the scene sequence data represents a plurality of scenes experienced by the user in a preset time period; the historical behavior characterizes the behavior of the user on the recommended historical content in a scene within a preset time period.
S202, determining corresponding feature vectors of a user in a plurality of scenes according to the portrait data and the scene sequence data; the feature vector is used for representing one scene in the plurality of scenes and data corresponding to the scene in the portrait data.
All acquired portrait data may be classified by scene to obtain the data corresponding to each scene. For each scene, feature extraction is performed on the data corresponding to that scene together with the scene sequence data, vector data in matrix form is extracted, and the extracted data is determined as the user's feature vector in that scene. That is, a feature vector characterizes a scene and the data in the portrait data corresponding to that scene. As many distinct scenes as there are in the scene sequence data, that many feature vectors are obtained. For example, if the scene sequence data is [local, remote, remote], it contains two distinct scenes, local and remote, so two feature vectors are obtained. The feature vector for each scene may be determined separately; that is, scenes do not affect each other when feature vectors are determined, ensuring that the variability between scenes can be captured.
In this embodiment, the feature vector may be determined through a preset feature extraction network, which may be composed of network layers such as a full connection layer. The network structure of the feature extraction network is not particularly limited in this embodiment.
In this embodiment, determining feature vectors corresponding to a user in a plurality of scenes according to portrait data and scene sequence data includes: determining data in each scene from the portrait data as user scene data corresponding to each scene respectively; and determining the characteristic vector of the user in each scene according to the scene data of each user and the scene sequence data.
Specifically, the portrait data includes data corresponding to different scenes, and is divided to obtain the data corresponding to each scene, which is determined as the user scene data of that scene. For example, the periods during which the user was in each scene may be determined, and the data generated during those periods may be taken from the portrait data as the user scene data of that scene.
The user's feature vector in a given scene is determined from the user scene data of that scene and the shared scene sequence data. That is, when feature vectors are determined for different scenes, the user scene data differs but the scene sequence data is the same. The user scene data and the scene sequence data can be assembled into a vector in matrix form and input into a preset feature extraction network to obtain the feature vector. In this embodiment, a separate feature extraction network may be set up for each scene to enable parallel processing of different scenes. With the networks set up in advance, when feature extraction is required for a user, the scenes contained in the scene sequence data determine which networks are used. For example, the preset feature extraction networks may include one each for the local, remote, traveling and non-traveling scenes; if the scene sequence data only contains the local and remote scenes, the networks for the traveling and non-traveling scenes are not used.
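The per-scene routing described above might be sketched as follows. The callables standing in for the feature extraction networks are placeholders (the patent suggests networks built from layers such as fully connected layers); only networks for scenes present in the sequence are invoked, and every scene shares the same scene sequence data.

```python
def extract_features(user_scene_data, scene_sequence, networks):
    """Compute one feature vector per distinct scene in the sequence.

    user_scene_data: dict mapping scene name -> that scene's records.
    networks: dict mapping scene name -> feature extraction network,
    modeled here as a callable(records, scene_sequence) -> vector.
    Networks for scenes absent from the sequence are never used.
    """
    features = {}
    for scene in set(scene_sequence):
        net = networks[scene]
        features[scene] = net(user_scene_data.get(scene, []), scene_sequence)
    return features

# Stand-in "networks" (real ones would be e.g. fully connected layers):
networks = {
    "local":         lambda data, seq: [len(data), len(seq)],
    "remote":        lambda data, seq: [len(data), len(seq)],
    "traveling":     lambda data, seq: [len(data), len(seq)],
    "non-traveling": lambda data, seq: [len(data), len(seq)],
}
seq = ["local", "remote", "remote"]            # only two distinct scenes
feats = extract_features({"local": [1, 2, 3], "remote": [4]}, seq, networks)
# feats has exactly two entries, one vector for "local" and one for
# "remote"; the traveling/non-traveling networks were never invoked.
```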
The benefit of this design is that the data of each scene is determined from the full portrait data, so that a separate feature vector is obtained for each scene, independent processing of different scenes is realized, and the accuracy of content recommendation is improved.
In this embodiment, determining the data in each scene from the portrait data as the user scene data corresponding to each scene includes: acquiring the scene identifier corresponding to each scene from the portrait data, where a scene identifier is used to represent a scene; and determining the data corresponding to each scene in the portrait data according to the scene identifiers, as the user scene data corresponding to each scene.
Specifically, each piece of portrait data can be marked with a scene identifier representing a scene; the identifiers marked on different pieces of data may be the same or different. The scene identifiers of all pieces of data are obtained from the portrait data, data marked with the same scene identifier is grouped together, and each group is determined as the data corresponding to that scene in the portrait data. That is, the data corresponding to each scene is determined based on the scene identifier and taken as the user scene data of that scene. For example, if there are 1000 pieces of portrait data, 300 of them may be user scene data in the local scene and 700 user scene data in the remote scene.
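A minimal sketch of grouping portrait data by scene identifier, reusing the "0"/"1" markers and the 300/700 split from the example above; the record layout is an assumption for illustration.

```python
def split_by_scene(portrait_data):
    """Group portrait records into per-scene user scene data using the
    scene identifier marked on each record ("0" = local, "1" = remote)."""
    labels = {"0": "local", "1": "remote"}
    by_scene = {}
    for record in portrait_data:
        scene = labels[record["scene_id"]]
        by_scene.setdefault(scene, []).append(record)
    return by_scene

# 1000 records: 300 marked local ("0") and 700 marked remote ("1").
portrait_data = (
    [{"scene_id": "0", "item": i} for i in range(300)]
    + [{"scene_id": "1", "item": i} for i in range(700)]
)
groups = split_by_scene(portrait_data)
# groups["local"] holds 300 records, groups["remote"] holds 700.
```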
The benefit of this design is that, according to the scene identifiers, the portrait data can be accurately divided by scene, so that accurate feature extraction is performed for each scene, confusion between the portrait data of different scenes is avoided, feature extraction accuracy is improved, and content recommendation accuracy is further improved.
In this embodiment, the method further includes: determining the amount of user scene data corresponding to each scene in the scene sequence data; determining a scene to be adjusted according to the differences in data volume between scenes, where the scene to be adjusted is a scene whose user scene data volume needs to be adjusted; and adjusting the data volume of the scene to be adjusted to obtain the adjusted user scene data corresponding to it.
Specifically, the amount of user scene data may differ between scenes; for example, the user may have 10000 pieces of user scene data in the local scene and only 100 in the remote scene. Such large differences in data volume can affect the efficiency with which the user scene data of different scenes is processed, so the data volumes can be adjusted to balance the user scene data across scenes.
Before feature extraction processing is performed on the user scene data, the data amount of the user scene data corresponding to each scene is determined. A difference in data amount between different scenes is determined, and the scene to be adjusted is determined according to that difference; the scene to be adjusted is a scene whose user scene data volume needs to be adjusted. For example, if there is a large difference in data amount between two scenes, both scenes may be determined as scenes to be adjusted, or only one of the two may be.
The data amount of the scene to be adjusted is then adjusted: for example, the user scene data of the scene with the larger data amount can be reduced, or the user scene data of the scene with the smaller data amount can be increased, so as to obtain adjusted user scene data for the scene to be adjusted. After the adjustment, the difference in data amount between the scenes is reduced.
The method has the advantage that adjusting the data volume of the user scene data balances the difference in data amount between different scenes, prevents the feature vectors of different scenes from being generated out of step with one another, and improves feature extraction efficiency.
In this embodiment, determining a scene to be adjusted according to a difference in data amount between different scenes includes: determining a difference in the amount of data between the different scenes; the different scenes comprise a first scene and a second scene, and the data volume of the user scene data of the first scene is larger than that of the user scene data of the second scene; if the difference is greater than a preset difference threshold, determining the first scene as the scene to be adjusted.
Specifically, for a plurality of scenes in the scene sequence data, a difference in the data amount of the user scene data between every two scenes may be determined. If the difference is larger than a preset difference threshold, determining a scene to be adjusted from the two scenes, and adjusting the data volume of the user scene data of the scene to be adjusted until the difference of the data volume of the user scene data of any two scenes is not larger than the preset difference threshold.
When determining the difference in data amounts between different scenes, one of the scenes may be determined as a first scene and the other as a second scene, where the data amount of the user scene data of the first scene is larger than that of the second scene; that is, the scene with the larger data amount is the first scene. If the difference in data amounts is larger than the preset difference threshold, the first scene is determined as the scene to be adjusted; that is, the scene with the larger data amount is determined as the scene to be adjusted.
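The pairwise threshold check described above can be sketched as follows (a minimal illustration; the scene names, counts, and threshold value are assumptions, not values fixed by the embodiment):

```python
def scenes_to_adjust(counts, threshold):
    """Return the scenes whose user scene data volume should be reduced.

    counts maps scene name -> number of user scene data records.
    For each pair of scenes, the one with the larger volume (the
    'first scene') is flagged when the gap exceeds the threshold.
    """
    flagged = set()
    names = list(counts)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = names[i], names[j]
            # the first scene is the one with the larger data volume
            first, second = (a, b) if counts[a] >= counts[b] else (b, a)
            if counts[first] - counts[second] > threshold:
                flagged.add(first)
    return flagged

# 10000 vs 100 pieces: the gap (9900) exceeds the threshold,
# so the local scene is flagged for adjustment
flagged = scenes_to_adjust({"local": 10000, "remote": 100}, threshold=500)
```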
The method has the advantage that, when the difference in data amount is large, the scene with the larger data amount is determined as the scene to be adjusted, which makes it convenient to adjust that scene's data amount and improves the consistency of the data volumes of the user scene data between different scenes.
In this embodiment, data amount adjustment is performed on a scene to be adjusted to obtain adjusted user scene data corresponding to the scene to be adjusted, including: deleting the user scene data of the first scene according to the data volume of the user scene data of the second scene; and determining the user scene data reserved after the deletion processing as adjusted user scene data corresponding to the first scene.
Specifically, after the first scene is determined as the scene to be adjusted, the data amount of the user scene data of the first scene may be processed according to the data amount of the user scene data of the second scene. For example, the data amount of the user scene data of the first scene may be pruned to approximate the data amount of the user scene data of the first scene to the data amount of the user scene data of the second scene.
After deleting the user scene data of the first scene, determining the user scene data reserved after deleting the user scene data as adjusted user scene data corresponding to the first scene. The data amount of the user scene data of the second scene is unchanged. For example, the first scene may have 10000 pieces of user scene data, the second scene may have 100 pieces of user scene data, and after the pruning process is performed, the first scene may have 1000 pieces of user scene data, and the second scene may have 100 pieces of user scene data.
The method has the advantages that the existing user scene data of the first scene are deleted, the user scene data of the second scene is not required to be increased, the processing operation of the data volume is simplified, the adjustment efficiency of the data volume is improved, and the content recommendation efficiency is further improved.
In this embodiment, the deletion processing of the user scene data of the first scene according to the data amount of the user scene data of the second scene includes: determining the average value between the data amount of the user scene data of the first scene and the data amount of the user scene data of the second scene; and performing deletion processing on the user scene data of the first scene according to the average value, pruning the user scene data of the first scene down to the average number.
Specifically, when it is determined that the data amount of the user scene data of the first scene needs to be adjusted, the average value between the data amount of the user scene data of the first scene and that of the second scene is determined. Deletion processing is performed on the user scene data of the first scene according to this average value; for example, the user scene data of the first scene may be pruned down to the average number, thereby obtaining the pruned user scene data of the first scene.
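A minimal sketch of the prune-to-average step, assuming random down-sampling as the deletion strategy (the embodiment does not specify which records are dropped):

```python
import random

def prune_to_average(first_data, second_count, seed=0):
    """Down-sample the first scene's records to the average of the two
    scenes' data volumes. Random sampling is one plausible deletion
    strategy; a seeded RNG keeps the sketch reproducible."""
    target = (len(first_data) + second_count) // 2
    rng = random.Random(seed)
    return rng.sample(first_data, target)

first = list(range(10000))          # 10000 pieces in the first scene
pruned = prune_to_average(first, second_count=100)
# (10000 + 100) // 2 = 5050 pieces remain after pruning
```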
The method has the advantages that the data volume of the first scene can be quickly deleted by determining the average value, so that the data volume of the user scene data of the first scene is close to the data volume of the user scene data of the second scene, and the balance processing of the data volumes of the first scene and the second scene is realized.
In this embodiment, a plurality of attention networks are preset, each attention network corresponds to one scene, and each attention network is a neural network structure based on an attention mechanism. Determining the feature vector of the user in each scene according to each user scene data and the scene sequence data includes: determining the attention network corresponding to each user scene data according to the scene represented by that user scene data; and inputting all the user scene data and the scene sequence data, as N groups of input data, into the corresponding attention networks to obtain the feature vector of the user in each scene, wherein N is a positive integer and each group of input data includes the scene sequence data and one user scene data.
Specifically, in this embodiment, a preset feature extraction network model may be used for feature extraction. The network model may use a multi-head attention mechanism, meaning there are multiple attention networks; each attention network is a neural network structure based on the attention mechanism that can be used for feature extraction, and one attention network corresponds to one scene. The attention network of each scene is pre-constructed and may include network layers such as a fully connected layer.
The user scene data corresponding to each scene in the scene sequence data is determined, and the attention network corresponding to that user scene data is determined; that is, the attention network corresponding to each scene in the scene sequence data is determined. The scene sequence data and the user scene data corresponding to a scene are input into the attention network of that scene, and the network layers in the attention network perform feature extraction processing to obtain the feature vector of the user in that scene. Each attention network performs feature extraction on its own input data, avoiding mutual interference. The user scene data and the scene sequence data of one scene can serve as one group of input data; if there are N scenes, all the user scene data and the scene sequence data form N groups of input data, which are respectively input into the corresponding attention networks. Each attention network outputs the feature vector for the scene it corresponds to.
The multi-head attention mechanism has the advantage of adapting well to a variety of scenes, enabling feature extraction for different scenes within one model, intelligently and accurately meeting the diversified content requirements of users, and improving the generalization performance of the model. It also improves the ability to distinguish scenes, enables recommendation of high-quality content matching each scene, and improves content recommendation precision.
In this embodiment, all the user scene data and the scene sequence data are used as N groups of input data, and are respectively input into the corresponding attention network to obtain feature vectors of the user in each scene, including: performing feature extraction processing on user scene data corresponding to the attention network based on the attention network to obtain a matrix vector of each user scene data; determining the weight of each scene according to the number of scenes in the scene sequence data; wherein the weight of the scene characterizes the importance degree of the scene; and obtaining the feature vector of the user in each scene according to the matrix vector of the scene data of each user and the weight of each scene.
Specifically, for the attention network to which each scene corresponds, the input data is scene sequence data and user scene data. Through the network layer in the attention network, feature extraction can be performed on the user scene data, and the extracted result is a matrix vector corresponding to the user scene data.
Feature extraction can also be performed on scene sequence data, which is all scenes experienced by a user over a period of time. The scene sequence data obtained by each attention network is consistent. The hidden information in the scene sequence data can be deeply mined by adopting the attention network, in this embodiment, the hidden information may refer to weights of each scene in the scene sequence data, the weights may represent importance degrees of the scenes to users, and the greater the weights, the more important the scenes to users are, the more reference is provided to matrix vectors corresponding to the scenes.
Each attention network determines the weight of its corresponding scene from the scene sequence data. For example, if the scene sequence data is [local, remote, remote], it is determined that the attention network of the local scene and the attention network of the remote scene need to be used. The weight of the local scene is determined by the attention network of the local scene, and the weight of the remote scene by the attention network of the remote scene; since the local scene appears once in the sequence and the remote scene twice, the weight of the local scene is smaller and the weight of the remote scene is larger.
After the matrix vector and the weight of the scene are obtained through the attention network, the matrix vector and the weight of the scene can be combined to obtain the feature vector of the user in the scene. For example, the matrix vector and the weight of the scene may be spliced to obtain vector data in a matrix form as the feature vector.
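As a hedged illustration of the combination step above: a real attention network learns its weights, but a frequency-based stand-in makes the idea concrete. Everything here (frequency weighting, appending the weight to each matrix row as the "splicing") is an assumption for illustration, not the claimed implementation:

```python
def scene_weights(scene_sequence):
    """Derive a weight per scene from its frequency in the scene
    sequence, e.g. [local, remote, remote] gives the remote scene a
    larger weight. This frequency rule is a stand-in for the weights
    an attention network would learn from the sequence."""
    total = len(scene_sequence)
    weights = {}
    for s in scene_sequence:
        weights[s] = weights.get(s, 0) + 1 / total
    return weights

def scene_feature_vector(matrix_vector, weight):
    """Combine a scene's matrix vector with its weight; appending the
    weight to each row is one possible form of the 'splicing' the
    embodiment mentions."""
    return [row + [weight] for row in matrix_vector]

w = scene_weights(["local", "remote", "remote"])
fv = scene_feature_vector([[0.1, 0.2], [0.3, 0.4]], w["remote"])
# each row of fv now carries the remote scene's weight as its last entry
```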
The method has the advantages that matrix vectors of different scenes are obtained in a targeted mode through a multi-head attention mechanism, hidden information in scene sequence data is deeply mined, commonalities and differences of interests and demands of users in different scenes can be effectively captured, real demands of the users in the different scenes are efficiently understood and accurately met, and content recommendation accuracy is improved.
S203, determining recommended content from preset candidate content according to the feature vectors of the user in all scenes, and recommending the recommended content to the user.
Illustratively, from the portrait data and the scene sequence data, a feature vector in each scene in the scene sequence data may be determined. And determining the content most likely to be needed by the user from preset candidate contents according to the feature vectors in all scenes in the scene sequence data, and pushing the recommended content to the user as the recommended content.
The recommendation probability of each candidate content can be determined according to the feature vector, and the recommendation probability can represent the requirement degree of the user on the candidate content. The greater the recommendation probability of the candidate content, the more likely the user needs the candidate content, and the candidate content with the highest recommendation probability can be determined as the recommended content. For example, the candidate content is a recommended history content, the recommendation probability of each history content may be determined based on the history content represented in the feature vector, and the history content with the highest recommendation probability may be determined as the recommended content.
For each scene, a feature vector can be obtained, the targeted processing of different scenes is realized, and the difference between the different scenes is determined. And combining the feature vectors of all scenes to obtain final recommended content, determining the commonality among different scenes, realizing high-efficiency understanding and accurately meeting the real requirements of users, and improving the accuracy of content recommendation.
In the embodiment of the disclosure, when a user is in a certain scene, portrait data and scene sequence data of the user in a period of time can be obtained, and the portrait data in a plurality of scenes are analyzed and processed instead of being limited to the portrait data in the current scene. And determining the commonality and the diversity of the user demands under different scenes, and accurately meeting the real demands of the user under different scenes. The method and the device solve the problem of content recommendation errors caused by analysis of the user demands through a single scene in the prior art, improve the accuracy of content recommendation, and further improve the user experience.
Fig. 3 is a flowchart of a content recommendation method based on multiple scenes according to an embodiment of the present disclosure.
In this embodiment, according to the feature vectors of the user in all the scenes, the recommended content is determined from the preset candidate content, and is refined as follows: determining the recommendation probability of the historical content corresponding to the scene according to the feature vector of the user in each scene; and determining recommended content from the preset candidate content according to the recommendation probability of the historical content corresponding to all the scenes.
As shown in fig. 3, the method comprises the steps of:
S301, obtaining portrait data and scene sequence data of a user; the portrait data represents the historical behavior of the user, and the scene sequence data represents a plurality of scenes experienced by the user in a preset time period; the historical behavior characterizes the behavior of the user on the recommended historical content in a scene within a preset time period.
S302, determining a feature vector of a user in a scene according to the portrait data and the scene sequence data; the feature vector is used for representing data in a corresponding scene in the scene and the portrait data.
S303, determining the recommendation probability of the historical content corresponding to each scene according to the feature vector of the user in each scene.
For example, after the feature vector of the user in each scene is determined, the recommendation probability of each historical content in that scene may be determined. Each scene here refers to a scene contained in the scene sequence data, and the historical content under a scene refers to the content that was recommended to the user while the user was in that scene during the preset time period.
The feature vector may include the matrix vector corresponding to the user scene data and the weight of the scene; that is, the recommendation probability of the historical content may be determined from the matrix vector of the user scene data and the weight of the scene. For example, data located at preset rows and columns may be extracted from the matrix vector, the extracted data and the weight may be combined in a preset manner, and the result of the calculation is the recommendation probability. The greater the recommendation probability, the more interested the user is in the historical content. In this embodiment, the calculation method of the recommendation probability is not specifically limited.
In this embodiment, determining, according to a feature vector of a user in each scene, a recommendation probability of historical content corresponding to each scene includes: inputting the feature vector under each scene into a preset fully-connected neural network; and carrying out feature extraction processing on the feature vector under each scene through a preset fully-connected neural network to obtain the recommendation probability of the historical content corresponding to each scene recommended to the user.
Specifically, the network structure of the FCN (Fully Connected Network) is preset, and the feature vectors in each scene obtained in step S302 are input into the FCN; that is, the input of the FCN may be the output of the multi-head attention mechanism. As many feature vectors are input to the FCN as there are scenes in the scene sequence data. The network structure of the FCN may include network layers such as a fully connected layer for feature extraction; in this embodiment, the network structure of the FCN is not specifically limited.
After the plurality of feature vectors are input into the FCN, feature extraction can be performed on them by the FCN, and the FCN can output the recommendation probability of each historical content. For one scene, there may be a plurality of corresponding historical contents, and a recommendation probability is obtained for each of them.
The beneficial effect of this arrangement is that the data output by the attention networks of all scenes is input into the FCN, so that the feature vectors of different scenes are processed through a single shared FCN network, meeting the recommendation requirements under different scenes.
In this embodiment, performing feature extraction processing on the feature vector in each scene through the preset fully connected neural network, to obtain the recommendation probability of the historical content corresponding to each scene recommended to the user, includes: performing convolution processing on the matrix vector of the user in each scene through the preset fully connected neural network to obtain the initial probability of the historical content corresponding to each scene recommended to the user; and determining the recommendation probability of the historical content corresponding to each scene recommended to the user according to the weight of each scene and each initial probability.
Specifically, feature extraction processing is performed on the feature vector through the FCN, which may be convolution processing performed on a matrix vector of the user scene data in the feature vector. The matrix vector of the user scene data may include history contents recommended for the user, and a probability may be calculated for each history content by performing a convolution process, and the calculated probability may be determined as an initial probability. The initial probability may represent a likelihood of making a recommendation to the user without regard to scene weights.
After obtaining the initial probability of the historical content under each scene, the recommendation probability of the historical content corresponding to one scene can be calculated according to the initial probability of the scene and the weight of the scene. For example, the initial probability and the weight of the scene may be multiplied to obtain the recommended probability.
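The multiplication described above is a one-liner; in this sketch the content names, initial probabilities, and scene weight are illustrative assumptions, and the initial probabilities stand in for the FCN's convolution output:

```python
def recommendation_probabilities(initial_probs, scene_weight):
    """Scale each history item's initial probability (the FCN output,
    computed without regard to scene weights) by the scene weight to
    obtain the recommendation probability."""
    return {item: p * scene_weight for item, p in initial_probs.items()}

probs = recommendation_probabilities(
    {"video_a": 0.8, "video_b": 0.5},  # initial probabilities
    scene_weight=0.6,
)
# video_a ≈ 0.48, video_b ≈ 0.30
```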
The method has the beneficial effects that the recommendation probability of each historical content is determined by combining the hidden information in the scene sequence data, namely the weight of the scene, so that the accurate determination of the recommendation probability under a plurality of scenes is realized, and the content recommendation precision is further improved.
S304, determining recommended content from preset candidate content according to the recommendation probability of the historical content corresponding to all scenes, and recommending the recommended content to the user.
For example, a plurality of candidate contents are preset, and one recommended content is determined from the candidate contents according to recommendation probabilities of the historical contents corresponding to all scenes. For example, if the candidate content includes a history content, the history content with the highest recommendation probability may be selected as the recommendation content; if the candidate content does not include the history content, the history content with the highest recommendation probability may be determined first, then the content with the highest correlation with the history content may be determined from the candidate content, and the content may be determined as the content with the highest recommendation probability in the candidate content, that is, the content may be determined as the recommended content. After the recommended content is obtained, the recommended content may be recommended to the user, e.g., the recommended content may be displayed on an interface of the application product.
By combining the recommendation probabilities of the historical content in different scenes to determine the content finally recommended to the user, the barrier between different scenes is broken: content can be recommended to the user no matter which scene the user is in, overly homogeneous recommendations are avoided, the recommendation requirements of different scenes are met, content recommendation precision is improved, and user experience is improved.
In this embodiment, the preset candidate content is the recommended historical content; determining recommended content from the preset candidate content according to the recommendation probabilities of the historical content corresponding to all scenes includes: sorting the recommendation probabilities of the historical content corresponding to all scenes; and determining the historical content whose recommendation probability lies at a preset ranking position as the recommended content.
Specifically, the preset candidate content may be the recommended historical content. After the recommendation probability of each historical content is obtained, the recommendation probabilities may be sorted, for example in descending order. According to the sorting result, the historical content whose recommendation probability lies at the preset ranking position is determined as the recommended content. For example, the historical content ranked first, that is, the historical content with the highest recommendation probability, may be determined as the recommended content.
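The sort-and-select step above can be sketched as follows (the `top_k` parameter generalizes the "preset ranking position" and is an assumption for illustration):

```python
def pick_recommended(history_probs, top_k=1):
    """Sort historical content by recommendation probability in
    descending order and return the content at the preset ranking
    position(s), here the top-k entries."""
    ranked = sorted(history_probs.items(), key=lambda kv: kv[1], reverse=True)
    return [item for item, _ in ranked[:top_k]]

recommended = pick_recommended({"a": 0.2, "b": 0.9, "c": 0.5})
# the content with the highest probability, 'b', is recommended
```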
The method has the advantage that the recommended content can be quickly determined from the historical content through the recommendation probability, so the recommended content is something the user has already clicked on and watched; that is, the user's interest in the recommended content is ensured, effectively improving the user experience.
In this embodiment, the preset candidate content is all the content in a preset database; determining recommended content from the preset candidate content according to the recommendation probabilities of the historical content corresponding to all scenes includes: sorting the recommendation probabilities of the historical content corresponding to all scenes; determining the historical content whose recommendation probability lies at a preset ranking position as content of interest; and determining the content category of the content of interest, searching the preset candidate content for content of that category, and determining that content as the recommended content.
Specifically, the candidate content may be content other than the historical content. After the recommendation probability of each historical content is obtained, the recommendation probabilities of all the historical contents are sorted, for example in descending order. The historical content whose recommendation probability lies at the preset ranking position is determined, for example the historical content with the maximum recommendation probability, and this historical content is determined as the content of interest. The content of interest is the content the user is most interested in among the historical contents; it is not itself the recommended content. The recommended content is then determined from the candidate content based on the content of interest.
The content category of the content of interest is determined, the preset candidate content is searched for content of that category, and the content of that category is determined as the recommended content. If the candidate content includes multiple pieces of content of that category, one piece may be selected randomly as the recommended content, or one piece may be determined based on the current location of the user. For example, if the content category is parking-lot information, the current location of the user may be acquired and parking-lot information near that location recommended to the user.
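A minimal sketch of the category-matching lookup, assuming a simple id-to-category mapping (the data layout and tie-breaking by first match are illustrative assumptions; the embodiment also allows random or location-based choice among matches):

```python
def recommend_by_category(interest_content, candidates, categories):
    """Find candidate content sharing the category of the content of
    interest. categories maps content id -> category label."""
    target = categories[interest_content]
    matches = [c for c in candidates if categories.get(c) == target]
    # return the first match; a random or location-aware pick
    # would also be consistent with the embodiment
    return matches[0] if matches else None

cats = {"hist_parking": "parking", "cand_1": "parking", "cand_2": "music"}
chosen = recommend_by_category("hist_parking", ["cand_1", "cand_2"], cats)
# cand_1 shares the 'parking' category, so it is recommended
```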
The beneficial effects of the arrangement are that the content which best meets the user requirement can be selected from a large number of contents, the range of the alternative content is enlarged, the universality of content recommendation is improved, the limitation of recommended content is avoided, the user requirement is favorably met, and the user experience is improved.
Fig. 4 is a schematic diagram of the connection relationships of a network structure according to an embodiment of the disclosure. The network structure of the model used in determining the recommended content may include a multi-head attention mechanism and an FCN, and the multi-head attention mechanism may include a plurality of attention networks. In fig. 4, the input data of scene 1 is the scene sequence data and the user scene data of scene 1, the input data of scene 2 is the scene sequence data and the user scene data of scene 2, and the input data of scene 3 is the scene sequence data and the user scene data of scene 3. The multi-head attention mechanism provides n attention networks, but since the scene sequence data only includes scene 1, scene 2 and scene 3, only the attention networks of scene 1, scene 2 and scene 3 are used. Inputting the input data of scene 1 into the attention network of scene 1 yields a first feature vector; inputting the input data of scene 2 into the attention network of scene 2 yields a second feature vector; inputting the input data of scene 3 into the attention network of scene 3 yields a third feature vector. All attention networks share one FCN; that is, the first, second and third feature vectors are input into the FCN together, and the recommended content is finally obtained. In this embodiment, by using a multi-head attention mechanism and an FCN, content recommendation for different scenes is performed through one model, effectively saving system resources.
In the embodiment of the disclosure, when a user is in a certain scene, portrait data and scene sequence data of the user in a period of time can be obtained, and the portrait data in a plurality of scenes are analyzed and processed instead of being limited to the portrait data in the current scene. And determining the commonality and the diversity of the user demands under different scenes, and accurately meeting the real demands of the user under different scenes. The method and the device solve the problem of content recommendation errors caused by analysis of the user demands through a single scene in the prior art, improve the accuracy of content recommendation, and further improve the user experience.
Fig. 5 is a block diagram of a content recommendation device based on multiple scenes according to an embodiment of the present disclosure. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 5, the multi-scene based content recommendation apparatus 500 includes: an acquisition unit 501 and a recommendation unit 502.
An acquisition unit 501 for acquiring portrait data and scene sequence data of a user; the portrait data represents the historical behavior of a user, and the scene sequence data represents a plurality of scenes experienced by the user in a preset time period; the historical behavior characterizes the behavior of the user on the recommended historical content in a scene within a preset time period;
A recommending unit 502, configured to determine recommended content from preset candidate content according to the portrait data and the scene sequence data, and recommend the recommended content to a user.
Fig. 6 is a block diagram of a multi-scenario-based content recommendation device according to an embodiment of the present disclosure, and as shown in fig. 6, a multi-scenario-based content recommendation device 600 includes an acquisition unit 601 and a recommendation unit 602, where the recommendation unit 602 includes a first determination module 6021 and a second determination module 6022.
A first determining module 6021, configured to determine feature vectors corresponding to the user in a plurality of scenes according to the portrait data and the scene sequence data; the feature vector is used for representing one scene in a plurality of scenes and the data corresponding to the scene in the portrait data;
and a second determining module 6022, configured to determine recommended content from the preset candidate content according to the feature vectors of the user in all scenes.
In one example, the first determination module 6021 includes:
the data determining submodule is used for determining data in each scene from the portrait data and taking the data as user scene data corresponding to each scene respectively;
And the characteristic determining submodule is used for determining the characteristic vector of the user in each scene according to the scene data and the scene sequence data of each user.
In one example, the data determination submodule is specifically configured to:
acquiring a scene identifier corresponding to each scene from the portrait data; wherein the scene identifier is used for representing a scene;
and determining the data corresponding to each scene in the portrait data according to the scene identifier corresponding to each scene, and taking the data corresponding to each scene as the user scene data corresponding to each scene.
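As an illustrative sketch only (not the claimed implementation), the grouping of portrait data into per-scene user scene data by scene identifier described above could look as follows in Python; the record structure and the field name `scene_id` are hypothetical:

```python
from collections import defaultdict

def split_by_scene(portrait_data):
    """Group portrait records into per-scene user scene data,
    keyed by the scene identifier carried in each record."""
    scene_data = defaultdict(list)
    for record in portrait_data:
        scene_data[record["scene_id"]].append(record)
    return dict(scene_data)

# Hypothetical portrait data: each record notes the scene it occurred in.
portrait = [
    {"scene_id": "feed", "item": "a"},
    {"scene_id": "search", "item": "b"},
    {"scene_id": "feed", "item": "c"},
]
grouped = split_by_scene(portrait)
# grouped["feed"] holds two records; grouped["search"] holds one
```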
In one example, further comprising:
a data amount determining unit, configured to determine a data amount of user scene data corresponding to each scene in the scene sequence data;
the scene determining unit is used for determining a scene to be adjusted according to the difference value of the data quantity among different scenes; wherein the scene to be adjusted is a scene whose data quantity of user scene data needs to be adjusted;
and the data quantity adjusting unit is used for adjusting the data quantity of the scene to be adjusted to obtain adjusted user scene data corresponding to the scene to be adjusted.
In one example, a scene determination unit includes:
the difference value determining module is used for determining the difference value of the data quantity between different scenes; the different scenes comprise a first scene and a second scene, and the data volume of the user scene data of the first scene is larger than that of the user scene data of the second scene;
and the difference comparison module is used for determining the first scene as the scene to be adjusted if the difference is larger than a preset difference threshold value.
In one example, a data amount adjustment unit includes:
the deleting sub-module is used for deleting the user scene data of the first scene according to the data volume of the user scene data of the second scene;
and the adjustment completion sub-module is used for determining the user scene data which is reserved after the deletion processing as the adjusted user scene data corresponding to the first scene.
In one example, the deleting sub-module is specifically configured to:
determining an average value between the data amount of the user scene data of the first scene and the data amount of the user scene data of the second scene;
and performing deletion processing on the user scene data of the first scene according to the average value, so that the amount of user scene data retained for the first scene equals the average value.
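A minimal Python sketch of this adjustment step, assuming user scene data are simple record lists: when the first scene's data amount exceeds the second's by more than a threshold, the first scene's data is trimmed down to the average of the two amounts. All names and the truncation policy are hypothetical illustrations:

```python
def balance_scene_data(first, second, diff_threshold=0):
    """Trim the larger scene's user scene data down to the average of
    the two data amounts when the difference exceeds the threshold."""
    if len(first) - len(second) <= diff_threshold:
        return first  # difference within threshold: no adjustment needed
    target = (len(first) + len(second)) // 2  # average of the two amounts
    return first[:target]  # retain only `target` records

first = list(range(10))   # 10 records in the first scene
second = list(range(4))   # 4 records in the second scene
adjusted = balance_scene_data(first, second, diff_threshold=2)
# the average of 10 and 4 is 7, so 7 records are retained
```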
In one example, a plurality of attention networks are preset, one attention network corresponds to one scene, and the attention network is a neural network structure based on an attention mechanism; the feature determination sub-module includes:
a network determining sub-module, configured to determine, according to a scene represented by each piece of user scene data, an attention network corresponding to each piece of user scene data;
and the data input sub-module is used for respectively inputting all the user scene data and the scene sequence data into the corresponding attention network as N groups of input data to obtain the feature vector of the user in each scene, wherein N is a positive integer, and each group of input data comprises the scene sequence data and one user scene data.
In one example, the data input sub-module is specifically configured to:
performing feature extraction processing on user scene data corresponding to the attention network based on the attention network to obtain a matrix vector of each user scene data;
determining the weight of each scene according to the number of the scenes in the scene sequence data; wherein the weight of the scene characterizes the importance degree of the scene;
And obtaining the characteristic vector of the user under each scene according to the matrix vector of each user scene data and the weight of each scene.
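The weighting step above can be sketched as follows, under the assumption (for illustration only, not the patented attention mechanism itself) that a scene's weight is its share of appearances in the scene sequence, and that the attention network's matrix vector for a scene is already given:

```python
import numpy as np
from collections import Counter

def scene_weights(scene_sequence):
    """Weight each scene by its share of appearances in the
    scene sequence data (hypothetical weighting rule)."""
    counts = Counter(scene_sequence)
    total = len(scene_sequence)
    return {scene: c / total for scene, c in counts.items()}

def scene_feature(matrix_vector, weight):
    """Scale a scene's attention output (matrix vector) by that
    scene's weight to obtain the per-scene feature vector."""
    return weight * np.asarray(matrix_vector, dtype=float)

seq = ["feed", "feed", "search", "feed"]  # scenes experienced in order
w = scene_weights(seq)
feat = scene_feature([1.0, 2.0], w["feed"])
# "feed" appears 3 of 4 times, so its weight is 0.75 and feat is [0.75, 1.5]
```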
In one example, the second determination module 6022 includes:
the probability determination submodule is used for determining the recommendation probability of the historical content corresponding to each scene according to the feature vector of the user in each scene;
and the content determining sub-module is used for determining the recommended content from the preset candidate content according to the recommendation probability of the historical content corresponding to all the scenes.
In one example, the probability determination submodule includes:
the vector input sub-module is used for inputting the characteristic vector under each scene into a preset fully-connected neural network;
and the probability output sub-module is used for carrying out feature extraction processing on the feature vector under each scene through the preset fully-connected neural network to obtain the recommendation probability of the historical content corresponding to each scene recommended to the user.
In one example, the probability output submodule is specifically configured to:
performing convolution processing on the matrix vector of the user in each scene through the preset fully-connected neural network to obtain initial probability of historical content corresponding to each scene recommended to the user;
And determining the recommendation probability of the historical content corresponding to each scene recommended to the user according to the weight of each scene and each initial probability.
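A small sketch of combining scene weights with the initial probabilities output by the fully-connected network; the multiplicative combination shown here is an assumption for illustration, not necessarily the claimed computation:

```python
def recommendation_probability(initial_probs, weights):
    """Combine each scene's initial probability with that scene's
    weight to obtain the final per-scene recommendation probability."""
    return {scene: weights[scene] * p for scene, p in initial_probs.items()}

probs = recommendation_probability(
    {"feed": 0.8, "search": 0.5},    # initial probabilities (hypothetical)
    {"feed": 0.75, "search": 0.25},  # scene weights (hypothetical)
)
# feed: 0.75 * 0.8 = 0.6; search: 0.25 * 0.5 = 0.125
```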
In one example, the preset candidate content is the recommended historical content; the content determination submodule is specifically configured to:
sorting the recommendation probabilities of the historical contents corresponding to all the scenes;
and determining the historical content corresponding to the recommendation probability at a preset ranking position as the recommended content.
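The sorting-and-selection step might be sketched as follows, taking the top-k ranking positions as the preset positions; the names and the value of k are hypothetical:

```python
def top_k_contents(prob_by_content, k=3):
    """Sort candidate contents by recommendation probability in
    descending order and keep those at the top k positions."""
    ranked = sorted(prob_by_content.items(), key=lambda kv: kv[1], reverse=True)
    return [content for content, _ in ranked[:k]]

picked = top_k_contents({"a": 0.2, "b": 0.9, "c": 0.5}, k=2)
# → ["b", "c"]
```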
In one example, the preset candidate content is all content in a preset database; the content determination submodule is specifically configured to:
sorting the recommendation probabilities of the historical contents corresponding to all the scenes;
determining the historical contents corresponding to the recommendation probability at a preset ranking position as interesting contents;
determining the content category of the interesting content, searching the content of the content category from the preset candidate content, and determining the content of the content category as the recommended content.
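A sketch of this category-based lookup, assuming a simple mapping from content to category; all names are hypothetical and this is only an illustration of the described step:

```python
def recommend_by_category(interesting, candidates, category_of):
    """Determine the categories of the interesting contents, then pick
    candidate contents belonging to any of those categories."""
    wanted = {category_of[c] for c in interesting}
    return [c for c in candidates if category_of[c] in wanted]

categories = {"x": "sports", "y": "news", "z": "sports"}
recs = recommend_by_category(["x"], ["y", "z"], categories)
# "z" shares the "sports" category with the interesting content "x"
```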
According to an embodiment of the disclosure, the disclosure further provides an electronic device.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the disclosure, and as shown in fig. 7, an electronic device 700 includes: at least one processor 702; and a memory 701 communicatively coupled to the at least one processor 702; wherein the memory stores instructions executable by the at least one processor 702 to enable the at least one processor 702 to perform the multi-scenario based content recommendation method of the present disclosure.
The electronic device 700 further comprises a receiver 703 and a transmitter 704. The receiver 703 is configured to receive instructions and data transmitted from other devices, and the transmitter 704 is configured to transmit instructions and data to external devices.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, such as a multi-scene-based content recommendation method. For example, in some embodiments, the multi-scenario based content recommendation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of the multi-scenario based content recommendation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the multi-scenario based content recommendation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility of traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (33)
1. A multi-scene based content recommendation method, comprising:
acquiring portrait data and scene sequence data of a user; the portrait data represents the historical behavior of a user, and the scene sequence data represents a plurality of scenes experienced by the user in a preset time period; the historical behavior characterizes the behavior of the user on the recommended historical content in the scene of the preset time period;
And determining recommended content from preset candidate content according to the portrait data and the scene sequence data, and recommending the recommended content to a user.
2. The method of claim 1, wherein the determining recommended content from preset candidate content based on the portrait data and the scene sequence data comprises:
determining corresponding feature vectors of the user under a plurality of scenes according to the portrait data and the scene sequence data; the feature vector is used for representing one scene in a plurality of scenes and data corresponding to the scene in the portrait data;
and determining recommended content from the preset candidate content according to the feature vectors of the user in all scenes.
3. The method of claim 2, wherein the determining, from the representation data and the scene sequence data, the feature vectors corresponding to the user in the plurality of scenes comprises:
determining data under each scene from the portrait data as user scene data corresponding to each scene respectively;
and determining the characteristic vector of the user in each scene according to each user scene data and the scene sequence data.
4. A method according to claim 3, wherein said determining data in each of said scenes from said representation data as user scene data corresponding to each of said scenes respectively, comprises:
acquiring a scene identifier corresponding to each scene from the portrait data; wherein the scene identifier is used for representing a scene;
and determining the data corresponding to each scene in the portrait data according to the scene identifier corresponding to each scene, and taking the data corresponding to each scene as the user scene data corresponding to each scene.
5. The method of claim 3 or 4, further comprising:
determining the data quantity of user scene data corresponding to each scene in the scene sequence data;
determining a scene to be adjusted according to the difference value of the data quantity between different scenes; wherein the scene to be adjusted is a scene whose data quantity of user scene data needs to be adjusted;
and adjusting the data quantity of the scene to be adjusted to obtain adjusted user scene data corresponding to the scene to be adjusted.
6. The method of claim 5, wherein the determining the scene to be adjusted based on the difference in data amounts between different scenes comprises:
Determining a difference in the amount of data between the different scenes; the different scenes comprise a first scene and a second scene, and the data volume of the user scene data of the first scene is larger than that of the user scene data of the second scene;
and if the difference value is larger than a preset difference value threshold value, determining the first scene as the scene to be adjusted.
7. The method of claim 6, wherein the adjusting the data amount of the to-be-adjusted scene to obtain adjusted user scene data corresponding to the to-be-adjusted scene includes:
performing deletion processing on the user scene data of the first scene according to the data volume of the user scene data of the second scene;
and determining the user scene data which is reserved after the deletion processing as the adjusted user scene data corresponding to the first scene.
8. The method of claim 7, wherein the performing deletion processing on the user scene data of the first scene according to the data amount of the user scene data of the second scene comprises:
determining an average value between the data amount of the user scene data of the first scene and the data amount of the user scene data of the second scene;
And performing deletion processing on the user scene data of the first scene according to the average value, so that the amount of user scene data retained for the first scene equals the average value.
9. The method according to any one of claims 3-8, wherein a plurality of attention networks are preset, one attention network corresponds to one scene, and the attention network is a neural network structure based on an attention mechanism; the determining the feature vector of the user in each scene according to each user scene data and the scene sequence data comprises the following steps:
determining attention networks respectively corresponding to the user scene data according to the scenes represented by the user scene data;
and respectively inputting all the user scene data and the scene sequence data into corresponding attention networks as N groups of input data to obtain feature vectors of the user in each scene, wherein N is a positive integer, and each group of input data comprises the scene sequence data and one user scene data.
10. The method according to claim 9, wherein said inputting all of the user scene data and the scene sequence data as N sets of input data into corresponding attention networks, respectively, to obtain feature vectors of the user in each of the scenes, comprises:
Performing feature extraction processing on user scene data corresponding to the attention network based on the attention network to obtain a matrix vector of each user scene data;
determining the weight of each scene according to the number of the scenes in the scene sequence data; wherein the weight of the scene characterizes the importance degree of the scene;
and obtaining the characteristic vector of the user under each scene according to the matrix vector of each user scene data and the weight of each scene.
11. The method of claim 10, wherein the determining recommended content from the preset candidate content according to the feature vectors of the user in all scenes comprises:
determining the recommendation probability of the historical content corresponding to each scene according to the feature vector of the user in each scene;
and determining the recommended content from the preset candidate content according to the recommended probability of the historical content corresponding to all the scenes.
12. The method of claim 11, wherein the determining the recommendation probability of the historical content corresponding to each scene according to the feature vector of the user in each scene comprises:
Inputting the feature vector under each scene into a preset fully-connected neural network;
and carrying out feature extraction processing on the feature vector under each scene through the preset fully-connected neural network to obtain the recommendation probability of the historical content corresponding to each scene recommended to the user.
13. The method of claim 12, wherein the performing feature extraction processing on the feature vector in each scene through the preset fully connected neural network to obtain a recommendation probability of the historical content corresponding to each scene recommended to the user, includes:
performing convolution processing on the matrix vector of the user in each scene through the preset fully-connected neural network to obtain initial probability of historical content corresponding to each scene recommended to the user;
and determining the recommendation probability of the historical content corresponding to each scene recommended to the user according to the weight of each scene and each initial probability.
14. The method of any of claims 11-13, wherein the preset candidate content is the recommended historical content; and the determining recommended content from the preset candidate content according to the recommendation probability of the historical content corresponding to all the scenes comprises:
Sorting the recommendation probabilities of the historical contents corresponding to all the scenes;
and determining the historical content corresponding to the recommendation probability at a preset ranking position as the recommended content.
15. The method of any of claims 11-13, wherein the preset candidate content is all content in a preset database; and the determining recommended content from the preset candidate content according to the recommendation probability of the historical content corresponding to all the scenes comprises:
sorting the recommendation probabilities of the historical contents corresponding to all the scenes;
determining the historical contents corresponding to the recommendation probability at a preset ranking position as interesting contents;
determining the content category of the interesting content, searching the content of the content category from the preset candidate content, and determining the content of the content category as the recommended content.
16. A multi-scene based content recommendation apparatus comprising:
the acquisition unit is used for acquiring portrait data and scene sequence data of a user; the portrait data represents the historical behavior of a user, and the scene sequence data represents a plurality of scenes experienced by the user in a preset time period; the historical behavior characterizes the behavior of the user on the recommended historical content in the scene of the preset time period;
And the recommending unit is used for determining recommended content from preset candidate content according to the portrait data and the scene sequence data and recommending the recommended content to a user.
17. The apparatus of claim 16, wherein the recommendation unit comprises:
the first determining module is used for determining corresponding feature vectors of the user under a plurality of scenes according to the portrait data and the scene sequence data; the feature vector is used for representing one scene in a plurality of scenes and data corresponding to the scene in the portrait data;
and the second determining module is used for determining recommended content from the preset candidate content according to the feature vectors of the user in all scenes.
18. The apparatus of claim 17, wherein the first determination module comprises:
the data determining submodule is used for determining data in each scene from the portrait data and taking the data as user scene data corresponding to each scene respectively;
and the feature determination submodule is used for determining feature vectors of the user in each scene according to the scene data of each user and the scene sequence data.
19. The apparatus of claim 18, wherein the data determination submodule is configured to:
acquiring a scene identifier corresponding to each scene from the portrait data; wherein the scene identifier is used for representing a scene;
and determining the data corresponding to each scene in the portrait data according to the scene identifier corresponding to each scene, and taking the data corresponding to each scene as the user scene data corresponding to each scene.
20. The apparatus of claim 18 or 19, further comprising:
a data amount determining unit, configured to determine a data amount of user scene data corresponding to each scene in the scene sequence data;
the scene determining unit is used for determining a scene to be adjusted according to the difference value of the data quantity among different scenes; wherein the scene to be adjusted is a scene whose data quantity of user scene data needs to be adjusted;
and the data quantity adjusting unit is used for adjusting the data quantity of the scene to be adjusted to obtain adjusted user scene data corresponding to the scene to be adjusted.
21. The apparatus of claim 20, wherein the scene determination unit comprises:
The difference value determining module is used for determining the difference value of the data quantity between different scenes; the different scenes comprise a first scene and a second scene, and the data volume of the user scene data of the first scene is larger than that of the user scene data of the second scene;
and the difference comparison module is used for determining the first scene as the scene to be adjusted if the difference is larger than a preset difference threshold value.
22. The apparatus of claim 21, wherein the data amount adjustment unit comprises:
the deleting sub-module is used for deleting the user scene data of the first scene according to the data volume of the user scene data of the second scene;
and the adjustment completion sub-module is used for determining the user scene data which is reserved after the deletion processing as the adjusted user scene data corresponding to the first scene.
23. The apparatus of claim 22, wherein the deleting sub-module is specifically configured to:
determining an average value between the data amount of the user scene data of the first scene and the data amount of the user scene data of the second scene;
and performing deletion processing on the user scene data of the first scene according to the average value, so that the amount of user scene data retained for the first scene equals the average value.
24. The apparatus according to any one of claims 18-23, wherein a plurality of attention networks are preset, one attention network corresponding to one scene, the attention network being a neural network structure based on an attention mechanism; the feature determination sub-module includes:
a network determining sub-module, configured to determine, according to a scene represented by each piece of user scene data, an attention network corresponding to each piece of user scene data;
and the data input sub-module is used for respectively inputting all the user scene data and the scene sequence data into the corresponding attention network as N groups of input data to obtain the feature vector of the user in each scene, wherein N is a positive integer, and each group of input data comprises the scene sequence data and one user scene data.
25. The apparatus of claim 24, wherein the data input sub-module is specifically configured to:
performing feature extraction processing on user scene data corresponding to the attention network based on the attention network to obtain a matrix vector of each user scene data;
determining the weight of each scene according to the number of the scenes in the scene sequence data; wherein the weight of the scene characterizes the importance degree of the scene;
and obtaining the feature vector of the user under each scene according to the matrix vector of each user scene data and the weight of each scene.
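The frequency-based scene weights and weighted per-scene feature vectors of claim 25 might be sketched, very loosely, as follows (the toy `feature_vector` scaling is a stand-in for real attention-based feature extraction; all names are assumptions):

```python
# Illustrative sketch of claim 25: scene weights derived from how often
# each scene appears in the scene sequence, then per-scene feature
# vectors scaled by those weights.
from collections import Counter

def scene_weights(scene_sequence):
    """Weight of a scene = its frequency in the sequence (normalized)."""
    counts = Counter(scene_sequence)
    total = len(scene_sequence)
    return {s: c / total for s, c in counts.items()}

def feature_vector(user_scene_data, weight):
    """Toy stand-in for attention-based feature extraction:
    scale each raw feature by the scene's weight."""
    return [x * weight for x in user_scene_data]

sequence = ["feed", "search", "feed", "feed"]
weights = scene_weights(sequence)  # "feed" appears 3 of 4 times
data = {"feed": [1.0, 2.0], "search": [4.0, 8.0]}
vectors = {s: feature_vector(v, weights[s]) for s, v in data.items()}
```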
26. The apparatus of claim 25, wherein the second determination module comprises:
the probability determination submodule is used for determining recommendation probability of the historical content corresponding to each scene according to the feature vector of the user in each scene;
and the content determining sub-module is used for determining the recommended content from the preset candidate content according to the recommendation probability of the historical content corresponding to all the scenes.
27. The apparatus of claim 26, wherein the probability determination submodule comprises:
the vector input sub-module is used for inputting the feature vector under each scene into a preset fully-connected neural network;
and the probability output sub-module is used for carrying out feature extraction processing on the feature vector under each scene through the preset fully-connected neural network to obtain the recommendation probability of the historical content corresponding to each scene recommended to the user.
28. The apparatus of claim 27, wherein the probability output submodule is configured to:
performing convolution processing on the matrix vector of the user in each scene through the preset fully-connected neural network to obtain an initial probability of the historical content corresponding to each scene being recommended to the user;
and determining the recommendation probability of the historical content corresponding to each scene recommended to the user according to the weight of each scene and each initial probability.
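A hedged sketch of claims 27–28, using a single dense unit with a sigmoid as a stand-in for the preset fully-connected neural network; the layer weights, shapes, and function names are invented for illustration, not taken from the patent:

```python
# Toy version of claims 27-28: one fully-connected unit produces an
# initial probability, which is then scaled by the scene's weight to
# give the final recommendation probability.
import math

def dense_sigmoid(features, layer_weights, bias):
    """One fully-connected unit: dot product + bias, squashed to (0, 1)."""
    z = sum(f * w for f, w in zip(features, layer_weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def recommendation_probability(features, layer_weights, bias, scene_weight):
    """Initial probability from the dense unit, scaled by the scene weight."""
    initial = dense_sigmoid(features, layer_weights, bias)
    return initial * scene_weight

p = recommendation_probability([0.5, 1.0], [0.2, 0.4], 0.0, 0.8)
```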
29. The apparatus of any of claims 26-28, wherein the preset candidate content is recommended historical content; the content determination submodule is specifically configured to:
sequencing the recommendation probabilities of the historical contents corresponding to all the scenes;
and determining the historical content corresponding to the recommendation probability at the preset sequencing position as the recommended content.
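The ranking step of claim 29 amounts to a top-k selection over the recommendation probabilities; a minimal sketch with illustrative names:

```python
# Sketch of claim 29: sort the historical contents by recommendation
# probability and keep the ones at the preset ranking positions
# (interpreted here as the top k).

def select_recommended(probabilities, k):
    """probabilities: {content_id: probability}. Returns top-k content ids."""
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    return [content_id for content_id, _ in ranked[:k]]

probs = {"video_1": 0.35, "video_2": 0.91, "video_3": 0.58}
top2 = select_recommended(probs, 2)
print(top2)  # ['video_2', 'video_3']
```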
30. The apparatus of any of claims 26-28, wherein the preset candidate content is all content in a preset database; the content determination submodule is specifically configured to:
sequencing the recommendation probabilities of the historical contents corresponding to all the scenes;
determining historical contents corresponding to the recommendation probability at a preset sequencing position as interesting contents;
determining the content category of the interesting content, searching the content of the content category from the preset candidate content, and determining the content of the content category as the recommended content.
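Claim 30's category lookup could be sketched as below; the dictionary shapes and names are assumptions made for illustration:

```python
# Toy sketch of claim 30: take the top-ranked "interesting" content,
# look up its category, and recommend every candidate item of that
# category from the preset database.

def recommend_by_category(interesting_id, categories, candidate_db):
    """categories: {content_id: category};
    candidate_db: {content_id: category} for all preset candidates."""
    target = categories[interesting_id]
    return [cid for cid, cat in candidate_db.items() if cat == target]

categories = {"article_7": "sports"}
candidates = {"a": "sports", "b": "news", "c": "sports"}
recs = recommend_by_category("article_7", categories, candidates)
```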
31. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-15.
32. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-15.
33. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of claims 1-15.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311684700.2A CN117633360A (en) | 2023-12-08 | 2023-12-08 | Multi-scene-based content recommendation method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117633360A true CN117633360A (en) | 2024-03-01 |
Family
ID=90024988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311684700.2A Pending CN117633360A (en) | 2023-12-08 | 2023-12-08 | Multi-scene-based content recommendation method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117633360A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||