WO2021114936A1 - Information recommendation method, apparatus, electronic device, and computer-readable storage medium - Google Patents

Information recommendation method, apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021114936A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
information
emotional
user
sequence
Prior art date
Application number
PCT/CN2020/124765
Other languages
English (en)
French (fr)
Inventor
陈向军
刘璐
吴饶金
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Publication of WO2021114936A1 publication Critical patent/WO2021114936A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation

Definitions

  • This application relates to the field of artificial intelligence (AI) for electronic equipment, specifically related to the field of information processing, and in particular to information recommendation methods, devices, electronic equipment, and computer-readable storage media.
  • the embodiments of the present application provide information recommendation methods, devices, electronic equipment, and computer-readable storage media, which can improve the accuracy of information recommendation.
  • an information recommendation method including:
  • the recommended value of the information is determined based on the emotional characteristic information and the scene characteristic information, and the top L pieces of information with the highest recommended value are recommended to the user, where L is an integer greater than zero.
  • the emotional feature information is feature information that can reflect the user's current emotions
  • the scene feature information is feature information that can reflect the user's current environment, that is, the physical entity environment.
  • obtaining the user's current emotional characteristic information and the scene characteristic information of the environment in which the user is currently located includes:
  • M groups of user images corresponding to the user preset behavior are acquired, where M is an integer greater than zero;
  • the M groups of second images are input into a preset second neural network model for processing to obtain first scene feature information.
  • the user preset behavior is specifically the behavior of the user of the electronic device clicking a preset link and browsing related content.
  • the preset links referred to here include, but are not limited to, URL links and video links of the currently browsed page.
  • in addition to the image background data collected by the camera, the scene feature information can be obtained by using a recording sensor to synchronously record the sound background of the user's environment, and combining the sound background data with the image background data to complete the scene feature information of the environment where the user is located. In this way, information suited to the user's individual needs can be recommended.
  • by combining the scene feature information, the recommended information better matches the user's current mood and the environment the user is in, thereby improving the recommendation effect and the user experience.
  • inputting the M groups of first images into a preset first neural network model for processing to obtain emotional feature information includes:
  • the emotional positive feature sequence corresponding to the largest positive emotion value and/or the emotional negative feature sequence corresponding to the smallest negative emotion value is extracted as a set of emotional feature sequences.
  • each first image corresponds to a feature column with a dimension of 1*N
  • a group of first images corresponds to a group of feature columns with a dimension of 1*N.
  • the positive sentiment value and the negative sentiment value reflect the extreme emotional characteristics of the user.
  • a set of emotional characteristic series obtained from a set of user images corresponding to the user's preset behavior can accurately reflect the user's current emotions.
  • the inputting the M groups of first images into a preset first neural network model for processing, and obtaining emotional feature information further includes:
  • the inputting the M groups of first images into a preset first neural network model for processing, and obtaining emotional feature information includes:
  • according to the feature column farthest from the center of gravity, 2P feature columns are taken from the aggregated feature columns and, together with the feature column farthest from the center of gravity, form a set of emotional feature sequences with dimension (2P+1)*N; this set of emotional feature sequences with dimension (2P+1)*N is used as the emotional feature information, where P is an integer greater than zero.
  • the taking of 2P feature columns from the aggregated feature columns according to the feature column farthest from the center of gravity, to form a set of emotional feature sequences with dimension (2P+1)*N together with that feature column, includes the following cases:
  • if the numbers of feature columns before and after the feature column farthest from the center of gravity are both greater than or equal to P, P feature columns are taken from each side of that feature column;
  • if the number Q of feature columns before the feature column farthest from the center of gravity is less than P, 2P-Q feature columns are taken from the feature columns after it;
  • if the number R of feature columns after the feature column farthest from the center of gravity is less than P, 2P-R feature columns are taken from the feature columns before it;
  • if the feature column farthest from the center of gravity is the first feature column in the aggregated feature columns, 2P feature columns are taken from the feature columns after it;
  • if the feature column farthest from the center of gravity is the last feature column in the aggregated feature columns, 2P feature columns are taken from the feature columns before it;
  • in each case, the selected feature columns and the feature column farthest from the center of gravity form a set of emotional feature sequences with dimension (2P+1)*N.
  • acquiring a set of user images corresponding to the user preset behavior further includes: acquiring a set of audio information and/or video information corresponding to the user preset behavior;
  • the audio information and/or video information is input into a preset third neural network model for processing to obtain second scene feature information.
  • correspondingly, the determining of the recommended value of the information based on the emotional feature information and the scene feature information, and recommending the top L pieces of information with the highest recommended value to the user, includes:
  • the recommended value of the information is determined based on the emotional characteristic information, the first scene characteristic information, and the second scene characteristic information, and the top L pieces of information with the highest recommended value are recommended to the user.
  • the acquiring information associated with the emotional characteristic information includes:
  • the determining of the recommended value of the information based on the emotional feature information and the scene feature information, and recommending the top L pieces of information with the highest recommended value to the user, where L is an integer greater than zero, includes:
  • the emotional feature information and the scene feature information are spliced within a time window, the recommended value of the information is determined according to the splicing result, and the top L pieces of information with the highest recommended value are recommended to the user.
  • an information recommendation device including:
  • the feature information acquiring unit is used to acquire the emotional feature information of the user and the scene feature information of the environment where the user is located;
  • An information acquisition unit for acquiring information associated with the emotional characteristic information
  • the information recommendation unit is configured to determine the recommended value of the information based on the emotional feature information and the scene feature information, and recommend the top L pieces of information with the highest recommended value to the user, where L is an integer greater than zero.
  • an electronic device including:
  • the feature information acquiring unit is used to acquire the emotional feature information of the user and the scene feature information of the environment where the user is located;
  • An information acquisition unit for acquiring information associated with the emotional characteristic information
  • the information recommendation unit is configured to determine the recommended value of the information based on the emotional feature information and the scene feature information, and recommend the top L pieces of information with the highest recommended value to the user, where L is an integer greater than zero.
  • an embodiment of the present application provides a computer-readable storage medium, including:
  • the feature information acquiring unit is used to acquire the emotional feature information of the user and the scene feature information of the environment where the user is located;
  • An information acquisition unit for acquiring information associated with the emotional characteristic information
  • the information recommendation unit is configured to determine the recommended value of the information based on the emotional feature information and the scene feature information, and recommend the top L pieces of information with the highest recommended value to the user, where L is an integer greater than zero.
  • the embodiments of the present application provide a computer program product, which when the computer program product runs on an electronic device, causes the electronic device to execute the information recommendation method described in any one of the above-mentioned first aspects.
  • compared with the prior art, the embodiments of the present application have the following beneficial effects: by acquiring the user's current emotional feature information and the scene feature information of the environment where the user is currently located, acquiring the information associated with the emotional feature information, determining the recommended value of the information based on the emotional feature information and the scene feature information, and recommending the top L pieces of information with the highest recommended value to the user, information that meets the user's individual needs is recommended by combining the emotional feature information and the scene feature information, so that the recommended information is closer to the user's true emotional feedback, the accuracy of information recommendation is improved, and the method has strong ease of use and practicality.
  • FIG. 1 is an implementation flowchart of an information recommendation method provided by an embodiment of the present application
  • FIG. 2 is a specific implementation flowchart of a method for acquiring emotional feature information and scene feature information provided by an embodiment of the present application
  • FIG. 3 is a specific implementation flowchart of a method for obtaining emotional feature information according to a first image provided by an embodiment of the present application
  • FIG. 4 is a specific implementation flowchart of another method for obtaining emotional feature information according to a first image provided by an embodiment of the present application
  • FIG. 5 is a specific implementation flowchart of another method for obtaining emotional feature information from a first image provided by an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of an information recommendation device provided by an embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the term "if" can be construed as "when", "once", "in response to determining", or "in response to detecting", depending on the context.
  • similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be construed to mean "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]", depending on the context.
  • the information recommendation method provided by the embodiments of the present application can be applied to electronic devices.
  • the electronic device can be any device with image collection, voice collection, sensor data collection, and other functions, including but not limited to smart phones, smart home appliances, tablet computers, in-vehicle devices, wearable devices, and augmented reality (AR)/virtual reality (VR) devices.
  • the information recommendation method provided in this application can be specifically stored in an electronic device in the form of an application or software, and the electronic device implements the information recommendation method provided in this application by executing the application or software.
  • FIG. 1 shows an implementation process of an information flow recommendation method provided by an embodiment of the present application. The details are as follows:
  • step S101 the current emotional feature information of the user and the scene feature information of the environment where the user is currently located are acquired.
  • the emotional feature information is feature information that can reflect the user's emotions
  • the scene feature information is feature information that can reflect the environment where the user is located, that is, the physical entity environment.
  • the emotional feature information may include the user's facial expression, voice (such as at least one of tone information, speed information, amplitude information, and frequency information of the voice), and/or physical sign information (such as at least one of body temperature information, pulse information, breathing information, blood pressure information, blood oxygen information, and heart rate information).
  • the user’s facial image can be collected through the camera of the electronic device to determine the user’s facial expression
  • the user’s voice information can be collected through the microphone of the electronic device
  • the user's physical sign information can be collected through sensors of the electronic device, such as a temperature sensor that detects body temperature, a heart rate sensor that detects heart rate, a pulse sensor that detects pulse, a respiration sensor that detects respiratory rate, and a blood oxygen sensor that detects blood oxygen.
  • the scene characteristic information may be determined according to image information collected by a camera of the electronic device and/or environmental sounds collected by a microphone.
  • for example, after the user's operation behavior is detected and before the next operation behavior, user images are collected through the camera at an interval of a preset time, such as T seconds, to obtain a set of user images collected between the two operation behaviors, and the user's emotional feature information is obtained based on the collected user images.
  • the number of collected user images is generally greater than 2, so that the user's emotional feature information can be obtained from the collected user images.
  • for example, if the user quickly exits after clicking a news link, the user images collected by the camera may be fewer than 2, or even zero. Because too few user images are collected to obtain the user's emotional feature information, the emotional feature analysis of this operation behavior is not performed in this case.
  • in addition to the background data collected by the camera, the scene feature information can be obtained by using a recording sensor to synchronously record the sound background of the user's environment, and combining the sound background data with the image background data to complete the scene feature information of the environment where the user is located, so that information suited to the user's individual needs can be recommended.
  • FIG. 2 shows the specific implementation steps of a method for acquiring emotional feature information and scene feature information provided by an embodiment of the present application, which are described in detail as follows:
  • step S201 when a user preset behavior is detected, M groups of user images corresponding to the user preset behavior are acquired.
  • the M is an integer greater than zero.
  • the user preset behavior is specifically the behavior of the user of the electronic device clicking a preset link and browsing related content.
  • the preset links referred to here include, but are not limited to, URL links and video links of the currently browsed page.
  • each user preset behavior corresponds to M groups of user images; each group of user images is a group of user images collected through the camera at intervals of a preset time within the duration of the user preset behavior, after the user preset behavior is detected.
  • after the electronic device detects the user preset behavior, it starts the camera and collects a set of user images at a preset time interval until the next user preset behavior is detected, for example, when the user exits the current browsing page or clicks another URL link. After the electronic device detects the next user preset behavior, the user images collected from that point form a set of user images corresponding to the next user preset behavior.
  • the M groups of user images corresponding to the user preset behavior include but are not limited to a group of user images collected based on the current user preset behavior.
  • it may also include multiple sets of user images collected based on other user preset behaviors before or after the current user preset behavior.
  • M groups of user images corresponding to M user preset behaviors are acquired by means of a sliding window, and the corresponding M groups of emotional feature sequences are obtained based on the M groups of user images.
  • the M groups of emotional feature sequences may be emotional feature sequences obtained by analyzing the M user preset behaviors before the current user preset behavior. After the M groups of emotional feature sequences are obtained, the user's current emotional feature information is determined from them.
  • the number of images in a set of collected user images is preferably a value greater than or equal to 2, that is, each set of user images includes at least two user images.
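  • As a minimal illustrative sketch of the sliding-window collection described above (not part of the original disclosure), the following Python snippet keeps the image groups of the most recent M preset behaviors; `capture_image` and `behavior_ended` are hypothetical callables, and the interval and window size are placeholder values:

```python
from collections import deque
import time

def collect_image_group(capture_image, behavior_ended, interval_s=2.0):
    """Collect one group of user images for the current preset behavior,
    one frame every `interval_s` seconds, until the behavior ends (for
    example, the user leaves the page or clicks another link)."""
    group = []
    while not behavior_ended():
        group.append(capture_image())
        time.sleep(interval_s)
    return group

M = 4                     # hypothetical sliding-window size (number of behaviors)
window = deque(maxlen=M)  # the oldest image group is dropped automatically

def on_preset_behavior(capture_image, behavior_ended):
    group = collect_image_group(capture_image, behavior_ended)
    if len(group) >= 2:   # fewer than 2 images: skip emotional analysis
        window.append(group)
    return list(window)   # up to M groups of user images
```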
  • step S202 the M groups of user images are preprocessed to obtain M groups of first images and M groups of second images.
  • the first image is an image containing face data
  • the second image is an image containing background data
  • face recognition is performed on each user image in the set of user images, and the face data in the user image is extracted and cropped to obtain an image containing only the face data, namely the first image;
  • data completion is performed on the image from which the face data has been cropped to obtain an image containing only background data, namely the second image.
  • the number of the first image and the second image is the same as the number of a set of collected user images, that is, there are as many user images as there are corresponding first images and second images.
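  • One possible way to implement this preprocessing is sketched below; the patent does not prescribe a particular face-detection or data-completion algorithm, so the use of OpenCV's Haar cascade and inpainting here is only an assumption for illustration:

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def split_face_and_background(user_image):
    """Split one user image into a first image (face data only) and a
    second image (background data only, with the face region completed)."""
    gray = cv2.cvtColor(user_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, user_image                          # no face found: background only
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
    first_image = user_image[y:y + h, x:x + w].copy()    # face crop
    # "Data completion" of the cropped region; inpainting is just one option.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    second_image = cv2.inpaint(user_image, mask, 3, cv2.INPAINT_TELEA)
    return first_image, second_image
```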
  • step S203 the M groups of first images are input into a preset first neural network model for processing to obtain emotional feature information.
  • the preset first neural network model is a pre-trained convolutional neural network model.
  • the training of the convolutional neural network model can be performed on the electronic device side (end side) or on the cloud server side (cloud side); training with a large number of images carrying extreme emotion labels or diversified emotion labels enables the convolutional neural network model to correctly identify and extract the corresponding emotional feature information.
  • the cloud side generally trains the convolutional neural network regularly and synchronously updates the trained convolutional neural network model to the end side, so as to improve the accuracy with which the end side extracts the user's emotional features and to provide accurate and personalized information recommendations for end-side users.
  • it should be noted that user implicit feedback is extremely sensitive privacy data; if user privacy data were uploaded to the cloud side for analysis and processing, user privacy might be leaked and the user experience reduced.
  • therefore, the preset first neural network model and the preset second neural network model on the end side are used to obtain the user's current emotional feature information and the scene feature information of the user's current environment, so that user information such as user images does not need to be uploaded to the cloud side for feature analysis and extraction; user privacy data thus stays on the device, ensuring the security of user privacy and achieving the purpose of protecting user privacy.
  • in some embodiments, the convolutional neural network model is trained as a regression model with positive value labels and negative value labels, and the degree to which the user's emotion is positively or negatively oriented can be determined from the final output value of the regression model.
  • the convolutional neural network model outputs a value that measures the positive or negative degree of the user's emotion; if the value reflecting the positive degree of the user's emotion is taken as a positive emotion value and the value reflecting the negative degree is taken as a negative emotion value, then a positive emotion value or a negative emotion value is obtained for each first image.
  • accordingly, a set of first images corresponds to a set of positive emotion values and negative emotion values.
  • the convolutional neural network model trained as a regression model with positive and negative numerical labels is a neural network model used to identify and extract the user's extreme emotional feature information.
  • step S203 includes:
  • FIG. 3 shows the specific implementation steps of a method for obtaining emotional feature information from a first image provided by an embodiment of the present application, which are described in detail as follows:
  • in step S301, each of the first images is input into the first neural network model, the positive emotion value or negative emotion value output by the first neural network model is obtained, and a feature column with a dimension of 1*N is extracted from the last convolutional layer of the first neural network model.
  • N is an integer greater than zero.
  • the last convolutional layer of the convolutional neural network contains N neurons.
  • a feature column with a dimension of 1*N, denoted as the X part, is extracted from the N neurons: X_i = (X_i^1, X_i^2, X_i^3, X_i^4, ..., X_i^N)^T, where the subscript i indicates that the feature column corresponds to the i-th first image and the superscript runs from 1 to N.
  • in step S302, the feature column with a dimension of 1*N and the positive emotion value are combined into an emotional positive feature sequence, or the feature column with a dimension of 1*N and the negative emotion value are combined into an emotional negative feature sequence.
  • specifically, the final regression prediction value is taken as the Y part, and the final regression prediction value corresponding to the i-th first image is denoted as Y_i.
  • the emotional positive feature sequence or emotional negative feature sequence corresponding to the i-th first image can therefore be expressed as [X_i^1, X_i^2, X_i^3, X_i^4, ..., X_i^N, Y_i].
  • that is, each first image corresponds to an emotional positive feature sequence or an emotional negative feature sequence, namely a feature column with a dimension of 1*N (the X part) and a positive emotion value or a negative emotion value (the Y part).
  • in step S303, from the emotional positive feature sequences and/or emotional negative feature sequences corresponding to a set of first images, the emotional positive feature sequence corresponding to the largest positive emotion value and/or the emotional negative feature sequence corresponding to the smallest negative emotion value are extracted as a set of emotional feature sequences.
  • the extracted emotional positive feature sequence and emotional negative feature sequence together form a set of emotional feature sequences.
  • the above set of emotional feature sequences reflects the emotional feature information corresponding to the current user preset behavior.
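  • The following hedged sketch illustrates steps S301 to S303 for one group of first images; `emotion_model` is a hypothetical callable standing in for the preset first neural network model and is assumed to return the 1*N feature column and the regression value for one image:

```python
import numpy as np

def group_emotional_sequence(first_images, emotion_model):
    """Build one group's emotional feature sequence (steps S301-S303).

    For each first image, `emotion_model` returns the 1*N feature column
    X_i from the last convolutional layer and the regression output Y_i
    (positive -> positive emotion value, negative -> negative emotion value).
    Assumes at least one non-zero emotion value in the group."""
    rows, values = [], []
    for img in first_images:
        x_i, y_i = emotion_model(img)          # x_i: shape (N,), y_i: float
        rows.append(np.append(x_i, y_i))       # [X part | Y part], length N+1
        values.append(y_i)
    values = np.asarray(values)
    selected = []
    if (values > 0).any():                     # sequence with the largest positive value
        selected.append(rows[int(np.argmax(values))])
    if (values < 0).any():                     # sequence with the smallest negative value
        selected.append(rows[int(np.argmin(values))])
    return np.vstack(selected)                 # one group of emotional feature sequences
```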
  • the embodiments of the present application further provide another method for obtaining emotional feature information based on the first images.
  • FIG. 4 shows the specific implementation steps of a method for obtaining emotional feature information from a first image according to an embodiment of the present application, which are described in detail as follows:
  • in step S401, the feature columns with dimension 1*N in the M groups of emotional feature sequences are spliced to obtain an emotional feature sequence with dimension 2M*N.
  • specifically, the feature columns with dimension 1*N in the M groups of emotional feature sequences are spliced together.
  • in step S402, the positive emotion values corresponding to the M groups of emotional feature sequences are accumulated and averaged to obtain an emotional positive feature with dimension 1.
  • specifically, all the positive emotion values in the M groups of emotional feature sequences are accumulated and averaged, and the calculated average value is used as the emotional positive feature with dimension 1.
  • in step S403, the negative emotion values corresponding to the M groups of emotional feature sequences are accumulated and averaged to obtain an emotional negative feature with dimension 1.
  • specifically, all the negative emotion values in the M groups of emotional feature sequences are accumulated and averaged, and the calculated average value is used as the emotional negative feature with dimension 1.
  • in step S404, the emotional feature sequence with dimension 2M*N, the one-dimensional emotional positive feature, and the one-dimensional emotional negative feature are spliced to form an emotional feature sequence with dimension 2M*N+2, which is used as the emotional feature information.
  • the emotional feature information with dimension 2M*N+2 reflects the user's current emotions; because it can accurately reflect the user's current emotions, information associated with the user's current emotions can be searched for and recommended to the user.
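  • A possible implementation of steps S401 to S404, under the assumption that each group sequence holds the most-positive and most-negative rows produced by the previous sketch, could look as follows:

```python
import numpy as np

def splice_emotion_features(group_sequences):
    """Steps S401-S404: splice M group sequences into a 2M*N + 2 vector.

    Each element of `group_sequences` is assumed to be an array of shape
    (2, N+1): the most-positive and most-negative rows of one group, each
    a 1*N feature column followed by its emotion value."""
    x_parts, pos_vals, neg_vals = [], [], []
    for seq in group_sequences:
        for row in seq:
            x_parts.append(row[:-1])                        # the 1*N feature column
            (pos_vals if row[-1] >= 0 else neg_vals).append(row[-1])
    spliced = np.concatenate(x_parts)                       # dimension 2M*N
    pos_mean = float(np.mean(pos_vals)) if pos_vals else 0.0   # dimension 1
    neg_mean = float(np.mean(neg_vals)) if neg_vals else 0.0   # dimension 1
    return np.concatenate([spliced, [pos_mean, neg_mean]])     # dimension 2M*N + 2
```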
  • the emotional feature information acquired in FIGS. 3 and 4 is based on extreme emotional feature information, such as emotional feature information acquired when the user is extremely happy or extremely sad.
  • in other embodiments, the convolutional neural network model is a neural network model trained as a classification model with multiple emotion labels. After a set of first images is obtained, the classification model is used to perform feature extraction on each first image in the set, a feature column of a preset dimension is extracted from the last convolutional layer of the convolutional neural network model as the feature column corresponding to that first image, and the user's emotional feature information is then obtained from the feature columns corresponding to the set of first images.
  • specifically, the feature columns corresponding to the set of first images are aggregated, the aggregated feature columns are regarded as a Euclidean space system, the Euclidean center of gravity of the Euclidean space system is determined by the Euclidean formula, and the feature column farthest from the Euclidean center of gravity is then found among the aggregated feature columns.
  • this feature column differs most from the other feature columns; 2P feature columns near the farthest feature column are selected and spliced with it to obtain emotional feature information that can accurately reflect the user's current emotions.
  • it should be noted that the center of gravity of the aggregated feature columns can also be determined with other distance formulas, such as the Mahalanobis distance formula, which is not specifically limited here.
  • FIG. 5 shows the specific implementation steps of another method for obtaining emotional feature information from a first image provided by an embodiment of the present application, which are described in detail as follows:
  • each of the first images is input to the first neural network model, and a feature column with a dimension of 1*N is extracted from the last convolutional layer of the first neural network model.
  • the last convolutional layer of the convolutional neural network contains N neurons, and a feature column with a dimension of 1*N is extracted from these N neurons; the feature column is, for example, the X part (X_i^1, X_i^2, X_i^3, X_i^4, ..., X_i^N)^T.
  • step S502 all the feature columns with a dimension of 1*N extracted from a group of first images are aggregated to obtain an aggregate feature column, and the center of gravity of the aggregate feature column is solved by a preset formula.
  • the extracted feature columns with a dimension of 1*N from a group of first images are aggregated together to form the aggregated feature columns, which constitute a Euclidean space system; the center of gravity of the Euclidean space system can be solved by the Euclidean formula, and the feature column corresponding to this center of gravity is regarded as the Euclidean center of gravity of the aggregated feature columns.
  • step S503 from the aggregated feature columns, search for a feature column with the farthest distance from the center of gravity.
  • the feature column farthest from the Euclidean center of gravity is the least similar feature column in the aggregated feature columns, that is, the feature column with the lowest similarity to the other feature columns in the aggregated feature columns.
  • the user's emotional feature information is determined from the feature column farthest from the Euclidean center of gravity, which makes it possible to recommend more accurate information to the user.
  • in step S504, according to the feature column farthest from the center of gravity, 2P feature columns are selected from the aggregated feature columns and, together with the feature column farthest from the center of gravity, form a set of emotional feature sequences with a dimension of (2P+1)*N; this set of emotional feature sequences with a dimension of (2P+1)*N is used as the emotional feature information.
  • P is an integer greater than zero.
  • optionally, P is an integer not greater than 3.
  • generally, the feature column farthest from the center of gravity is the central feature column of the group of emotional feature sequences, and the 2P feature columns taken before and after it are feature columns adjacent to the central feature column.
  • if the number R of feature columns after the feature column farthest from the center of gravity is less than P, 2P-R feature columns are selected from the feature columns before the feature column farthest from the center of gravity, and together with the feature column farthest from the center of gravity they form a set of emotional feature sequences with a dimension of (2P+1)*N.
  • if the feature column farthest from the center of gravity is the first feature column in the aggregated feature columns, 2P feature columns are taken from the feature columns after it, and together with the feature column farthest from the center of gravity they form a set of emotional feature sequences with a dimension of (2P+1)*N.
  • if the feature column farthest from the center of gravity is the last feature column in the aggregated feature columns, 2P feature columns are taken from the feature columns before it, and together with the feature column farthest from the center of gravity they form a set of emotional feature sequences with a dimension of (2P+1)*N.
  • in this way, the set of emotional feature sequences with a dimension of (2P+1)*N obtained based on the foregoing user preset behavior can be used as the user's emotional feature information, so that the electronic device can recommend, based on this emotional feature information, information that better matches the user's emotions.
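  • The selection described in steps S502 to S504, including the boundary cases above, could be sketched as follows; the Euclidean mean is used here as the center of gravity, which is only one of the possible distance formulas mentioned:

```python
import numpy as np

def select_emotional_sequence(feature_columns, P=3):
    """Steps S502-S504: pick a (2P+1)*N emotional feature sequence.

    `feature_columns` is the aggregated array of shape (K, N), one 1*N
    feature column per first image (K is assumed to be at least 2P+1)."""
    cols = np.asarray(feature_columns, dtype=float)
    centroid = cols.mean(axis=0)                    # Euclidean centre of gravity
    dists = np.linalg.norm(cols - centroid, axis=1)
    c = int(np.argmax(dists))                       # index of the farthest column
    K = len(cols)
    before, after = c, K - 1 - c                    # columns before / after it
    if before >= P and after >= P:                  # P columns on each side
        lo, hi = c - P, c + P
    elif before < P:                                # includes c being the first column
        lo, hi = 0, min(K - 1, c + 2 * P - before)  # take 2P - Q columns after
    else:                                           # includes c being the last column
        lo, hi = max(0, c - (2 * P - after)), K - 1 # take 2P - R columns before
    return cols[lo:hi + 1]                          # (2P+1) rows of width N
```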
  • step S204 the M groups of second images are input into a preset second neural network model for processing to obtain first scene feature information.
  • the first scene characteristic information is characteristic information reflecting the environment where the user is located. According to the environment where the user is currently located, more accurate information can be recommended for the user.
  • for example, if the user's current emotion is determined to be sad according to the user's preset behavior, the information the user keeps browsing consists of sentimental articles, and the user's environment is a dormitory, articles of the same type can be recommended so that the user can release the depressed mood; if the user is outdoors, relatively relaxed or funny articles can be recommended instead, so as to prevent the user from becoming overly emotional while outdoors.
  • in some embodiments, a set of audio information and/or video information corresponding to the user preset behavior is also synchronously acquired, and second scene feature information is obtained based on the set of audio information and/or video information; that is, step S201 further includes:
  • the audio information and/or video information are input into a preset third neural network model for processing to obtain second scene feature information.
  • the audio information or video information is used to supplement and complete the scene feature information of the environment where the user is located, so as to further improve the accuracy of the judgment of the environment where the user is located, thereby improving the accuracy of information recommendation.
  • step S103 is specifically:
  • the recommended value of the information is determined based on the emotional characteristic information, the first scene characteristic information, and the second scene characteristic information, and the top L pieces of information with the highest recommended value are recommended to the user.
  • step S102 information associated with the emotional characteristic information is acquired.
  • the emotional feature information is feature information carrying different emotion labels, and the user's current emotion can be determined from it; therefore, information associated with the emotional feature information can be obtained according to the emotion it reflects.
  • in some embodiments, the end side searches its database, according to the emotional feature information, for information corresponding to the emotion it reflects; for example, if the user's current emotion is happy, information with a happy label can be searched for and recommended to the user.
  • in other embodiments, the end side stores the emotional feature information; after the contained user information, such as user ID, user account, and other sensitive information, is removed, emotion request parameter information containing only emotions is generated and sent to the cloud side, and the cloud side finds information associated with the user's current emotion and sends it back to the end side.
  • it should be noted that the cloud side does not conduct personalized data mining, but only mines group characteristics, such as popularity and emotion-based content analysis; the relevance between emotional feedback and information content is analyzed, and an inverted index of emotion labels is built so that as many correct results as possible can be retrieved from the full amount of information and returned to the end side.
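  • As an illustration of the cloud-side inverted index of emotion labels (the catalog structure and field names are hypothetical), a minimal sketch is:

```python
from collections import defaultdict

def build_emotion_index(catalog):
    """Cloud-side sketch: build an inverted index from emotion labels to
    information IDs, using only group-level content analysis and no
    user-specific data. `catalog` is assumed to be of the form
    {info_id: {"emotion_labels": [...], ...}}."""
    index = defaultdict(set)
    for info_id, item in catalog.items():
        for label in item["emotion_labels"]:
            index[label].add(info_id)
    return index

def lookup(index, requested_labels):
    """Return the IDs of all information items carrying any requested label."""
    hits = set()
    for label in requested_labels:
        hits |= index.get(label, set())
    return hits
```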
  • step S102 includes:
  • Step S1021 Perform data preprocessing on the emotional feature information to remove user information in the emotional feature information.
  • the data preprocessing of the emotional feature information is data desensitization processing on the emotional feature information, and the sensitive information in the emotional feature information, that is, user information, such as user account information, user ID information, etc., is removed.
  • Step S1022: Generate emotion request parameter information based on the emotional feature information from which the user information has been removed, and send the emotion request parameter information to the cloud server to instruct the cloud server to find information associated with the emotion request parameter information.
  • the emotion request parameter information is parameter information containing only emotions generated based on data desensitization of the emotion characteristic information, and is used to instruct the cloud server to search for information associated with the emotion characteristic information.
  • Step S1023 Receive information associated with the emotional request parameter information returned by the cloud server.
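  • Steps S1021 and S1022 could be sketched as follows; the field names are hypothetical, since the patent only states that sensitive user information is removed and that only emotion parameters are sent:

```python
# Hypothetical field names; the patent only states that sensitive user
# information (user ID, user account, etc.) is removed before the request.
SENSITIVE_KEYS = {"user_id", "user_account", "device_id"}

def build_emotion_request(emotional_feature_info):
    """Steps S1021-S1022: desensitize the emotional feature information and
    keep only emotion parameters for the cloud-side lookup."""
    desensitized = {k: v for k, v in emotional_feature_info.items()
                    if k not in SENSITIVE_KEYS}
    return {
        "emotion_label": desensitized.get("emotion_label"),     # e.g. "happy"
        "positive_value": desensitized.get("positive_value"),
        "negative_value": desensitized.get("negative_value"),
    }
```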
  • for the end side, since the information obtained from the cloud side is retrieved without user privacy data, directly recommending the information found by the cloud server to the user would not be very accurate and could not achieve personalized recommendation; therefore, the end side also needs to calculate the recommended value of the information sent by the cloud server and then recommend the top L pieces of information with the highest recommended value to the user.
  • step S103 the recommended value of the information is determined based on the emotional feature information and the scene feature information, and the top L pieces of information with the highest recommended value are recommended to the user.
  • L is an integer greater than zero
  • the recommended value is a recommended value obtained after the end-side decision engine scores the acquired information according to the emotional feature information and the scene feature information. For example, a value obtained by comprehensive scoring according to the degree of emotional relevance, the degree of match of the scene, etc. is the recommended value.
  • step S103 is specifically:
  • the emotion feature information and the scene feature information are spliced in a time window manner, the recommended value of the information is determined according to the splicing result, and the top L information with the highest recommended value is recommended to the user.
  • by splicing the emotional feature information and the scene feature information within a time window, the end side can recommend to the user information that matches the user's current environment and emotional changes, which improves the accuracy of information recommendation and meets the user's individual needs.
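  • A minimal sketch of step S103 under these assumptions (the decision-engine scoring function `score_fn` is hypothetical) is:

```python
import numpy as np

def recommend_top_l(candidates, emotion_vec, scene_vec, score_fn, L=10):
    """Step S103 sketch: splice the emotional and scene feature vectors of
    the same time window, score every candidate item with a hypothetical
    end-side decision-engine function `score_fn`, and return the top L."""
    spliced = np.concatenate([emotion_vec, scene_vec])       # time-window splice
    scored = [(float(score_fn(spliced, item)), item) for item in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)      # highest recommended value first
    return [item for _, item in scored[:L]]
```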
  • in summary, by acquiring the user's current emotional feature information and the scene feature information of the environment where the user is located, acquiring the information associated with the emotional feature information, determining the recommended value of the information based on the emotional feature information and the scene feature information, and recommending the top L pieces of information with the highest recommended value to the user, information that meets the user's individual needs is recommended by combining the emotional feature information and the scene feature information, making the recommended information closer to the user's real emotional feedback, improving the accuracy of information recommendation, and providing strong ease of use and practicality.
  • FIG. 6 shows a structural block diagram of an information recommendation device provided in an embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
  • the device includes:
  • the feature information acquiring unit 61 is configured to acquire the emotional feature information of the user and the scene feature information of the environment where the user is located;
  • the information acquiring unit 62 is configured to acquire information associated with the emotional characteristic information
  • the information recommendation unit 63 is configured to determine the recommended value of the information based on the emotional feature information and the scene feature information, and recommend the top L information with the highest recommended value to the user, where L is an integer greater than zero .
  • the characteristic information acquiring unit 61 includes:
  • the user image acquiring subunit is configured to acquire a group of user images corresponding to the user preset behavior when the user preset behavior is detected;
  • the image preprocessing subunit is used to preprocess the M groups of user images to obtain M groups of first images and M groups of second images, where the first images are images containing face data, and the second images are Images containing background data;
  • An emotional feature information acquisition subunit configured to input the M groups of first images into a preset first neural network model for processing to obtain emotional feature information
  • the first scene feature information acquisition subunit is configured to input the M groups of second images into a preset second neural network model for processing to obtain first scene feature information.
  • the emotional feature information acquiring subunit is specifically configured to:
  • the emotional characteristic information acquiring subunit is specifically configured to:
  • the emotional feature information acquiring subunit is further specifically configured to:
  • the emotional feature information acquiring subunit is specifically further used for:
  • according to the feature column farthest from the center of gravity, 2P feature columns are taken from the aggregated feature columns and, together with the feature column farthest from the center of gravity, form a set of emotional feature sequences with a dimension of (2P+1)*N, where P is an integer greater than zero;
  • the emotional feature information acquiring subunit is specifically further used for:
  • if the numbers of feature columns before and after the feature column farthest from the center of gravity are both greater than or equal to P, P feature columns are taken from each side, and together with the feature column farthest from the center of gravity they form a set of emotional feature sequences with a dimension of (2P+1)*N.
  • the emotional feature information acquiring subunit is specifically further used for:
  • if the feature column farthest from the center of gravity is the first feature column in the aggregated feature columns, 2P feature columns are taken from the feature columns after it, and together with the feature column farthest from the center of gravity they form a set of emotional feature sequences with a dimension of (2P+1)*N.
  • the characteristic information acquiring unit 61 further includes:
  • the audio information and/or video information are input into a preset third neural network model for processing to obtain second scene feature information.
  • the information acquiring unit 62 is specifically configured to:
  • by combining the emotional feature information and the scene feature information to determine the recommended value of the information and recommending the top L pieces of information with the highest recommended value to the user, the device recommends information that meets the user's individual needs, makes the recommended information closer to the user's real emotional feedback, improves the accuracy of information recommendation, and has strong ease of use and practicality.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • the electronic device 7 of this embodiment includes: at least one processor 70 (only one is shown in FIG. 7), a memory 71, and a computer program 72 that is stored in the memory 71 and can run on the at least one processor 70; when the processor 70 executes the computer program 72, the steps in any of the foregoing information recommendation method embodiments are implemented.
  • when the processor 70 executes the computer program 72, the functions of the units in the foregoing device embodiments, for example, the functions of the units 61 to 63 shown in FIG. 6, are realized.
  • the electronic device 7 may be a computing device such as a mobile phone, a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the electronic device 7 may include, but is not limited to, a processor 70 and a memory 71.
  • FIG. 7 is only an example of the electronic device 7 and does not constitute a limitation on the electronic device 7; it may include more or fewer components than shown in the figure, combine certain components, or have different components, and may, for example, also include input and output devices, network access devices, and so on.
  • the so-called processor 70 may be a central processing unit (CPU), and the processor 70 may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 71 may be an internal storage unit of the electronic device 7 in some embodiments, such as a hard disk or a memory of the electronic device 7. In other embodiments, the memory 71 may also be an external storage device of the electronic device 7, such as a plug-in hard disk equipped on the electronic device 7, a smart media card (SMC), and a secure digital (Secure Digital, SD) card, flash card (Flash Card), etc. Further, the memory 71 may also include both an internal storage unit of the electronic device 7 and an external storage device. The memory 71 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program.
  • the memory 71 can also be used to temporarily store data that has been output or will be output.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
  • the embodiments of the present application provide a computer program product.
  • when the computer program product runs on an electronic device, the electronic device is caused to implement the steps in the foregoing method embodiments.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the photographing device/electronic device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a mobile hard disk, a floppy disk, or a CD-ROM.
  • according to legislation and patent practice, in some jurisdictions computer-readable media do not include electrical carrier signals and telecommunications signals.
  • the disclosed apparatus/network equipment and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application is applicable to the field of artificial intelligence (AI) for electronic devices and specifically relates to the field of information processing. Provided are an information recommendation method, apparatus, electronic device, and computer-readable storage medium. The method includes: acquiring current emotional feature information of a user and scene feature information of the environment where the user is currently located; acquiring information associated with the emotional feature information; and determining a recommended value of the information based on the emotional feature information and the scene feature information, and recommending the top L pieces of information with the highest recommended value to the user. By combining the emotional feature information and the scene feature information, information that meets the user's individual needs is recommended, so that the recommended information is closer to the user's true emotional feedback, the accuracy of information recommendation is improved, and the method has strong ease of use and practicality.

Description

Information recommendation method, apparatus, electronic device, and computer-readable storage medium
This application claims priority to Chinese patent application No. 201911287188.1, entitled "Information recommendation method, apparatus, electronic device, and computer-readable storage medium", filed with the China National Intellectual Property Administration on December 14, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence (AI) for electronic devices, and specifically to the field of information processing, and in particular to an information recommendation method, apparatus, electronic device, and computer-readable storage medium.
Background
Existing information recommendation methods based on active feedback need to obtain the user's active feedback, such as clicks, comments, shares, and likes, before personalized recommendations can be made. However, most users provide very little active feedback, so it is difficult to make accurate personalized recommendations for users who give little active feedback.
In addition, active feedback can hardly portray the user's real reaction in depth. For example, a user clicks on a news article whose content the user strongly dislikes; the feedback of the click alone cannot show that the user is in fact averse to the content of that article.
Therefore, existing information recommendation methods based on active feedback cannot meet users' demands for recommendation accuracy.
Summary of the Invention
The embodiments of the present application provide an information recommendation method, apparatus, electronic device, and computer-readable storage medium, which can improve the accuracy of information recommendation.
In a first aspect, an embodiment of the present application provides an information recommendation method, including:
acquiring current emotional feature information of a user and scene feature information of the environment where the user is currently located;
acquiring information associated with the emotional feature information;
determining a recommended value of the information based on the emotional feature information and the scene feature information, and recommending the top L pieces of information with the highest recommended value to the user, where L is an integer greater than zero.
It should be understood that the emotional feature information is feature information that can reflect the user's current emotions, and the scene feature information is feature information that can reflect the environment where the user is currently located, that is, the physical entity environment; by combining the two, information that better meets the user's individual needs can be recommended, improving the accuracy of information recommendation.
In a second possible implementation of the first aspect, acquiring the current emotional feature information of the user and the scene feature information of the environment where the user is currently located includes:
when a user preset behavior is detected, acquiring M groups of user images corresponding to the user preset behavior, where M is an integer greater than zero;
preprocessing the M groups of user images to obtain M groups of first images and M groups of second images, where the first images are images containing face data and the second images are images containing background data;
inputting the M groups of first images into a preset first neural network model for processing to obtain emotional feature information;
inputting the M groups of second images into a preset second neural network model for processing to obtain first scene feature information.
Exemplarily, the user preset behavior is specifically the behavior of the user of the electronic device clicking a preset link and browsing related content. The preset link referred to here includes, but is not limited to, a URL link of the currently browsed page and a video link.
It should be understood that, in addition to being obtained from the background data collected by the camera, the scene feature information can also be obtained by using a recording sensor to synchronously record the sound background of the user's environment and combining the sound background data with the image background data to complete the scene feature information of the environment where the user is located, so as to recommend information suited to the user's individual needs.
It should be understood that, by combining the scene feature information of the environment where the user is located, the recommended information better matches the user's current emotions and environment, which improves the recommendation effect and the user experience.
In a third possible implementation of the first aspect, inputting the M groups of first images into the preset first neural network model for processing to obtain the emotional feature information includes:
inputting each group of first images of the M groups of first images into the preset first neural network model for processing to obtain M groups of emotional feature sequences, splicing the M groups of emotional feature sequences, and using the spliced emotional feature sequence as the emotional feature information.
Exemplarily, each group of first images of the M groups of first images is input into the preset first neural network model for processing as follows:
each first image is input into the first neural network model to obtain a positive emotion value or a negative emotion value output by the first neural network model, and a feature column with a dimension of 1*N is extracted from the last convolutional layer of the first neural network model, where N is an integer greater than zero;
the feature column with a dimension of 1*N and the positive emotion value are combined into an emotional positive feature sequence, or the feature column with a dimension of 1*N and the negative emotion value are combined into an emotional negative feature sequence;
from the emotional positive feature sequences and/or emotional negative feature sequences corresponding to a group of first images, the emotional positive feature sequence corresponding to the largest positive emotion value and/or the emotional negative feature sequence corresponding to the smallest negative emotion value are extracted as a set of emotional feature sequences.
It should be understood that each first image corresponds to one feature column with a dimension of 1*N, and a group of first images corresponds to a group of feature columns with a dimension of 1*N; the positive emotion value and the negative emotion value reflect the user's extreme emotional characteristics, and the set of emotional feature sequences obtained from a group of user images corresponding to the user preset behavior can accurately reflect the user's current emotions.
In a fourth possible implementation of the first aspect, inputting the M groups of first images into the preset first neural network model for processing to obtain the emotional feature information further includes:
splicing the feature columns with a dimension of 1*N in the M groups of emotional feature sequences to obtain an emotional feature sequence with a dimension of 2M*N;
accumulating and averaging the positive emotion values corresponding to the M groups of emotional feature sequences to obtain an emotional positive feature with a dimension of 1;
accumulating and averaging the negative emotion values corresponding to the M groups of emotional feature sequences to obtain an emotional negative feature with a dimension of 1;
splicing the emotional feature sequence with a dimension of 2M*N, the one-dimensional emotional positive feature, and the one-dimensional emotional negative feature to obtain an emotional feature sequence with a dimension of 2M*N+2, and using the emotional feature sequence with a dimension of 2M*N+2 as the emotional feature information.
In a fifth possible implementation of the first aspect, inputting the M groups of first images into the preset first neural network model for processing to obtain the emotional feature information includes:
inputting each first image into the first neural network model, and extracting a feature column with a dimension of 1*N from a convolutional layer of the first neural network model, where N is an integer greater than zero;
aggregating all the feature columns with a dimension of 1*N extracted from a group of first images to obtain aggregated feature columns, and solving the center of gravity of the aggregated feature columns by a preset formula;
finding, from the aggregated feature columns, the feature column farthest from the center of gravity;
according to the feature column farthest from the center of gravity, taking 2P feature columns from the aggregated feature columns to form, together with the feature column farthest from the center of gravity, a set of emotional feature sequences with a dimension of (2P+1)*N, and using this set of emotional feature sequences with a dimension of (2P+1)*N as the emotional feature information, where P is an integer greater than zero.
在第一方面的第六种可能实现方式中,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
如果在与所述重心距离最远的一个特征列的前后的特征列的个数均大于或等于P,则从在与所述重心距离最远的一个特征列的前后特征列中各取P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在第一方面的第七种可能实现方式中,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
如果在与所述重心距离最远的一个特征列之前的特征列的个数Q小于P,则从在与所述重心距离最远的一个特征列之后的特征列中取2P-Q个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在第一方面的第八种可能实现方式中,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
如果在与所述重心距离最远的一个特征列之后的特征列的个数R小于P,则从在与所述重心距离最远的一个特征列之前的特征列中取2P-R个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在第一方面的第九种可能实现方式中,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
如果与所述重心距离最远的一个特征列为所述聚合特征列中的第一个特征列,则从在与所述重心距离最远的一个特征列之后的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在第一方面的第十种可能实现方式中,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
如果与所述重心距离最远的一个特征列为所述聚合特征列中的最后一个特征列,则从在与所述重心距离最远的一个特征列之前的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
示例性的，在检测到用户预设行为、获取所述用户预设行为对应的一组用户图像时，还包括：
获取所述用户预设行为对应的一组音频信息和/或视频信息。
将所述音频信息和/或视频信息输入预设的第三神经网络模型进行处理,获得第二场景特征信息。
相应的，所述基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值，将推荐值最高的前L个信息推荐给所述用户，包括：
基于所述情绪特征信息、所述第一场景特征信息和所述第二场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户。
在第一方面的第十一种可能实现方式中,所述获取与所述情绪特征信息相关联的信息包括:
对所述情绪特征信息进行数据预处理,去除所述情绪特征信息中的用户信息;
根据去除用户信息后的情绪特征信息,生成情绪请求参数信息,并将所述情绪请求参数信息发送至云服务器,以指示所述云服务器查找与所述情绪请求参数信息相关联的信息;
接收所述云服务器返回的与所述情绪请求参数信息相关联的信息。
在第一方面的第十二种可能实现方式中,所述基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,所述L为大于零的整数包括:
按照时间窗口的方式将所述情绪特征信息和所述场景特征信息拼接，根据拼接结果确定所述信息的推荐值，并将推荐值最高的前L个信息推荐给所述用户。
第二方面,本申请实施例提供了一种信息推荐装置,包括:
特征信息获取单元,用于获取用户的情绪特征信息和用户所处环境的场景特征信息;
信息获取单元,用于获取与所述情绪特征信息相关联的信息;
信息推荐单元,用于基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,所述L为大于零的整数。
第三方面，本申请实施例提供了一种电子设备，包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序，所述处理器执行所述计算机程序时实现上述第一方面中任一项所述的信息推荐方法。
第四方面，本申请实施例提供了一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，所述计算机程序被处理器执行时实现上述第一方面中任一项所述的信息推荐方法。
第五方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行上述第一方面中任一项所述的信息推荐方法。
可以理解的是,上述第二方面至第五方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
本申请实施例与现有技术相比存在的有益效果是:通过获取用户当前的情绪特征信息和所述用户当前所处环境的场景特征信息;获取与所述情绪特征信息相关联的信息;基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,通过结合情绪特征信息和场景特征信息为用户推荐满足其个性化需求的信息,使得所推荐的信息与用户的真实情绪反馈更为接近,提高信息推荐的精准度,具有较强的易用性和实用性。
附图说明
为了更清楚地说明本申请实施例中的技术方案，下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍，显而易见地，下面描述中的附图仅仅是本申请的一些实施例，对于本领域普通技术人员来讲，在不付出创造性劳动的前提下，还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种信息推荐方法的实现流程图;
图2是本申请实施例提供的一种获取情绪特征信息和场景特征信息的方法的具体实现流程图;
图3是本申请实施例提供的一种根据第一图像获取情绪特征信息的方法的具体实现流程图;
图4是本申请实施例提供的另一种根据第一图像获取情绪特征信息的方法的具体实现流程图;
图5是本申请实施例提供的另一种根据第一图像获取情绪特征信息的方法的具体实现流程图;
图6是本申请实施例提供的一种信息推荐装置的结构示意图;
图7是本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地, 短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请实施例提供的信息推荐方法可以应用于电子设备。该电子设备可以是任意具有图像采集、语音采集、传感器数据采集等功能的设备,包括但不限于智能手机、智能家电、平板电脑、车载设备、可穿戴设备以及增强现实(Augmented Reality,AR)/虚拟现实设备(virtual reality,VR)等。本申请提供的信息推荐方法具体可以以应用程序或软件的形式存储于电子设备,电子设备通过执行该应用程序或软件,实现本申请提供的信息推荐方法。
请参考图1，图1示出了本申请实施例提供的一种信息推荐方法的实现流程，详述如下：
在步骤S101中,获取用户当前的情绪特征信息和所述用户当前所处环境的场景特征信息。
在本申请实施例中,情绪特征信息为能够反馈用户情绪的特征信息,场景特征信息为能够反馈用户所处环境也即物理实体环境的特征信息。
需要说明的是,所述情绪特征信息可以包括用户的面部表情、声音(如声音的音调信息、速度信息、振幅信息和频率信息等中的至少一种)和/或体征(如体温信息、脉搏信息、呼吸信息、血压信息、血氧信息和心率信息等中的至少一种)等信息。
示例性的,可以通过电子设备的摄像头采集用户的人脸图像以确定所述用户的面部表情,可以通过电子设备的麦克风采集用户的声音信息,可以通过电子设备探测体温的温度传感器、探测心率的心率传感器、探测脉搏的脉搏传感器、探测呼吸频率的呼吸传感器、探测血氧的血液传感器等传感器采集用户的体征信息。
示例性的,所述场景特征信息可以根据电子设备的摄像头采集的图像信息和/或麦克风采集的环境音确定。
在本申请的一些实施例中，在检测到用户的一个操作行为（比如用户点击某一新闻的链接）时，由于用户的面部特征能够准确地反馈用户的情绪，通过摄像头开始采集用户图像，在用户的下一个操作行为之前，间隔预设时间（比如T秒）采集一次用户图像，将会得到一组在两个操作行为之间采集的用户图像，再根据所采集的用户图像来获取用户的情绪特征信息。
需要说明的是，所采集的用户图像的个数一般大于2，以便于能够根据所采集的用户图像获取用户的情绪特征信息。一般情况下，比如用户点击某一新闻的链接后又迅速退出，这时摄像头所采集的用户图像有可能少于2张甚至为零，因所采集的用户图像过少而无法获取用户的情绪特征信息，此时，不对此次的操作行为进行情绪特征分析。
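下面给出一个示意性的采集流程草图（并非本申请实施例限定的实现，其中的 camera 对象及其 capture() 方法均为便于说明而假设的接口），用于说明“在两个操作行为之间每隔预设时间采集一帧用户图像、图像不足时放弃本次情绪分析”的思路：

```python
import time

def collect_user_images(camera, stop_event, interval_s=2.0, min_images=2):
    """在两个用户操作行为之间，每隔 interval_s 秒采集一帧用户图像。

    camera: 假设提供 capture() 方法返回一帧图像的相机对象；
    stop_event: threading.Event，检测到下一个用户操作行为时由外部置位。
    """
    images = []
    while not stop_event.is_set():
        images.append(camera.capture())   # 采集一帧用户图像
        stop_event.wait(interval_s)       # 间隔预设时间 T 秒
    # 所采集的图像少于 min_images 张时，放弃本次情绪特征分析
    if len(images) < min_images:
        return None
    return images
```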
在本申请的一些实施例中,场景特征信息除了可以通过摄像头采集的背景数据来获取,还可以通过录音传感器同步记录用户所处环境的声音背景,结合声音背景数据和图像背景数据完善用户所处环境的场景特征信息,从而实现为用户推荐适合其个性化需求的信息。
请参考图2,图2示出了本申请实施例提供的一种获取情绪特征信息和场景特征信息的方法的具体实现步骤,详述如下:
在步骤S201中,在检测到用户预设行为时,获取所述用户预设行为对应的M组用户图像。
在本申请实施例中,所述M为大于零的整数。
作为示例而非限定,用户预设行为具体为电子设备用户点击预设链接并浏览相关内容的行为。这里所指的预设链接包括但不限于当前浏览页面的URL链接、视频链接。每个用户预设行为对应有M组用户图像,每一组用户图像为在检测到用户预设行为后,通过摄像头在该用户预设行为持续时间之内,间隔预设时间采集的一组用户图像。
比如，电子设备在检测到用户预设行为之后，直至检测到下一用户预设行为之前（比如用户退出当前浏览页面或者点击另一URL链接时），启动摄像头并间隔预设时间采集一组用户图像。电子设备在检测到下一用户预设行为后，所采集的用户图像为该下一用户预设行为对应的一组用户图像。
需要说明的是,用户预设行为对应的M组用户图像,包括但不限于基于当前的用户预设行为所采集的一组用户图像。比如还可以包括基于在当前的用户预设行为之前或之后的其他用户预设行为所采集的多组用户图像。
比如,在本申请的一个具体实施例中,通过滑动窗口的方式获取M个用户预设行为对应的M组用户图像,并基于该M组用户图像获取对应的M组情绪化特征数列。
可以理解的是,M组情绪化特征数列可以为基于在当前用户预设行为之前的M个用户预设行为分析得到的情绪化特征数列。在获取M组情绪化特征数列之后,根据该M组情绪化特征数列确定用户当前的情绪特征信息。
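维护最近M个用户预设行为对应的情绪化特征数列，可以用一个简单的滑动窗口来示意（假设性代码，窗口大小 M=5 仅为举例，并非本申请实施例限定的取值）：

```python
from collections import deque

M = 5  # 假设滑动窗口保留最近 M 个用户预设行为

# 每检测到一个用户预设行为，就把该行为对应的一组情绪化特征数列压入窗口，
# 窗口始终只保留最近 M 组，用于确定用户当前的情绪特征信息
recent_feature_groups = deque(maxlen=M)

def on_user_behavior(feature_group):
    recent_feature_groups.append(feature_group)
    if len(recent_feature_groups) == M:
        return list(recent_feature_groups)  # 取出 M 组情绪化特征数列做后续拼接
    return None
```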
可以理解的是,为保证信息推荐的精准度,所采集的一组用户图像中的图像个数优选为大于或等于2的数值,也即每一组用户图像中包含至少两个用户图像。
在步骤S202中,对所述M组用户图像进行预处理得到M组第一图像和M组第二图像。
在本申请实施例中,第一图像为包含人脸数据的图像,第二图像为包含背景数据的图像。
具体地,在采集到一组用户图像后,对该组用户图像中的每个用户图像进行人脸识别,提取所述用户图像中的人脸数据并进行裁剪得到仅包含人脸数据的图像即第一图像,再对裁剪人脸数据后的图像进行数据补全,得到仅包含背景数据的图像即第二图像。
可以理解的是,第一图像和第二图像的个数与所采集的一组用户图像的数量相同,即有多少个用户图像,对应就有多少个第一图像和第二图像。
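作为一种可能的预处理草图（仅为示意，人脸检测器与数据补全方式均为假设，并非本申请实施例限定的做法），可以借助 OpenCV 把一张用户图像拆分为第一图像和第二图像：

```python
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def split_face_and_background(user_image):
    """把一张用户图像拆成仅含人脸数据的第一图像和仅含背景数据的第二图像。"""
    gray = cv2.cvtColor(user_image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None, user_image            # 未检测到人脸时只保留背景
    x, y, w, h = faces[0]
    first_image = user_image[y:y + h, x:x + w].copy()   # 第一图像：裁剪出的人脸区域

    # 第二图像：把人脸区域掩掉后做简单的图像修复（数据补全），只保留背景数据
    mask = np.zeros(user_image.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    second_image = cv2.inpaint(user_image, mask, 3, cv2.INPAINT_TELEA)
    return first_image, second_image
```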
在步骤S203中,将所述M组第一图像输入预设的第一神经网络模型进行处理,获得情绪特征信息。
在本申请实施例中,预设的第一神经网络模型为预先训练好的卷积神经网络模型。
作为示例而非限定,对卷积神经网络模型的训练,可以在电子设备这一侧(端侧)进行,也可以在云服务器这一侧(云侧)进行,通过大量的包含极端情绪标签或者多样化情绪标签的图像的训练,使得该卷积神经网络模型能够正确识别并提取相应的情绪特征信息。
需要说明的是,当预设的第一神经网络模型设置在端侧时,为了提高卷积神经网络训练的效率和特征提取的准确率,一般由云侧定期对卷积神经网络进行训练,并将训练好的卷积神经网络模型同步更新至端侧,以提高端侧对用户的情绪特征提取的准确率,从而实现为端侧的用户提供精准且符合其个性化需求的信息推荐。
还需要说明的是，由于用户隐式反馈是极其敏感的隐私数据，如果将用户隐私数据上传到云侧进行解析处理，将有可能导致用户隐私泄露，降低了用户的体验，而在本申请中，通过将预设的第一神经网络模型和预设的第二神经网络模型均设置在端侧以获取用户当前的情绪特征信息和用户当前所处环境的场景特征信息，使得从端侧获取的用户信息比如用户图像并不需要上传到云侧进行特征分析及提取，从而使得用户的隐私数据不需要上传到云侧，保证了用户隐私的安全性，达到了保护用户隐私的目的。
在本申请的一个具体实施例中,该卷积神经网络模型训练有一个带有正向化数值标签和负向化数值标签的回归模型,可以根据回归模型最终输出的数值确定用户情绪的正负向化程度。
比如,在将第一图像输入至该卷积神经网络模型后,该卷积神经网络模型将会输出一个确定用户情绪的正负向化程度的数值,如果将反映用户情绪正向化程度的数值设定为正向情绪数值,将反映用户情绪负向化程度的数值设定为负向情绪数值,那么根据每一个第一图像将会对应得到一个正向情绪数值或一个负向情绪数值,相应的一组第一图像对应得到一组正向情绪数值和负向情绪数值。
需要说明的是,正向情绪数值越大,说明用户的情绪越趋向于正向的情绪比如高兴、开心等,负向情绪数值越大,说明用户的情绪越趋向于负向的情绪比如痛苦、伤心等,也即该训练有一个带有正向化数值标签和负向化数值标签的回归模型的卷积神经网络模型为用于识别并提取用户极端化的情绪特征信息的神经网络模型。
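下面给出一个用 PyTorch 写出的示意性网络结构草图（并非本申请实施例公开的具体模型，层数、通道数以及 N=128 等均为便于说明而假设的取值），用于说明“最后一层卷积层含N个通道、从中得到1*N特征列，同时回归出一个情绪数值”的思路：

```python
import torch
import torch.nn as nn

class EmotionRegressionCNN(nn.Module):
    def __init__(self, n_features=128):        # N = 128 为假设值
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, n_features, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # 最后一层卷积层输出 N 个通道
        )
        self.regressor = nn.Linear(n_features, 1)  # 回归出正向/负向情绪数值

    def forward(self, face_image):
        feature = self.backbone(face_image).flatten(1)  # 维度为 1*N 的特征列（X部分）
        score = self.regressor(feature)                 # 回归预测数值（Y部分）
        return feature, score
```

使用时，feature, score = model(x) 中的 x 为形状 (1, 3, H, W) 的第一图像张量，feature 即 X 部分，score 即 Y 部分。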
作为示例而非限定,步骤S203包括:
将所述M组第一图像中的每一组第一图像输入预设的第一神经网络模型进行处理,获得M组情绪化特征数列,对所述M组情绪化特征数列进行拼接处理,将拼接处理后的情绪化特征数列作为所述情绪特征信息。
在本申请实施例中,当用户情绪处于极端化时,为了提高情绪特征信息的准确率,获取至少两组情绪化特征数列进行拼接处理,并将拼接处理后的情绪特征数列作为情绪特征信息。
请参考图3,图3示出了本申请实施例提供的一种根据第一图像获取情绪特征信息的方法的具体实现步骤,详述如下:
在步骤S301中，将每个所述第一图像输入至所述第一神经网络模型，获得所述第一神经网络模型输出的正向情绪数值或负向情绪数值，并从所述第一神经网络模型的卷积层中提取一个维度为1*N的特征列。
在本申请实施例中,N为大于零的整数,在将每一组第一图像输入卷积神经网络模型进行回归数值预测时,每个第一图像对应得到一个正向情绪数值或一个负向情绪数值,卷积神经网络模型最终输出的正向情绪数值或负向情绪数值作为最终的回归预测数值。
具体的，卷积神经网络的最后一层卷积层包含有N个神经元，在提取用户的情绪特征信息时，从该N个神经元中抽取出一个维度为1*N的特征列比如X部分 $(X_{i1},X_{i2},X_{i3},X_{i4},\dots,X_{in})^{T}$。其中i表示第i张第一图像对应的特征列，n=N。
在步骤S302中,将所述维度为1*N的特征列和所述正向情绪数值组合为情绪正向化特征数列,或将所述维度为1*N的特征列和所述负向情绪数值组合为情绪负向化特征数列。
在本申请实施例中，将最终的回归预测数值作为Y部分，第i张第一图像对应的最终的回归预测数值表示为 $Y_i$，这时，情绪正向化特征数列或情绪负向化特征数列可以表示为 $[X_{i1},X_{i2},X_{i3},X_{i4},\dots,X_{in} \mid Y_i]$。
作为示例而非限定,一组数量为i的第一图像对应的特征数列(该特征数列由不同数值的情绪正向化特征数列和情绪负向化特征数列组成)具体如下:
$$\begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1n} & | & Y_{1} \\ X_{21} & X_{22} & \cdots & X_{2n} & | & Y_{2} \\ \vdots & \vdots & & \vdots & | & \vdots \\ X_{i1} & X_{i2} & \cdots & X_{in} & | & Y_{i} \end{bmatrix}$$
可以理解的是,每个第一图像都对应有一个情绪正向化特征数列或一个情绪负向化特征数列,即每个第一图像都对应有一个维度为1*N的特征(X部分)和一个正向情绪数值或负向情绪数值(Y部分)。
在步骤S303中,从一组第一图像对应的情绪正向化特征数列和/或情绪负向化特征数列中,抽取正向情绪数值最大对应的情绪正向化特征数列和/或负向情绪数值最小对应的情绪负向化特征数列作为一组情绪化特征数列。
在本申请实施例中,从一组用户图像对应的特征数列即一组第一图像对应的特征数列中,抽取正向情绪数值最大的一个情绪正向化特征数列和一个负向情绪数值最小的一个情绪负向化特征数列作为一组情绪化特征数列,具体如下:
$$\begin{bmatrix} X_{p1} & X_{p2} & \cdots & X_{pn} & | & Y_{p} \\ X_{q1} & X_{q2} & \cdots & X_{qn} & | & Y_{q} \end{bmatrix}$$
其中 $Y_{p}$ 为该组第一图像中最大的正向情绪数值，$Y_{q}$ 为该组第一图像中最小的负向情绪数值。
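该抽取过程可以用如下示意性代码表达（假设每个元素为“(特征列, 情绪数值, 是否正向)”三元组，仅为说明性草图，并非本申请实施例限定的数据结构）：

```python
def pick_extreme_rows(rows):
    """rows: [(feature_1xN, value, is_positive), ...]，对应一组第一图像。

    返回正向情绪数值最大的情绪正向化特征数列和
    负向情绪数值最小的情绪负向化特征数列，组成一组情绪化特征数列。
    """
    positives = [r for r in rows if r[2]]
    negatives = [r for r in rows if not r[2]]
    group = []
    if positives:
        group.append(max(positives, key=lambda r: r[1]))  # 正向情绪数值最大
    if negatives:
        group.append(min(negatives, key=lambda r: r[1]))  # 负向情绪数值最小
    return group
```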
需要说明的是,上述一组情绪化特征数列为反映当前用户预设行为对应的情绪特 征信息,然而,为了进一步精确的判断用户的情绪,获取更为精准的情绪特征信息,本申请实施例提供了另一种根据第一图像获取情绪特征信息的方法。
请参考图4，图4示出了本申请实施例提供的另一种根据第一图像获取情绪特征信息的方法的具体实现步骤，详述如下：
在步骤S401中,将所述M组情绪化特征数列的维度为1*N的特征列进行拼接,得到一个维度为2M*N的情绪化特征数列。
在本申请实施例中,将M组情绪化特征数列中的维度为1*N的特征列进行拼接,得到如下的情绪化特征数列:
$$\begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1n} \\ X_{21} & X_{22} & \cdots & X_{2n} \\ \vdots & \vdots & & \vdots \\ X_{(2m)1} & X_{(2m)2} & \cdots & X_{(2m)n} \end{bmatrix}$$
需要说明的是,在上述维度为2M*N的情绪化特征数列中,m=M。
在步骤S402中,将所述M组情绪化特征数列对应的正向情绪数值累加求平均值,得到一个维度为1的情绪正向化特征。
在本申请实施例中,将M组情绪化特征数列中的所有正向情绪数值累加求平均值,将计算得到的平均值作为一个维度为1的情绪正向化特征。
其中,计算得到的平均值具体如下:
$$\bar{Y}^{+} = \frac{1}{M}\sum_{j=1}^{M} Y_{j}^{+}$$
在步骤S403中,将所述M组情绪化特征数列对应的负向情绪数值累加求平均值,得到一个维度为1的情绪负向化特征。
在本申请实施例中,将M组情绪化特征数列中的所有负向情绪数值累加求平均值,将计算得到的平均值作为一个维度为1的情绪负向化特征。
其中,计算得到的平均值具体如下:
$$\bar{Y}^{-} = \frac{1}{M}\sum_{j=1}^{M} Y_{j}^{-}$$
在步骤S404中，将所述维度为2M*N的情绪化特征数列、所述1维的情绪正向化特征和所述1维的情绪负向化特征进行拼接，组成一个维度为2M*N+2的情绪化特征数列，并将所述维度为2M*N+2的情绪化特征数列作为所述情绪特征信息。
在本申请实施例中,维度为2M*N+2的情绪特征信息具体表现如下:
$$\left[\, X_{11}, X_{12}, \cdots, X_{1n}, X_{21}, \cdots, X_{(2m)n}, \;\bar{Y}^{+}, \;\bar{Y}^{-} \,\right]$$
需要说明的是,维度为2M*N+2的情绪特征信息可以理解为反映用户当前情绪的情绪特征信息,通过该情绪特征信息能够精确地反映用户当前的情绪,从而可以根据用户当前的情绪查找到相关联的信息为用户推荐。
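上述拼接过程可以用 numpy 示意如下（假设性草图，groups 的数据结构为便于说明而设，并非本申请实施例限定的形式）：

```python
import numpy as np

def build_emotion_feature(groups):
    """groups: 长度为 M 的列表，每个元素形如
    {"features": 形状为 (2, N) 的数组, "pos": 正向情绪数值, "neg": 负向情绪数值}。
    返回维度为 2M*N+2 的情绪化特征数列。"""
    stacked = np.vstack([g["features"] for g in groups])            # 维度为 2M*N
    pos_mean = np.mean([g["pos"] for g in groups])                   # 1 维情绪正向化特征
    neg_mean = np.mean([g["neg"] for g in groups])                   # 1 维情绪负向化特征
    return np.concatenate([stacked.ravel(), [pos_mean, neg_mean]])   # 2M*N+2
```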
还需要说明的是,图3和图4所获取的情绪特征信息为基于极端化的情绪特征信息,比如在用户极度高兴或极度伤心时获取的情绪特征信息。
在本申请的另一个具体实施例中,该卷积神经网络模型为训练有一个带有多种情绪化标签的分类模型的神经网络模型,在获得一组第一图像后,通过该分类模型对该一组第一图像中的每个第一图像进行特征提取,从该卷积神经网络模型的最后一层卷积层提取一个预设维度的特征列作为第一图像对应的特征列,再根据所提取一组第一图像对应的特征列,得到用户的情绪特征信息。
具体的，根据所提取的一组第一图像对应的特征列得到用户的情绪特征信息的过程为：聚合该一组第一图像对应的特征列，将聚合后的特征列视为欧式空间系，通过欧式公式确定该欧式空间系的欧式重心，再从聚合后的特征列中查找与该欧式重心距离最远的一个特征列，该特征列与其他特征列存在较大的区别，再在该最远的特征列附近选取2P个特征列进行拼接，以得到能够准确反映用户当前情绪的情绪特征信息。
需要说明的是，聚合后的特征列的重心还可以通过其他类似的距离公式（比如马氏距离公式）来确定，这里不做具体限定。
请参考图5,图5示出了本申请实施例提供的另一种根据第一图像获取情绪特征信息的方法的具体实现步骤,详述如下:
在步骤S501中,将每个所述第一图像输入至所述第一神经网络模型,从所述第一神经网络模型的最后一层卷积层中提取一个维度为1*N的特征列。
在本申请实施例中，卷积神经网络的最后一层卷积层包含有N个神经元，在提取用户的情绪特征信息时，从该N个神经元中抽取出一个维度为1*N的特征列比如X部分 $(X_{i1},X_{i2},X_{i3},X_{i4},\dots,X_{in})^{T}$。其中i表示第i张第一图像对应的特征列，n=N。
在步骤S502中,聚合从一组第一图像中提取的所有维度为1*N的特征列,得到聚合特征列,并通过预设公式求解所述聚合特征列的重心。
在本申请实施例中,将所提取的一组维度为1*N的特征列聚合在一起形成聚合特征列,该聚合特征列即构成欧式空间系,通过欧式公式可以求解该欧式空间系的重心,将该重心对应所在的特征列视为该聚合特征列的欧式重心。
在步骤S503中,从所述聚合特征列中,查找与所述重心距离最远的一个特征列。
在本申请实施例中,距离欧式重心最远的一个特征列,实际为该聚合特征列最不相似的一个特征列,即该特征列与聚合特征列中的其他特征列的相似度最低。
可以理解的是,在波动变化的情绪中,最不相同的一个情绪其实能够更好地反映用户的情绪变化,通过距离欧式重心最远的一个特征列来确定用户的情绪特征信息,可以实现为用户推荐更为精准的信息。
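求聚合特征列的重心并查找与重心距离最远的特征列，可以示意为如下草图（以各特征列的算术均值作为欧式重心，仅为一种可能的写法）：

```python
import numpy as np

def farthest_from_centroid(feature_columns):
    """feature_columns: 形状为 (K, N) 的数组，K 为一组第一图像的数量。
    返回与欧式重心距离最远的特征列的下标。"""
    centroid = feature_columns.mean(axis=0)                          # 欧式空间系的重心
    distances = np.linalg.norm(feature_columns - centroid, axis=1)   # 各特征列到重心的欧式距离
    return int(np.argmax(distances))                                 # 距离最远、最不相似的特征列
```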
在步骤S504中,根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列,并将所述维度为(2P+1)*N的一组情绪化特征数列作为所述情绪特征信息。
在本申请实施例中,P为大于零的整数。优选的,P为不大于3的整数。
为进一步提高推荐的精准度，在确定距离欧式重心最远的一个特征列之后，将获取在该特征列附近的2P个特征列，并对2P+1个特征列拼接，以准确获取用户的情绪特征信息，从而提高为用户推荐信息的精准度。
在本申请的一个具体实施例中,如果在与所述重心距离最远的一个特征列的前后的特征列的个数均大于或等于P,则从在与所述重心距离最远的一个特征列的前后特征列中各取P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
需要说明的是,在维度为(2P+1)*N的一组情绪化特征数列中,与所述重心距离最远的一个特征列为该组情绪化特征数列中的中心特征列,与该中心特征列前后2P个特征列为与该中心特征列相邻的特征列。
在本申请的另一个具体实施例中,如果在与所述重心距离最远的一个特征列之前的特征列的个数Q小于P,则从在与所述重心距离最远的一个特征列之后的特征列中取2P-Q个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在本申请的另一个具体实施例中,如果在与所述重心距离最远的一个特征列之后的特征列的个数R小于P,则从在与所述重心距离最远的一个特征列之前的特征列中取2P-R个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在本申请的另一个具体实施例中,如果与所述重心距离最远的一个特征列为所述聚合特征列中的第一个特征列,则从在与所述重心距离最远的一个特征列之后的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在本申请的另一个具体实施例中,如果与所述重心距离最远的一个特征列为所述聚合特征列中的最后一个特征列,则从在与所述重心距离最远的一个特征列之前的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1) *N的一组情绪化特征数列。
在本申请实施例中,基于上一个用户预设行为得到的维度为(2P+1)*N的一组情绪化特征数列可以作为用户的情绪特征信息,使得电子设备可以根据该情绪特征信息为用户推荐更为符合其情绪的信息。
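上文描述的各种边界情形可以统一为一个取窗口的示意函数（假设性草图，要求一组特征列的个数不少于2P+1，并非本申请实施例限定的实现）：

```python
def take_window(feature_columns, center, p):
    """feature_columns: 形状为 (K, N) 的数组；center: 距离重心最远的特征列下标。
    返回维度为 (2P+1)*N 的一组情绪化特征数列（需满足 K >= 2P+1）。"""
    k = len(feature_columns)
    start = center - p
    # 中心特征列之前的特征列不足 P 个时向后多取，之后不足时向前多取
    if start < 0:
        start = 0
    if start + 2 * p + 1 > k:
        start = k - (2 * p + 1)
    return feature_columns[start:start + 2 * p + 1]
```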
在步骤S204中,将所述M组第二图像输入预设的第二神经网络模型进行处理,获得第一场景特征信息。
在本申请实施例中,第一场景特征信息为反映用户所处环境的特征信息,根据用户当前所处的环境,可以实现为用户推荐更为精准的信息。
比如,当根据用户预设行为确定用户当前情绪为悲伤,且用户持续浏览的信息为伤感类的文章,如果用户所处环境为寝室,则可以为用户推荐同一类型的文章,以使得用户能够释放其压抑的心情;如果用户所处环境为户外,则可以为用户推荐相对轻松或搞笑的文章,避免用户在户外出现情绪过于激动的状态。
可选的,为了获取更为准确的场景特征信息,在检测到用户预设行为时,还同步获取该用户预设行为对应的一组音频信息和/或视频信息,并根据该一组音频信息和/或视频信息获取第二场景特征信息,即在步骤S201时,还包括:
获取所述用户预设行为对应的一组音频信息和/或视频信息。
将所述音频信息和/或视频信息输入预设的第三神经网络模型进行处理,获得第二场景特征信息。
在本申请实施例中，音频信息或视频信息用于补充完善用户所处环境的场景特征信息，用以进一步提高对用户所处环境判断的准确率，从而提高信息推荐的精准度。
相应的,步骤S103具体为:
基于所述情绪特征信息、所述第一场景特征信息和所述第二场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户。
在步骤S102中,获取与所述情绪特征信息相关联的信息。
在本申请实施例中,情绪特征信息为具有不同情绪化标签的特征信息,根据情绪特征信息可以确定用户当前的情绪,因此,可以根据情绪特征信息所反馈的情绪,获取与该情绪特征信息相关联的信息。
在一种可能的实现方式中,端侧可以根据情绪特征信息从其数据库中查找与该情绪特征信息所反馈的情绪相对应的信息,比如当前用户情绪为开心,即可以查找具有开心标签的信息推荐给用户。
在另一种可能的实现方式中,为避免端侧的数据库的信息内容过少或者更新不及时,导致无法为用户提供多样化的信息以满足用户的需求的问题,端侧将情绪特征信息所包含的用户信息比如用户ID、用户账号等敏感信息去除后,生成仅包含情绪的情绪请求参数信息,并将该情绪请求参数信息发送给云侧,由云侧根据该情绪参数信息查找与用户当前情绪相关联的信息并将该信息发送回给端侧。
需要说明的是,由于发送给云侧的情绪请求参数并不包含用户信息,因此云侧并不进行个性化数据挖掘,仅做群体特征的数据挖掘,比如热度和基于情绪的内容分析,并对情绪反馈和信息内容关联性进行分析,建立情绪化标签的倒排索引,以从全量的信息中触发尽可能多的正确结果并将结果返回给端侧。
还需要说明的是,由于包含用户信息的情绪特征信息在去除敏感数据后再上传到云侧,减少了用户信息泄露的可能性,保证了用户隐私的安全性。
具体地,步骤S102包括:
步骤S1021,对所述情绪特征信息进行数据预处理,去除所述情绪特征信息中的用户信息。
在本申请实施例中,对情绪特征信息进行数据预处理为对情绪特征信息进行数据脱敏处理,去除情绪特征信息中的敏感信息即用户信息,比如用户账号信息、用户ID信息等。
步骤S1022，根据去除用户信息后的情绪特征信息，生成情绪请求参数信息，并将所述情绪请求参数信息发送至云服务器，以指示所述云服务器查找与所述情绪请求参数信息相关联的信息。
在本申请实施例中,情绪请求参数信息为基于对情绪特征信息进行数据脱敏后生成的仅包含情绪的参数信息,用于指示云服务器查找与该情绪特征信息相关联的信息。
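数据脱敏并生成情绪请求参数信息的过程可以示意如下（假设性代码，其中的敏感字段名与情绪字段名均为举例，并非本申请实施例限定的字段）：

```python
import json

SENSITIVE_KEYS = {"user_id", "user_account", "device_id"}  # 假设的敏感字段

def build_emotion_request(emotion_feature_info):
    """去除情绪特征信息中的用户信息，仅保留情绪相关字段，生成发往云服务器的请求参数。"""
    desensitized = {k: v for k, v in emotion_feature_info.items()
                    if k not in SENSITIVE_KEYS}
    return json.dumps(desensitized, ensure_ascii=False)

# 示例：端侧用脱敏后的参数向云侧请求与当前情绪相关联的信息
request_body = build_emotion_request({
    "user_id": "****",            # 将被去除的用户信息
    "emotion_label": "positive",  # 仅保留情绪相关内容
    "emotion_score": 0.87,
})
```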
步骤S1023,接收所述云服务器返回的与所述情绪请求参数信息相关联的信息。
在本申请实施例中,由于从云侧获取的信息并没有用户隐私数据,如果直接将云服务器查找的信息推荐给用户,推荐的精确度并不高,无法实现为用户进行个性化推荐。因此,端侧还需要对云服务器发送的信息进行推荐值计算后再将推荐值最高的前L个信息推荐给用户。
在步骤S103中,基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户。
在本申请实施例中,L为大于零的整数,推荐值为端侧决策引擎根据情绪特征信息和场景特征信息对获取的信息进行打分后得到的一个推荐值。比如根据情绪关联程度、场景符合程度等进行综合评分得到的一个数值为推荐值。
在一种可能的实现方式中,步骤S103具体为:
按照时间窗口的方式将所述情绪特征信息和所述场景特征信息拼接,根据拼接结果确定所述信息的推荐值,并将推荐值最高的前L个信息推荐给所述用户。
在本申请实施例中,端侧采用时间窗口的方式将情绪特征信息和场景特征信息拼接,可以为用户推荐符合用户当前所处环境以及符合其情绪变化的信息,提高信息推荐的精准度,并满足用户的个性化需求。
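端侧决策引擎的打分与选取过程可以示意如下（草图，score_item 为假设的综合评分函数，按情绪关联程度、场景符合程度等打分的具体方式不在此展开）：

```python
import numpy as np

def recommend_top_l(candidates, emotion_windows, scene_windows, score_item, l=10):
    """candidates: 云侧返回的候选信息列表；
    emotion_windows / scene_windows: 按时间窗口对齐的情绪特征与场景特征序列；
    score_item(item, context): 假设的打分函数，返回该信息的推荐值。"""
    # 按时间窗口的方式将情绪特征信息和场景特征信息拼接为上下文特征
    context = np.concatenate([np.concatenate([e, s])
                              for e, s in zip(emotion_windows, scene_windows)])
    scored = [(score_item(item, context), item) for item in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:l]]      # 推荐值最高的前 L 个信息
```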
在本申请实施例中,通过获取用户当前的情绪特征信息和所述用户当前所处环境的场景特征信息;获取与所述情绪特征信息相关联的信息;基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,通过结合情绪特征信息和场景特征信息为用户推荐满足其个性化需求的信息,使得所推荐的信息与用户的真实情绪反馈更为接近,提高信息推荐的精准度,具有较强的易用性和实用性。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
对应于上文实施例所述的信息推荐方法，图6示出了本申请实施例提供的信息推荐装置的结构框图，为了便于说明，仅示出了与本申请实施例相关的部分。
参照图6,该装置包括:
特征信息获取单元61,用于获取用户的情绪特征信息和用户所处环境的场景特征信息;
信息获取单元62,用于获取与所述情绪特征信息相关联的信息;
信息推荐单元63,用于基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,所述L为大于零的整数。
在一种可能的实现方式中,所述特征信息获取单元61包括:
用户图像获取子单元，用于在检测到用户预设行为时，获取所述用户预设行为对应的M组用户图像，所述M为大于零的整数；
图像预处理子单元,用于对所述M组用户图像进行预处理得到M组第一图像和M组第二图像,所述第一图像为包含人脸数据的图像,所述第二图像为包含背景数据的图像;
情绪特征信息获取子单元,用于将所述M组第一图像输入预设的第一神经网络模型进行处理,获得情绪特征信息;
第一场景特征信息获取子单元,用于将所述M组第二图像输入预设的第二神经网络模型进行处理,获得第一场景特征信息。
在另一种可能的实现方式中,所述情绪特征信息获取子单元具体用于:
将所述M组第一图像中的每一组第一图像输入预设的第一神经网络模型进行处理,获得M组情绪化特征数列,对所述M组情绪化特征数列进行拼接处理,将拼接处理后的情绪化特征数列作为所述情绪特征信息。
示例性的,所述情绪特征信息获取子单元具体用于:
将每个所述第一图像输入至所述第一神经网络模型，获得所述第一神经网络模型输出的正向情绪数值或负向情绪数值，并从所述第一神经网络模型的卷积层中提取一个维度为1*N的特征列，所述N为大于零的整数；
将所述维度为1*N的特征列和所述正向情绪数值组合为情绪正向化特征数列,或将所述维度为1*N的特征列和所述负向情绪数值组合为情绪负向化特征数列;
从一组第一图像对应的情绪正向化特征数列和/或情绪负向化特征数列中,抽取正向情绪数值最大对应的情绪正向化特征数列和/或负向情绪数值最小对应的情绪负向化特征数列组合为一组情绪化特征数列。
在另一种可能的实现方式中,所述情绪特征信息获取子单元还具体用于:
将所述M组情绪化特征数列的维度为1*N的特征列进行拼接,得到一个维度为2M*N的情绪化特征数列;
将所述M组情绪化特征数列对应的正向情绪数值累加求平均值,得到一个维度为1的情绪正向化特征;
将所述M组情绪化特征数列对应的负向情绪数值累加求平均值,得到一个维度为1的情绪负向化特征;
将所述维度为2M*N的情绪化特征数列、所述1维的情绪正向化特征和所述1维的情绪负向化特征进行拼接，得到一个维度为2M*N+2的情绪化特征数列，并将所述维度为2M*N+2的情绪化特征数列作为所述情绪特征信息。
在另一种可能的实现方式中,所述情绪特征信息获取子单元具体还用于:
将每个所述第一图像输入至所述第一神经网络模型进行情绪特征提取,从所述第一神经网络模型的卷积层中提取一个维度为1*N的特征列,所述N为大于零的整数;
聚合从一组第一图像中提取的所有维度为1*N的特征列,得到聚合特征列,并通过预设公式求解所述聚合特征列的重心;
从所述聚合特征列中,查找与所述重心距离最远的一个特征列;
根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列,所述P为大于零的整数;
将所述维度为(2P+1)*N的一组情绪化特征数列组合为所述情绪特征信息。
在另一种可能的实现方式中,所述情绪特征信息获取子单元具体还用于:
如果在与所述重心距离最远的一个特征列的前后的特征列的个数均大于或等于P,则从在与所述重心距离最远的一个特征列的前后特征列中各取P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在另一种可能的实现方式中,所述情绪特征信息获取子单元具体还用于:
如果在与所述重心距离最远的一个特征列之前的特征列的个数Q小于P,则从在与所述重心距离最远的一个特征列之后的特征列中取2P-Q个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在另一种可能的实现方式中,所述情绪特征信息获取子单元具体还用于:
如果在与所述重心距离最远的一个特征列之后的特征列的个数R小于P,则从在与所述重心距离最远的一个特征列之前的特征列中取2P-R个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在另一种可能的实现方式中,所述情绪特征信息获取子单元具体还用于:
如果与所述重心距离最远的一个特征列为所述聚合特征列中的第一个特征列,则从在与所述重心距离最远的一个特征列之后的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在另一种可能的实现方式中,所述情绪特征信息获取子单元具体还用于:
如果与所述重心距离最远的一个特征列为所述聚合特征列中的最后一个特征列,则从在与所述重心距离最远的一个特征列之前的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
在一种可能的实现方式中，所述特征信息获取单元61还用于：
获取所述用户预设行为对应的一组音频信息和/或视频信息；
将所述音频信息和/或视频信息输入预设的第三神经网络模型进行处理，获得第二场景特征信息。
所述信息获取单元62,具体用于:
对所述情绪特征信息进行数据预处理,去除所述情绪特征信息中的用户信息;
根据去除用户信息后的情绪特征信息，生成情绪请求参数信息，并将所述情绪请求参数信息发送至云服务器，以指示所述云服务器查找与所述情绪请求参数信息相关联的信息；
接收所述云服务器返回的与所述情绪请求参数信息相关联的信息。
在本申请实施例中,通过获取用户当前的情绪特征信息和所述用户当前所处环境的场景特征信息;获取与所述情绪特征信息相关联的信息;基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,通过结合情绪特征信息和场景特征信息为用户推荐满足其个性化需求的信息,使得所推荐的信息与用户的真实情绪反馈更为接近,提高信息推荐的精准度,具有较强的易用性和实用性。
需要说明的是,上述装置/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其具体功能及带来的技术效果,具体可参见方法实施例部分,此处不再赘述。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
图7为本申请实施例提供的电子设备的结构示意图。如图7所示，该实施例的电子设备7包括：至少一个处理器70(图7中仅示出一个)、存储器71以及存储在所述存储器71中并可在所述至少一个处理器70上运行的计算机程序72，所述处理器70执行所述计算机程序72时实现上述任意各个信息推荐方法实施例中的步骤。或者，所述处理器70执行所述计算机程序72时实现上述各装置实施例中各单元的功能，例如图6所示单元61至63的功能。
所述电子设备7可以是手机、桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。该电子设备7可包括,但不仅限于,处理器70、存储器71。本领域技术人员可以理解,图7仅仅是电子设备7的举例,并不构成对电子设备7的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如还可以包括输入输出设备、网络接入设备等。
所称处理器70可以是中央处理单元(Central Processing Unit,CPU),该处理器70还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
所述存储器71在一些实施例中可以是所述电子设备7的内部存储单元，例如电子设备7的硬盘或内存。所述存储器71在另一些实施例中也可以是所述电子设备7的外部存储设备，例如所述电子设备7上配备的插接式硬盘，智能存储卡(Smart Media Card,SMC)，安全数字(Secure Digital,SD)卡，闪存卡(Flash Card)等。进一步地，所述存储器71还可以既包括所述电子设备7的内部存储单元也包括外部存储设备。所述存储器71用于存储操作系统、应用程序、引导装载程序(BootLoader)、数据以及其他程序等，例如所述计算机程序的程序代码等。所述存储器71还可以用于暂时地存储已经输出或者将要输出的数据。
本申请实施例还提供了一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，所述计算机程序被处理器执行时可实现上述各个方法实施例中的步骤。
本申请实施例提供了一种计算机程序产品，当计算机程序产品在电子设备上运行时，使得电子设备执行时可实现上述各个方法实施例中的步骤。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请实现上述实施例方法中的全部或部分流程，可以通过计算机程序来指令相关的硬件来完成，所述的计算机程序可存储于一计算机可读存储介质中，该计算机程序在被处理器执行时，可实现上述各个方法实施例的步骤。其中，所述计算机程序包括计算机程序代码，所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质至少可以包括：能够将计算机程序代码携带到拍照装置/电子设备的任何实体或装置、记录介质、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质。例如U盘、移动硬盘、磁碟或者光盘等。在某些司法管辖区，根据立法和专利实践，计算机可读介质不可以是电载波信号和电信信号。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置/网络设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/网络设备实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (16)

  1. 一种信息推荐方法,其特征在于,包括:
    获取用户当前的情绪特征信息和所述用户当前所处环境的场景特征信息;
    获取与所述情绪特征信息相关联的信息;
    基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,所述L为大于零的整数。
  2. 如权利要求1所述的方法,其特征在于,所述获取用户当前的情绪特征信息和所述用户当前所处环境的场景特征信息包括:
    在检测到用户预设行为时,获取所述用户预设行为对应的M组用户图像,所述M为大于零的整数;
    对所述M组用户图像进行预处理得到M组第一图像和M组第二图像,所述第一图像为包含人脸数据的图像,所述第二图像为包含背景数据的图像;
    将所述M组第一图像输入预设的第一神经网络模型进行处理,获得情绪特征信息;
    将所述M组第二图像输入预设的第二神经网络模型进行处理,获得第一场景特征信息。
  3. 如权利要求2所述的方法,其特征在于,将所述M组第一图像输入预设的第一神经网络模型进行处理,获得情绪特征信息,包括:
    将所述M组第一图像中的每一组第一图像输入预设的第一神经网络模型进行处理,获得M组情绪化特征数列,对所述M组情绪化特征数列进行拼接处理,将拼接处理后的情绪化特征数列作为所述情绪特征信息。
  4. 如权利要求3所述的方法,其特征在于,将所述M组第一图像中的每一组第一图像输入预设的第一神经网络模型进行处理,包括:
    将每个所述第一图像输入至所述第一神经网络模型,获得所述第一神经网络模型输出的正向情绪数值或负向情绪数值,并从所述第一神经网络模型的卷积层中提取一个维度为1*N的特征列,所述N为大于零的整数;
    将所述维度为1*N的特征列和所述正向情绪数值组合为情绪正向化特征数列,或将所述维度为1*N的特征列和所述负向情绪数值组合为情绪负向化特征数列;
    从一组第一图像对应的情绪正向化特征数列和/或情绪负向化特征数列中,抽取正向情绪数值最大对应的情绪正向化特征数列和/或负向情绪数值最小对应的情绪负向化特征数列作为一组情绪化特征数列。
  5. 如权利要求3所述的方法,其特征在于,所述对所述M组情绪化特征数列进行拼接处理,将拼接处理后的情绪化特征数列作为所述情绪特征信息,包括:
    将所述M组情绪化特征数列的维度为1*N的特征列进行拼接,得到一个维度为2M*N的情绪化特征数列;
    将所述M组情绪化特征数列对应的正向情绪数值累加求平均值,得到一个维度为1的情绪正向化特征;
    将所述M组情绪化特征数列对应的负向情绪数值累加求平均值,得到一个维度为1的情绪负向化特征;
    将所述维度为2M*N的情绪化特征数列、1维的情绪正向化特征和1维的情绪负向化特征进行拼接，得到一个维度为2M*N+2的情绪化特征数列，并将所述维度为2M*N+2的情绪化特征数列作为所述情绪特征信息。
  6. 如权利要求2所述的方法,其特征在于,将所述M组第一图像输入预设的第一神经网络模型进行处理,获得情绪特征信息,包括:
    将每个所述第一图像输入至所述第一神经网络模型,从所述第一神经网络模型的卷积层中提取一个维度为1*N的特征列,所述N为大于零的整数;
    聚合从一组第一图像中提取的所有维度为1*N的特征列,得到聚合特征列,并通过预设公式求解所述聚合特征列的重心;
    从所述聚合特征列中,查找与所述重心距离最远的一个特征列;
    根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列,并将所述维度为(2P+1)*N的一组情绪化特征数列作为所述情绪特征信息,所述P为大于零的整数。
  7. 如权利要求6所述的方法,其特征在于,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
    如果在与所述重心距离最远的一个特征列的前后的特征列的个数均大于或等于P,则从在与所述重心距离最远的一个特征列的前后特征列中各取P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
  8. 如权利要求6所述的方法,其特征在于,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
    如果在与所述重心距离最远的一个特征列之前的特征列的个数Q小于P,则从在与所述重心距离最远的一个特征列之后的特征列中取2P-Q个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
  9. 如权利要求6所述的方法,其特征在于,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
    如果在与所述重心距离最远的一个特征列之后的特征列的个数R小于P,则从在与所述重心距离最远的一个特征列之前的特征列中取2P-R个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
  10. 如权利要求6所述的方法,其特征在于,所述根据与所述重心距离最远的一个特征列,从所述聚合特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括:
    如果与所述重心距离最远的一个特征列为所述聚合特征列中的第一个特征列,则从在与所述重心距离最远的一个特征列之后的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
  11. 如权利要求6所述的方法，其特征在于，所述根据与所述重心距离最远的一个特征列，从所述聚合特征列中取2P个特征列，和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列包括：
    如果与所述重心距离最远的一个特征列为所述聚合特征列中的最后一个特征列,则从在与所述重心距离最远的一个特征列之前的特征列中取2P个特征列,和与所述重心距离最远的一个特征列组成一个维度为(2P+1)*N的一组情绪化特征数列。
  12. 如权利要求2所述的方法,其特征在于,在检测到用户预设行为时,还包括:
    获取所述用户预设行为对应的一组音频信息和/或视频信息;
    将所述音频信息和/或视频信息输入预设的第三神经网络模型进行处理,获得第二场景特征信息;
    相应的,所述基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户包括:
    基于所述情绪特征信息、所述第一场景特征信息和所述第二场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户。
  13. 如权利要求1至12任一所述的方法,其特征在于,所述获取与所述情绪特征信息相关联的信息,包括:
    对所述情绪特征信息进行数据预处理,去除所述情绪特征信息中的用户信息;
    根据去除用户信息后的情绪特征信息,生成情绪请求参数信息,并将所述情绪请求参数信息发送至云服务器,以指示所述云服务器查找与所述情绪请求参数信息相关联的信息;
    接收所述云服务器返回的与所述情绪请求参数信息相关联的信息。
  14. 一种信息推荐装置,其特征在于,包括:
    特征信息获取单元,用于获取用户当前的情绪特征信息和所述用户当前所处环境的场景特征信息;
    信息获取单元,用于获取与所述情绪特征信息相关联的信息;
    信息推荐单元,基于所述情绪特征信息和所述场景特征信息确定所述信息的推荐值,将推荐值最高的前L个信息推荐给所述用户,所述L为大于零的整数。
  15. 一种电子设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如权利要求1至13任一项所述的方法。
  16. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至13任一项所述的方法。
PCT/CN2020/124765 2019-12-14 2020-10-29 信息推荐方法、装置、电子设备及计算机可读存储介质 WO2021114936A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911287188.1 2019-12-14
CN201911287188.1A CN111177459A (zh) 2019-12-14 2019-12-14 信息推荐方法、装置、电子设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021114936A1 true WO2021114936A1 (zh) 2021-06-17

Family

ID=70650230

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124765 WO2021114936A1 (zh) 2019-12-14 2020-10-29 信息推荐方法、装置、电子设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111177459A (zh)
WO (1) WO2021114936A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177459A (zh) * 2019-12-14 2020-05-19 华为技术有限公司 信息推荐方法、装置、电子设备及计算机可读存储介质
CN111881348A (zh) * 2020-07-20 2020-11-03 百度在线网络技术(北京)有限公司 信息处理方法、装置、设备以及存储介质
CN111870961B (zh) * 2020-08-12 2023-11-03 网易(杭州)网络有限公司 游戏中的信息推送方法、装置、电子设备及可读存储介质
CN112398952A (zh) * 2020-12-09 2021-02-23 英华达(上海)科技有限公司 电子资源推送方法、系统、设备及存储介质
CN113010725B (zh) * 2021-03-17 2023-12-26 平安科技(深圳)有限公司 演奏乐器的选择方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010058134A1 (fr) * 2008-11-19 2010-05-27 Alcatel Lucent Procede et dispositif d'enregistrement de donnees representatives de sentiments ressentis par des personnes dans des lieux localisables et serveur associe
CN105005777A (zh) * 2015-07-30 2015-10-28 科大讯飞股份有限公司 一种基于人脸的音视频推荐方法及系统
US20160358225A1 (en) * 2015-06-08 2016-12-08 Samsung Electronics Co., Ltd. Method and apparatus for providing content
CN108509660A (zh) * 2018-05-29 2018-09-07 维沃移动通信有限公司 一种播放对象推荐方法及终端设备
CN110139025A (zh) * 2018-09-29 2019-08-16 广东小天才科技有限公司 一种基于拍照行为的社交用户推荐方法及可穿戴设备
CN110321477A (zh) * 2019-05-24 2019-10-11 平安科技(深圳)有限公司 信息推荐方法、装置、终端及存储介质
CN111177459A (zh) * 2019-12-14 2020-05-19 华为技术有限公司 信息推荐方法、装置、电子设备及计算机可读存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197185A (zh) * 2017-12-26 2018-06-22 努比亚技术有限公司 一种音乐推荐方法、终端及计算机可读存储介质
CN109145871B (zh) * 2018-09-14 2020-09-15 广州杰赛科技股份有限公司 心理行为识别方法、装置与存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010058134A1 (fr) * 2008-11-19 2010-05-27 Alcatel Lucent Procede et dispositif d'enregistrement de donnees representatives de sentiments ressentis par des personnes dans des lieux localisables et serveur associe
US20160358225A1 (en) * 2015-06-08 2016-12-08 Samsung Electronics Co., Ltd. Method and apparatus for providing content
CN105005777A (zh) * 2015-07-30 2015-10-28 科大讯飞股份有限公司 一种基于人脸的音视频推荐方法及系统
CN108509660A (zh) * 2018-05-29 2018-09-07 维沃移动通信有限公司 一种播放对象推荐方法及终端设备
CN110139025A (zh) * 2018-09-29 2019-08-16 广东小天才科技有限公司 一种基于拍照行为的社交用户推荐方法及可穿戴设备
CN110321477A (zh) * 2019-05-24 2019-10-11 平安科技(深圳)有限公司 信息推荐方法、装置、终端及存储介质
CN111177459A (zh) * 2019-12-14 2020-05-19 华为技术有限公司 信息推荐方法、装置、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN111177459A (zh) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2021114936A1 (zh) 信息推荐方法、装置、电子设备及计算机可读存储介质
CN110321477B (zh) 信息推荐方法、装置、终端及存储介质
CN110162593B (zh) 一种搜索结果处理、相似度模型训练方法及装置
US10706094B2 (en) System and method for customizing a display of a user device based on multimedia content element signatures
WO2022141861A1 (zh) 情感分类方法、装置、电子设备及存储介质
WO2020143156A1 (zh) 热点视频标注处理方法、装置、计算机设备及存储介质
WO2018005701A1 (en) Video to data
WO2015135324A1 (zh) 图片排序方法及终端
CN110321845B (zh) 一种从视频中提取表情包的方法、装置及电子设备
WO2021237907A1 (zh) 基于多分类器的风险识别方法、装置、计算机设备及存储介质
WO2022105118A1 (zh) 基于图像的健康状态识别方法、装置、设备及存储介质
EP2531913A2 (en) Image tagging based upon cross domain context
CN108959323B (zh) 视频分类方法和装置
WO2015021937A1 (zh) 用户推荐方法和装置
WO2019232883A1 (zh) 推送保险产品的方法、装置、计算机设备和存储介质
CN113395578A (zh) 一种提取视频主题文本的方法、装置、设备及存储介质
CN110610125A (zh) 基于神经网络的牛脸识别方法、装置、设备及存储介质
CN112995414B (zh) 基于语音通话的行为质检方法、装置、设备及存储介质
CN112418059A (zh) 一种情绪识别的方法、装置、计算机设备及存储介质
CN112685596B (zh) 视频推荐方法及装置、终端、存储介质
CN111506733A (zh) 对象画像的生成方法、装置、计算机设备和存储介质
CN114090766A (zh) 视频文本筛选方法、装置及电子设备
CN110019763B (zh) 文本过滤方法、系统、设备及计算机可读存储介质
CN111275683B (zh) 图像质量评分处理方法、系统、设备及介质
JP2012168986A (ja) 選択されたコンテンツアイテムをユーザーに提供する方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20900662

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20900662

Country of ref document: EP

Kind code of ref document: A1