WO2020253360A1 - Method, apparatus, storage medium and computer device for displaying application content - Google Patents


Info

Publication number
WO2020253360A1
WO2020253360A1 (application PCT/CN2020/086149)
Authority
WO
WIPO (PCT)
Prior art keywords
user
currently logged
content
interested
selected number
Prior art date
Application number
PCT/CN2020/086149
Other languages
English (en)
French (fr)
Inventor
祁斌
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2020253360A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/011 - Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • This application relates to the field of artificial intelligence biometrics technology, and specifically to a method, device, storage medium, and computer equipment for displaying application content.
  • An APP (application) refers to a third-party program installed on a smart device.
  • The emergence of APPs has remedied the shortcomings of the original system software, making users' lives more convenient and richer.
  • An APP also displays promotional information such as advertising and marketing content. Taking the Ping An Life APP as an example,
  • the APP not only displays insurance-policy and contract-management content, but also an online product supermarket that allows users to purchase and manage various financial products anytime and anywhere, as well as customer activities and the like.
  • The inventor realized that the content displayed by APPs on the market is fixed in advance, and there is a high probability that users are not interested in, or even put off by, the promotional content, so this way of displaying promotional information is not very effective.
  • To this end, this application proposes an application content display method, device, storage medium, and computer equipment to improve the effectiveness of displaying promotional content in an application.
  • The embodiments of the present application provide a method for displaying the content of an application program, including: acquiring the face image of the user currently logged in to the application; performing micro-expression recognition on the face image to determine the emotion type of the currently logged-in user; determining the user's preference for each content item in the application according to the user's historical behavior data; determining, from those preferences, the content the user is and is not interested in; determining, according to the emotion type, the selected number of content items the user is not interested in; and selecting a corresponding number of items from the uninterested content and displaying them together with the content the currently logged-in user is interested in.
  • an application content display device including:
  • a face image acquisition module, used to acquire the face image of the currently logged-in user of the application;
  • An emotion type recognition module configured to perform micro expression recognition on the face image of the currently logged-in user, and determine the emotion type of the currently logged-in user;
  • a preference determination module configured to determine the currently logged-in user's preference for each content in the application according to the acquired historical behavior data of the currently logged-in user in the application;
  • the content determination module is configured to determine the content that is of interest to and that is not of interest to the currently logged-in user according to the preference of the currently logged-in user for each content;
  • the selected number determining module is configured to determine the selected number of content that the currently logged-in user is not interested in according to the emotion type of the currently logged-in user;
  • the display module is used to select a corresponding number of content from the content that the currently logged-in user is not interested in, and display the corresponding number of content and the content that the currently logged-in user is interested in.
  • the embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, a method for displaying the content of an application program is implemented:
  • The content display method of the application program includes: acquiring the face image of the currently logged-in user of the application; performing micro-expression recognition on the face image to determine the emotion type of the currently logged-in user; determining the user's preference for each content item from the user's historical behavior data in the application; determining the content the user is and is not interested in from those preferences; determining the selected number of uninterested content according to the emotion type; and selecting a corresponding number of items from the uninterested content and displaying them together with the content the currently logged-in user is interested in.
  • the embodiments of the present application also provide a computer device, the computer device including:
  • one or more processors;
  • a storage device for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement an application content display method:
  • The content display method of the application program includes: acquiring the face image of the currently logged-in user of the application; performing micro-expression recognition on the face image to determine the emotion type of the currently logged-in user; determining the user's preference for each content item from the user's historical behavior data in the application; determining the content the user is and is not interested in from those preferences; determining the selected number of uninterested content according to the emotion type; and selecting a corresponding number of items from the uninterested content and displaying them together with the content the currently logged-in user is interested in.
  • The above application content display method, device, storage medium, and computer equipment recognize the emotion type of the currently logged-in user through micro-expressions and, combining the user's emotion type and habitual preferences, dynamically adjust the displayed content. This enriches the application's display of promotional content,
  • so that the user has a greater probability of being interested in the promotional content displayed in the application, thus effectively improving the effectiveness of the promotional content display.
  • FIG. 1 is a schematic diagram of a content display method of an application program according to an embodiment of the application
  • FIG. 2 is a schematic diagram of an application content display device according to an embodiment of the application
  • Fig. 3 is a schematic diagram of a computer device according to an embodiment of the application.
  • As shown in FIG. 1, an embodiment of the application content display method includes:
  • The APP can be a video-playback APP, an insurance APP, or another type of APP. Since the emotion type and preferences of the user need to be determined later, the face image of the user currently logged in to the APP must be acquired.
  • The face image can be acquired when the user has just logged in to the APP, when the APP switches from running in the background to the foreground, or at regular intervals while the APP runs in the foreground.
  • For example, the front camera of the terminal running the APP can be called directly to capture the face image. It should be understood that this application does not limit how the face image is obtained.
  • S120 Perform micro-expression recognition on the face image of the currently logged-in user, and determine the emotion type of the currently logged-in user.
  • Micro-expression is a psychological term. People express their inner feelings to each other by making some facial expressions. Between different expressions that people make, or in a certain expression, other information will be "leaked" on the face. This information is called micro-expressions. Emotion types include happiness, sadness, fear, anger, etc. By performing micro-expression recognition on facial images, the user's current emotional state, such as happy, sad, etc., can be identified.
  • The historical behavior data are the records of the operations the currently logged-in user previously performed on each content item in the APP, for example clicking on content, purchasing the products a content item displays, or commenting on content.
  • The historical behavior data of the currently logged-in user can be obtained from the server corresponding to the APP through crawler technology. For ease of calculation, only the historical behavior data of the last few days may be counted.
  • Preference is used to characterize the degree of interest of the currently logged-in user in each content, or whether each content meets the needs of the user, or the importance of each content to the user.
  • If the currently logged-in user is an existing customer, the user's preference for each content item can be pre-computed from historical behavior data and retrieved directly when recommending content; if the currently logged-in user is a new customer, the preference of each content item the user selected during registration is set to 1 and that of each unselected item to 0.
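  • The new-customer bootstrap described above can be sketched as follows. A minimal illustration: the function and parameter names are assumptions, and only the 1/0 preference values come from the text.

```python
def initial_preferences(all_content_ids, selected_at_signup):
    """Preference bootstrap for a new customer: content chosen during
    registration gets preference 1, everything else gets 0."""
    return {cid: (1.0 if cid in selected_at_signup else 0.0)
            for cid in all_content_ids}
```

For example, `initial_preferences(["funds", "life", "auto"], {"life"})` yields a preference of 1.0 for "life" and 0.0 for the other two items.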
  • S140 Determine the content that is of interest to the currently logged-in user and the content that is not of interest according to the preference of the currently logged-in user for each content.
  • a threshold can be preset. When the preference is greater than the threshold, it is determined that the content is of interest to the currently logged-in user, and when the preference is less than or equal to the threshold, it is determined that the content is not of interest to the currently logged-in user.
  • the content of interest that is filtered out can be formed into one set, and the content that is not of interest can be formed into another set.
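  • The threshold partition of step S140 can be sketched as follows. A hypothetical illustration: the function and field names are assumptions; only the greater-than / less-than-or-equal split is from the text.

```python
def partition_by_preference(preferences, threshold):
    """Split content into an interested set (preference > threshold) and an
    uninterested set (preference <= threshold), as step S140 describes."""
    interested = {cid for cid, p in preferences.items() if p > threshold}
    uninterested = {cid for cid, p in preferences.items() if p <= threshold}
    return interested, uninterested
```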
  • The application determines how many uninterested content items need to be displayed according to the emotion type of the currently logged-in user, so as to dynamically adjust the amount of uninterested content the APP currently displays.
  • This application enriches the timing of displaying promotional content in the application, so that users have a greater probability of being interested in the displayed promotional content, effectively improving display effectiveness while avoiding wasted screen display space.
  • the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user includes:
  • The inventors of the present application have found through research that the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead characterize the user's emotion type well. Using existing feature-extraction techniques, these features are extracted from the face image and together constitute the feature vector of the currently logged-in user.
  • S1202. Input the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, and the micro-expression recognition model recognizes the emotion type of the currently logged-in user.
  • The micro-expression recognition model is used to calculate the emotion type from the feature vector of a face image.
  • The micro-expression recognition model is constructed as follows: obtain a sample face image and an identification of the sample face image, the identification characterizing the emotion type of the sample face image; extract the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the sample face image, the extracted features forming its feature vector; and input the feature vector and the identifier into a convolutional neural network for training to obtain the micro-expression recognition model.
  • The identifier is used to determine the error of the convolutional neural network, and the network parameters are adjusted according to this error until the accuracy of the network exceeds a preset value; the convolutional neural network obtained at that point is the micro-expression recognition model.
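  • The train-until-accurate loop just described can be sketched as a skeleton. The patent's model is a convolutional neural network; here `model_step` and `evaluate` are placeholder callbacks standing in for one CNN parameter update and an accuracy evaluation, so this is a sketch of the control flow only, not of the network.

```python
def train_until_accurate(model_step, evaluate, target_accuracy, max_epochs=100):
    """Repeatedly adjust the model's parameters from the label-derived
    error until accuracy exceeds the preset value."""
    for epoch in range(max_epochs):
        model_step()                      # one parameter-adjustment step
        if evaluate() > target_accuracy:  # accuracy now above preset value?
            return epoch + 1              # number of steps actually run
    return max_epochs
```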
  • the feature vector of the currently logged-in user is input into the micro-expression recognition model, and the micro-expression recognition model outputs the emotion type of the currently logged-in user.
  • the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user includes:
  • S120a Extract the features of the corners of the mouth, cheeks, eyelids and the tail of the eyes from the face image of the currently logged-in user, and each extracted feature constitutes a feature vector of the currently logged-in user.
  • This application is mainly concerned with displaying advertisements and similar content in the APP at the right time to increase the business conversion rate, so it suffices to identify only the emotion type of happiness.
  • The inventors of the present application have found through research that the facial movements of a happy person generally include raised corners of the mouth, wrinkled cheeks, contracted eyelids, and crow's feet at the tails of the eyes. Therefore, the features of the corners of the mouth, cheeks, eyelids, and tails of the eyes can be extracted from the face image to form the feature vector of the currently logged-in user; the specific extraction can use an existing method in the prior art.
  • S120b: Calculate the similarity between the feature vector of the currently logged-in user and a preset feature vector; the preset feature vector includes the pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy.
  • the features of the corners of the mouth, cheeks, eyelids, and the tail of the eyes when each user is happy are stored in advance.
  • The pre-stored features of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy are retrieved, and the similarity between these features and the corresponding features in the user's current feature vector is calculated; the specific similarity calculation can use an existing method in the prior art.
  • S120c If the similarity is greater than a preset threshold, determine that the emotion type of the currently logged-in user is happy; otherwise, determine that the emotion type of the currently logged-in user is unhappy.
  • the preset threshold can be set according to actual needs. After the similarity is calculated, if the similarity is greater than the preset threshold, it means that the currently logged-in user is in a happy state; otherwise, it indicates that the currently logged-in user is in an unhappy state.
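  • One common way to realize the similarity comparison and threshold test described above is cosine similarity. The patent leaves both the similarity measure and the threshold open, so the measure and the 0.9 default here are assumptions for illustration.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors (one possible measure)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def emotion_type(current_features, happy_template, threshold=0.9):
    """Step S120c: similarity above the preset threshold means 'happy',
    otherwise 'unhappy'. The threshold value is a placeholder."""
    sim = cosine_similarity(current_features, happy_template)
    return "happy" if sim > threshold else "unhappy"
```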
  • the preference of the currently logged-in user for each content in the APP can be obtained according to the collected historical behavior data of the currently logged-in user for each content in the APP.
  • Determining the currently logged-in user's preference for each content item in the application according to the user's historical behavior data in the application includes:
  • S1301: Calculate the number of clicks, the number of purchases, and the number of comments made by the currently logged-in user on each content item.
  • the number of clicks, the number of purchases, and the number of comments posted are counted.
  • the specific counting method can be implemented according to the existing method in the prior art.
  • The weight of the number of clicks, the weight of the number of comments, and the weight of the number of purchases increase in that order; that is, the click weight is less than the comment weight, and the comment weight is less than the purchase weight.
  • The currently logged-in user's preference for a given content item is calculated by the formula: preference = click weight × number of clicks + purchase weight × number of purchases + comment weight × number of comments, where the number of clicks, purchases, and comments are those the currently logged-in user made on that content item.
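  • The weighted-sum formula above can be written directly. The patent only requires that the click weight be less than the comment weight, which in turn is less than the purchase weight; the 0.2/0.3/0.5 defaults here are illustrative assumptions.

```python
def preference(clicks, purchases, comments,
               w_click=0.2, w_comment=0.3, w_purchase=0.5):
    """preference = w_click*clicks + w_purchase*purchases + w_comment*comments.
    Weight values are placeholders satisfying w_click < w_comment < w_purchase."""
    return w_click * clicks + w_purchase * purchases + w_comment * comments
```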
  • The emotion types include happy and unhappy; determining the selected number of uninterested content according to the emotion type of the currently logged-in user includes:
  • The first value is generally a positive integer and can be set according to actual needs, as long as it is greater than the second value below.
  • N content items (N being the first value) are selected from all the content the currently logged-in user is not interested in, according to custom rules: for example, the top N items in descending order of preference, or N items whose preference falls within a certain range.
  • Likewise, M content items (M being the second value) are selected from all the uninterested content according to custom rules, such as the top M items in descending order of preference, or M items whose preference falls within a certain range.
  • the happy emotion types are divided into different emotion intensities, and the unhappy emotion types are divided into different emotion intensities.
  • Emotion intensity is used to represent the intensity of emotion.
  • The levels of emotional intensity can be set in practice. For example, the happy emotion type ranges, from high to low intensity, over carnival and happy, while the unhappy emotion type ranges, from high to low, over despair, grief, sadness, sympathy, and calm.
  • When the emotion type of the currently logged-in user is happy, determining that the selected number of uninterested content is a preset first value includes:
  • S1501a: The micro-expression features corresponding to each emotional intensity of the happy emotion type are preset, and the happy emotional intensity of the currently logged-in user is determined from the micro-expression recognition result of the user's face image. It should be noted that when there are multiple micro-expression features, it is sufficient that more than a preset number of the features in the recognition result match the micro-expression features corresponding to a preset emotional intensity for that intensity to be chosen.
  • the preset happy mood type is divided into two emotional intensities: carnival and happy.
  • For example, suppose the lip opening-and-closing angle corresponding to carnival ranges from A to B and the angle corresponding to happy ranges from C to D. If the lip opening-and-closing angle of the currently logged-in user is E, and E lies between A and B, the happy emotional intensity of the currently logged-in user is carnival.
  • S1501b: Determine the selected number corresponding to the happy emotional intensity of the currently logged-in user according to the preset first correspondence between each happy emotional intensity and a selected number. In the first correspondence, the happy emotional intensity and the selected number are directly proportional: the higher the happy intensity, the more uninterested content items are selected.
  • The selected number corresponding to each happy emotional intensity is preset; once the happy emotional intensity of the currently logged-in user is determined, the number of items to pick from the uninterested content, i.e. the selected number, can be looked up from this correspondence.
  • For example, suppose the happy emotion type is divided into two intensities, carnival and happy, with selected numbers of 50 and 40 respectively. If the happy emotional intensity of the currently logged-in user is determined to be carnival, the selected number is 50, and 50 items are selected from the uninterested content.
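  • The intensity determination and count lookup just described can be sketched together. The lip-angle interval bounds below stand in for the symbolic A to D values and are pure assumptions; only the carnival/happy split and the 50/40 counts come from the example.

```python
# Hypothetical angle intervals standing in for the A..D bounds in the text.
HAPPY_INTENSITY_BY_LIP_ANGLE = [("carnival", 40, 60), ("happy", 20, 40)]
FIRST_CORRESPONDENCE = {"carnival": 50, "happy": 40}  # counts from the example

def selected_number_when_happy(lip_angle):
    """Map the measured lip opening-and-closing angle to a happy intensity,
    then to a selected number via the first correspondence."""
    for intensity, lo, hi in HAPPY_INTENSITY_BY_LIP_ANGLE:
        if lo <= lip_angle < hi:
            return intensity, FIRST_CORRESPONDENCE[intensity]
    raise ValueError("angle outside all happy-intensity intervals")
```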
  • When the emotion type of the currently logged-in user is unhappy, determining that the selected number of uninterested content is a preset second value includes:
  • S1502a: Determine the unhappy emotional intensity of the currently logged-in user from the micro-expression recognition result of the user's face image.
  • For example, suppose the unhappy emotion type is divided into despair, grief, sadness, sympathy, and calm, with lip opening-and-closing angles of a to b for despair, c to d for grief, e to f for sadness, g to h for sympathy, and i to j for calm. If the lip opening-and-closing angle of the currently logged-in user is k, and k falls between e and f, the unhappy emotional intensity of the currently logged-in user is sadness.
  • S1502b: Determine the selected number corresponding to the unhappy emotional intensity of the currently logged-in user according to the preset second correspondence between each unhappy emotional intensity and a selected number. In the second correspondence, the unhappy emotional intensity and the selected number are inversely proportional: the higher the unhappy intensity, the fewer uninterested content items are selected.
  • For example, suppose the unhappy emotion type is divided into despair, grief, sadness, sympathy, and calm, with selected numbers of 6 for despair, 5 for grief, 4 for sadness, 3 for sympathy, and 2 for calm. If the unhappy emotional intensity of the currently logged-in user is determined to be sadness, the selected number is 4, and 4 items are selected from the uninterested content.
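  • The second correspondence from the example can be written as a simple lookup table. The 6/5/4 counts are explicit in the text; the 3 for sympathy and 2 for calm are reconstructed from the inverse relationship and should be treated as assumptions.

```python
# Second correspondence: counts shrink as unhappy intensity grows
# (despair is the strongest emotion, calm the weakest).
SECOND_CORRESPONDENCE = {
    "despair": 6, "grief": 5, "sadness": 4, "sympathy": 3, "calm": 2,
}

def selected_number_when_unhappy(intensity):
    """Step S1502b: look up the selected number for the recognized
    unhappy emotional intensity."""
    return SECOND_CORRESPONDENCE[intensity]
```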
  • The present application can also take the idle state of the currently logged-in user into account when determining the content to display. Therefore, in one embodiment, after determining the selected number of uninterested content according to the emotion type of the currently logged-in user, and before selecting a corresponding number of items from the uninterested content and displaying them together with the content of interest, the method further includes:
  • Whether the currently logged-in user is in an idle state can be determined by detecting the user's walking rate: if the walking rate is greater than a preset rate, the user is in a busy state; otherwise the user is in an idle state.
  • Alternatively, whether the currently logged-in user is idle can be determined from the number of work-related APPs running on the user's terminal: if that number is greater than a preset number, the user is in a busy state; otherwise the user is in an idle state. It can be understood that other methods may also be used to determine whether the currently logged-in user is idle, which this application does not limit.
  • When the user is idle, the determined selected number can be kept unchanged, or further increased, so that the currently logged-in user sees more of the uninterested content, which improves the effectiveness of displaying it.
  • Since a user who is not idle is unlikely to view uninterested content, the selected number should be further reduced by a second preset value at this time, to avoid disturbing users and giving them a poor application experience. If subtracting the second preset value reduces the determined selected number to 0 or a negative number, no uninterested content is shown to the currently logged-in user.
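  • The idle-state adjustment just described might look like the sketch below; the first and second preset values here are placeholders, and the patent leaves them unspecified.

```python
def adjust_selected_number(selected, idle, first_preset=2, second_preset=3):
    """Idle-state adjustment: when idle, raise the selected number by a first
    preset value (keeping it unchanged is the other option the text allows);
    when busy, lower it by a second preset value, showing no uninterested
    content if the result drops to 0 or below."""
    if idle:
        return selected + first_preset
    return max(selected - second_preset, 0)  # 0 means display nothing
```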
  • Before acquiring the face image of the currently logged-in user of the application, the method further includes: capturing the face image of the currently logged-in user with a camera device, and using the face image as the identity credential for logging in to the application. Displaying the corresponding number of content items and the content the currently logged-in user is interested in includes: displaying the corresponding number of items and the content of interest on the homepage of the application.
  • this application also provides an application content display device.
  • the specific implementation of the device of this application will be described in detail below with reference to the accompanying drawings.
  • As shown in FIG. 2, an embodiment of the application content display device includes:
  • the face image acquisition module 210 is used to acquire the face image of the currently logged-in user of the application;
  • the emotion type recognition module 220 is configured to perform micro-expression recognition on the face image of the currently logged-in user, and determine the emotion type of the currently logged-in user;
  • the preference determination module 230 is configured to determine the currently logged-in user's preference for each content in the application according to the acquired historical behavior data of the currently logged-in user in the application;
  • the content determining module 240 is configured to determine the content that is of interest to the currently logged-in user and the content that is not of interest according to the preference of the currently logged-in user for each content;
  • the selected number determining module 250 is configured to determine the selected number of content that the currently logged-in user is not interested in according to the emotion type of the currently logged-in user;
  • the display module 260 is configured to select a corresponding number of content from the content that the currently logged-in user is not interested in, and display the corresponding number of content and the content that the currently logged-in user is interested in.
  • the emotion type includes happy and unhappy; the selected number determining module 250 includes:
  • the first selection number determining module is configured to determine that the selected number of content that the currently logged-in user is not interested in is a preset first value when the emotional type of the currently logged-in user is happy;
  • a second selected number determining module, configured to determine that the selected number of uninterested content is a preset second value when the emotion type of the currently logged-in user is unhappy; the second value is less than the first value.
  • the happy emotion types are divided into different emotion intensities, and the unhappy emotion types are divided into different emotion intensities.
  • the first selection number determining module determines the happy emotional intensity of the currently logged-in user; according to the preset first correspondence between the happy emotional intensity and each selected number, the selection corresponding to the happy emotional intensity of the currently logged-in user is determined Number; the emotional intensity of happiness in the first corresponding relationship and the selected number are in a positive proportional relationship.
  • the second selected number determining module determines the unhappy emotional intensity of the currently logged-in user, and determines the selected number corresponding to that intensity according to a preset second correspondence between each unhappy emotional intensity and each selected number; in the second correspondence, the unhappy emotional intensity and the selected number are in an inverse proportional relationship.
  • the device further includes a selected number adjustment module between the selected number determining module 250 and the display module 260. The selected number adjustment module is used to detect whether the currently logged-in user is in an idle state; when in an idle state, it keeps the selected number of content that is not of interest to the currently logged-in user unchanged, or increases that selected number by a first preset value; when not in an idle state, it reduces that selected number by a second preset value.
  • the emotion type recognition module 220 includes:
  • the feature vector extraction module is used to extract the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user; the extracted features constitute the feature vector of the currently logged-in user;
  • the emotion type recognition module is used to input the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, and the micro-expression recognition model recognizes the emotion type of the currently logged-in user.
  • the device of the present application further includes a micro-expression recognition model construction module, including:
  • An image and identification acquisition unit for acquiring a sample face image and an identification of the sample face image; the identification is used to characterize the emotion type of the sample face image;
  • the feature vector extraction unit is used to extract the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the sample face image; the extracted features constitute the feature vector of the sample face image;
  • the micro-expression recognition model training unit is used to input the feature vector and identification of the sample face image into the convolutional neural network for training to obtain the micro-expression recognition model.
  • the emotion type recognition module 220 includes:
  • the feature vector extraction unit is used to extract the features of the corners of the mouth, cheeks, eyelids and the tail of the eyes from the face image of the currently logged-in user, and each extracted feature constitutes the feature vector of the currently logged-in user;
  • the similarity calculation unit is used to calculate the similarity between the feature vector of the currently logged-in user and a preset feature vector; the preset feature vector includes the pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy;
  • the emotion type determining unit is configured to determine that the emotion type of the currently logged-in user is happy when the similarity is greater than a preset threshold; otherwise, determine that the emotion type of the currently logged-in user is unhappy.
  • the preference determination module 230 includes:
  • the statistical unit is configured to calculate the number of clicks, the number of purchases, and the number of comments made by the currently logged-in user on each content according to the historical behavior data;
  • the preference determination unit is configured to respectively multiply the number of clicks, the number of purchases, and the number of comments posted on the same content by their respective weights and then sum them to obtain the currently logged-in user's preference for each content.
  • the content display device further includes a login module connected to the face image acquisition module; the login module uses a camera device to capture a face image of the currently logged-in user, and logs in to the application with the face image as the user's identity; the display module 260 displays the corresponding number of content and the content that is of interest to the currently logged-in user on the home page of the application.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • the storage medium is a volatile or non-volatile storage medium on which a computer program is stored; when the program is executed by a processor, any one of the above content display methods is implemented.
  • the storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, the storage medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer), such as a read-only memory, a magnetic disk, or an optical disk.
  • An embodiment of the present application also provides a computer device, which includes:
  • one or more processors;
  • a storage device for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement a content display method for an application program;
  • the content display method of the application program includes: acquiring a face image of the currently logged-in user of the application; performing micro-expression recognition on the face image to determine the emotion type of the currently logged-in user; determining, according to acquired historical behavior data of the currently logged-in user in the application, the user's preference for each content in the application; determining, according to those preferences, the content that the user is interested in and the content that the user is not interested in; determining, according to the user's emotion type, the selected number of content that the user is not interested in; and selecting a corresponding number of content from the content that the user is not interested in, and displaying the corresponding number of content together with the content that the user is interested in.
  • FIG. 3 is a schematic structural diagram of the computer device of this application, which includes a processor 320, a storage device 330, an input unit 340, a display unit 350, and other components. Those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation on all computer devices, which may include more or fewer components than shown, or combine certain components.
  • the storage device 330 may be used to store the application program 310 and various functional modules.
  • the processor 320 runs the application program 310 stored in the storage device 330 to execute various functional applications and data processing of the device.
  • the storage device 330 may be an internal memory or an external memory, or include both internal memory and external memory.
  • the internal memory may include read-only memory, programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory.
  • External storage can include hard disks, floppy disks, ZIP disks, U disks, tapes, etc.
  • the storage devices disclosed in this application include but are not limited to these types of storage devices.
  • the storage device 330 disclosed in this application is merely an example and not a limitation.
  • the input unit 340 is used to receive signal input and to receive the face image of the currently logged-in user.
  • the input unit 340 may include a touch panel and other input devices.
  • the touch panel can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, and a joystick.
  • the display unit 350 can be used to display information input by the user or information provided to the user and various menus of the computer device.
  • the display unit 350 may take the form of a liquid crystal display, an organic light-emitting diode display, or the like.
  • the processor 320 is the control center of the computer device: it connects the various parts of the entire computer through various interfaces and lines, and performs various functions and processes data by running or executing the software programs and/or modules stored in the storage device 330 and calling the data stored in the storage device.
  • the computer device includes one or more processors 320, one or more storage devices 330, and one or more application programs 310, where the one or more application programs 310 are stored in the storage device 330 and configured to be executed by the one or more processors 320, and the one or more application programs 310 are configured to execute the content display method of the application program described in the above embodiments.


Abstract

A content display method and apparatus for an application program, a storage medium, and a computer device, applied in the field of biometric recognition technology. The method includes: acquiring a face image of the currently logged-in user of an application (S110); performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user (S120); determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application (S130); determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in (S140); determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in (S150); and selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content together with the content that the currently logged-in user is interested in (S160). The method improves the effectiveness of displaying promotional content.

Description

Content display method and apparatus for an application program, storage medium, and computer device
This application claims priority to Chinese patent application No. 201910532695.0, filed with the China National Intellectual Property Administration on June 19, 2019 and entitled "Content display method and apparatus for an application program, storage medium and computer device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of biometric recognition technology in artificial intelligence, and in particular to a content display method and apparatus for an application program, a storage medium, and a computer device.
Background
An APP (application) is a third-party application installed on a smart device. APPs make up for the shortcomings of the original system and make users' lives more convenient and richer. In general, besides content related to its basic functions, an APP also displays promotional information such as advertising and marketing content. Taking the Ping An Life APP as an example, the APP not only displays policy contract management content, but also presents an online product supermarket that lets users browse, purchase, and manage various financial products anytime and anywhere, as well as customer activities and so on. However, the inventor realized that the content displayed by APPs currently on the market is fixed in advance, so there is a considerable probability that users are not interested in, or are even annoyed by, the promotional content, which makes this way of displaying promotional information rather ineffective.
Summary
To address the shortcomings of existing approaches, this application proposes a content display method and apparatus for an application program, a storage medium, and a computer device, so as to improve the effectiveness of displaying promotional information in an application.
According to a first aspect, an embodiment of this application provides a content display method for an application program, including:
acquiring a face image of the currently logged-in user of the application;
performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in;
determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content together with the content that the currently logged-in user is interested in.
According to a second aspect, an embodiment of this application further provides a content display apparatus for an application program, including:
a face image acquisition module, configured to acquire a face image of the currently logged-in user of the application;
an emotion type recognition module, configured to perform micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
a preference determination module, configured to determine, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
a content determination module, configured to determine, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in;
a selected number determination module, configured to determine, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
a display module, configured to select a corresponding number of content from the content that the currently logged-in user is not interested in, and to display the corresponding number of content together with the content that the currently logged-in user is interested in.
According to a third aspect, an embodiment of this application further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, a content display method for an application program is implemented:
wherein the content display method of the application program includes:
acquiring a face image of the currently logged-in user of the application;
performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in;
determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content together with the content that the currently logged-in user is interested in.
According to a fourth aspect, an embodiment of this application further provides a computer device, including:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement a content display method for an application program:
wherein the content display method of the application program includes:
acquiring a face image of the currently logged-in user of the application;
performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in;
determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content together with the content that the currently logged-in user is interested in.
With the above content display method, apparatus, storage medium, and computer device, the emotion type of the currently logged-in user is recognized through micro-expressions, and the displayed content is adjusted dynamically in combination with the currently logged-in user's emotion type and habitual preferences. This enriches the timing with which promotional content is displayed in the application, so that the user is more likely to be interested in the promotional content displayed, thereby effectively improving the effectiveness of displaying promotional information.
Additional aspects and advantages of this application will be given in part in the following description; they will become apparent from the description below, or be learned through practice of this application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a content display method for an application program according to an embodiment of this application;
FIG. 2 is a schematic diagram of a content display apparatus for an application program according to an embodiment of this application;
FIG. 3 is a schematic diagram of a computer device according to an embodiment of this application.
Detailed Description
As shown in FIG. 1, a content display method for an application program according to an embodiment includes:
S110: acquiring a face image of the currently logged-in user of the application.
The APP may be a video playback APP, an insurance APP, or any other type of APP. Since the user's preference for each content will be needed later, a face image of the APP's currently logged-in user must be acquired. The face image may be captured when the user has just logged in to the APP, when the APP switches from running in the background to running in the foreground, or at regular intervals while the APP runs in the foreground. Since the user generally faces the APP interface, the front camera of the terminal on which the APP runs can be called directly to capture the face image. It should be understood that this application does not limit the manner in which the face image is acquired.
S120: performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user.
Micro-expression (facial micro-expression) is a term from psychology. People express their inner feelings to others by making expressions; between the different expressions people make, or within a single expression, the face "leaks" other information, which is called a micro-expression. Emotion types include happy, sad, afraid, angry, and so on. By performing micro-expression recognition on the face image, the user's current emotional state, such as happy or sad, can be identified.
S130: determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application.
The historical behavior data is the currently logged-in user's past operation behavior on each content in the APP, for example, clicks on content, purchases of the products presented by content, comments on content, and so on. The historical behavior data of the currently logged-in user can be obtained from the server corresponding to the APP by crawler technology. To simplify the calculation, only the historical behavior data of the currently logged-in user over the last few days may be counted.
The preference is used to characterize how interested the currently logged-in user is in each content, the degree to which each content meets the user's needs, or how important each content is to the user. Optionally, if the currently logged-in user is an existing customer, the user's preference for each content can be calculated in advance from the historical behavior data and retrieved directly when recommending content; if the currently logged-in user is a new customer, the preference of the content selected by the user at registration is set to 1 and the preference of unselected content is set to 0.
S140: determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in.
A threshold can be set in advance: when the preference is greater than the threshold, the content is determined to be content that the currently logged-in user is interested in; when the preference is less than or equal to the threshold, the content is determined to be content that the currently logged-in user is not interested in. The filtered interesting content can form one set, and the uninteresting content another set.
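The threshold split described in step S140 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold value and the content names are hypothetical.

```python
def split_by_preference(preferences, threshold=0.5):
    """Partition contents into interested / not-interested sets by a preset threshold.

    preferences: mapping of content name -> preference score.
    A score strictly greater than the threshold counts as "interested".
    """
    interested = {c for c, p in preferences.items() if p > threshold}
    not_interested = set(preferences) - interested
    return interested, not_interested

# Hypothetical preference scores for four contents.
prefs = {"policy_mgmt": 0.9, "promo_a": 0.2, "promo_b": 0.4, "fund_shop": 0.7}
interested, not_interested = split_by_preference(prefs)
```

For a new customer, per the text, `prefs` would simply be 1 for contents selected at registration and 0 otherwise, so every selected content lands in the interested set.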
S150: determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in.
Users are generally not very interested in promotional content such as profit-making activities and holiday promotions; if such content is displayed in the APP in a fixed way, it is very likely to annoy users, and the display is not very effective. This application therefore determines, according to the emotion type of the currently logged-in user, the number of items of uninteresting content to display, thereby dynamically adjusting the uninteresting content the APP currently shows.
S160: selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content together with the content that the currently logged-in user is interested in.
This application enriches the timing with which promotional content is displayed in the application, so that the user is more likely to be interested in the displayed promotional content. This effectively improves the effectiveness of displaying promotional information and also avoids wasting screen display space.
There are many ways to recognize the emotion type through micro-expressions. Two embodiments are described below; it should be understood that this application is not limited to the following two approaches.
In one embodiment, performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user includes:
S1201: extracting the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user.
The inventor of this application found through research that the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead characterize the user's emotion type well. These features can therefore be extracted from the face image using existing techniques, and together they constitute the feature vector of the currently logged-in user.
S1202: inputting the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the micro-expression recognition model recognizing the emotion type of the currently logged-in user.
A micro-expression recognition model is trained in advance to compute the emotion type from the feature vector of a face image. In one embodiment, the micro-expression recognition model is built as follows: acquiring a sample face image and a label of the sample face image, the label being used to characterize the emotion type of the sample face image; extracting the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the sample face image, the extracted features constituting the feature vector of the sample face image; and inputting the feature vector and label of the sample face image into a convolutional neural network for training to obtain the micro-expression recognition model. Specifically, the label is used to determine the error of the convolutional neural network, and the network parameters are adjusted according to this error until the accuracy of the network exceeds a preset value; the convolutional neural network obtained at that point is the micro-expression recognition model.
After the micro-expression recognition model is built, the feature vector of the currently logged-in user is input into the model, and the model outputs the emotion type of the currently logged-in user.
In another embodiment, performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user includes:
S120a: extracting the features of the corners of the mouth, cheeks, eyelids, and tails of the eyes from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user.
The computing power of mobile terminals is limited, and since content such as advertisements is basically content the user is not very interested in, this application mainly considers how to display such content in the APP at an appropriate moment so as to improve the business conversion rate; it therefore suffices to simply recognize the happy emotion type. The inventor found through research that the facial movements of a happy person generally include: upturned corners of the mouth, raised and wrinkled cheeks, contracted eyelids, and crow's feet forming at the tails of the eyes. The features of the corners of the mouth, cheeks, eyelids, and tails of the eyes can therefore be extracted from the face image, the extracted features constituting the feature vector of the currently logged-in user; the extraction itself may be implemented with existing techniques.
S120b: calculating the similarity between the feature vector of the currently logged-in user and a preset feature vector, the preset feature vector including the pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy.
The features of each user's corners of the mouth, cheeks, eyelids, and tails of the eyes when happy are stored in advance. When the emotion type of the currently logged-in user needs to be determined, the pre-stored happy features of the currently logged-in user are retrieved, and their similarity to the corresponding features in the user's current feature vector is computed; the similarity computation may be implemented with existing techniques.
S120c: if the similarity is greater than a preset threshold, determining that the emotion type of the currently logged-in user is happy; otherwise, determining that the emotion type of the currently logged-in user is unhappy.
The preset threshold can be set according to actual needs. After the similarity is calculated, if it is greater than the preset threshold, the currently logged-in user is in a happy state; otherwise, the user is in an unhappy state.
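The comparison in steps S120b–S120c can be sketched as follows. The source does not fix a particular similarity measure, so this sketch assumes cosine similarity and a hypothetical threshold of 0.9; the 4-dimensional feature values are also invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def emotion_type(current_features, happy_reference, threshold=0.9):
    """Return 'happy' when the current features match the stored happy template closely enough."""
    return "happy" if cosine_similarity(current_features, happy_reference) > threshold else "unhappy"

# Hypothetical stored features: mouth corners, cheeks, eyelids, eye tails when happy.
stored_happy = [0.8, 0.6, 0.4, 0.7]
assert emotion_type([0.79, 0.61, 0.41, 0.69], stored_happy) == "happy"
assert emotion_type([0.10, 0.90, 0.05, 0.20], stored_happy) == "unhappy"
```

In practice the threshold would be tuned per deployment, as the text notes ("set according to actual needs").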
The currently logged-in user's preference for each content in the APP can be obtained from the collected historical behavior data of the user on each content. In one embodiment, determining, according to the acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application includes:
S1301: counting, according to the historical behavior data, the number of clicks, the number of purchases, and the number of comments posted by the currently logged-in user on each content.
The numbers of clicks, purchases, and posted comments are counted from the historical behavior data; the counting may be implemented with existing techniques.
S1302: multiplying the number of clicks, the number of purchases, and the number of comments posted on the same content by their respective weights and summing the products, to obtain the currently logged-in user's preference for each content.
Weights are set for the number of clicks, the number of purchases, and the number of posted comments respectively. To improve the accuracy of the preference calculation, in one embodiment the weight of the number of clicks, the weight of the number of posted comments, and the weight of the number of purchases increase in that order; that is, the click weight is smaller than the comment weight, and the comment weight is smaller than the purchase weight.
The currently logged-in user's preference for a given content is computed by the formula: click weight * number of clicks + purchase weight * number of purchases + comment weight * number of comments, where the numbers of clicks, purchases, and comments are those of the currently logged-in user on that content.
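The weighted sum of step S1302 can be written down directly. The weight values below are hypothetical; the only constraint stated by the text is their ordering, click weight < comment weight < purchase weight.

```python
def preference(clicks, purchases, comments,
               w_click=1.0, w_comment=2.0, w_purchase=3.0):
    """Preference of one user for one content:
    click weight * clicks + purchase weight * purchases + comment weight * comments.
    Weights are hypothetical but respect the ordering click < comment < purchase."""
    return w_click * clicks + w_purchase * purchases + w_comment * comments

# One content: 10 clicks, 2 purchases, 3 posted comments.
assert preference(10, 2, 3) == 22.0  # 1.0*10 + 3.0*2 + 2.0*3
```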
There are many ways to determine, in combination with the emotion type, the selected number of content that the currently logged-in user is not interested in. Since human emotions can roughly be divided into happy and unhappy, in one embodiment, to improve computational efficiency, the emotion types include happy and unhappy, and determining the selected number of content that the currently logged-in user is not interested in according to the user's emotion type includes:
S1501: when the emotion type of the currently logged-in user is happy, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value.
When the currently logged-in user is happy, displaying content the user is not very interested in, such as promotional information like advertising campaigns, gives a relatively high probability that the user will click to view it, and an even higher likelihood of a purchase, which improves the effectiveness of the promotional display. In this case, therefore, not only the content the currently logged-in user is interested in but also a larger amount of uninteresting content can be displayed. The first value is generally a positive integer and can be set according to actual needs, as long as it is greater than the second value below.
After the first value is determined, that number of items is selected from all the content the currently logged-in user is not interested in. The selection can follow a custom rule, for example taking the top N items (N being the first value) in descending order of preference, or taking N items whose preference falls within a certain range, and so on.
S1502: when the emotion type of the currently logged-in user is unhappy, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value, the second value being smaller than the first value.
When the currently logged-in user is unhappy, displaying content the user is not very interested in, such as promotional information like advertising campaigns, is very likely to annoy the user and may even hurt the APP's retention rate. In this case, therefore, only a small amount of uninteresting content is displayed, or no uninteresting content is displayed at all.
After the second value is determined, that number of items is selected from all the content the currently logged-in user is not interested in. The selection can follow a custom rule, for example taking the top M items (M being the second value) in descending order of preference, or taking M items whose preference falls within a certain range, and so on.
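The "top-N by preference" selection rule mentioned for S1501/S1502 can be sketched as follows; the first and second values (5 and 2) and the promo names are hypothetical presets, with the first value greater than the second as the text requires.

```python
def select_uninteresting(not_interested, emotion, first_value=5, second_value=2):
    """Pick the top-N uninteresting contents in descending order of preference.

    N is the preset first value when the user is happy, otherwise the
    (smaller) preset second value. not_interested maps content -> preference.
    """
    n = first_value if emotion == "happy" else second_value
    ranked = sorted(not_interested.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:n]]

promos = {"promo_a": 0.40, "promo_b": 0.10, "promo_c": 0.30, "promo_d": 0.20}
assert select_uninteresting(promos, "happy", first_value=3) == ["promo_a", "promo_c", "promo_d"]
assert select_uninteresting(promos, "unhappy", second_value=1) == ["promo_a"]
```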
To further improve the effectiveness of displaying promotional content, in one embodiment the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities. The emotion intensity characterizes how strong the emotion is, and the intensity levels can be determined according to actual needs; for example, from high to low, the intensities of the happy emotion type may be elation and joy, and the intensities of the unhappy emotion type may be despair, grief, sadness, sympathy, and calm.
Based on the intensities into which the happy emotion type is divided, in one embodiment, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value when the user's emotion type is happy includes:
S1501a: determining the happy emotional intensity of the currently logged-in user.
The micro-expression features corresponding to each intensity of the happy emotion type are set in advance, and the emotion type (happy) and its intensity are determined from the micro-expression recognition result of the currently logged-in user's face image. Note that when there are multiple micro-expression features, it suffices that more than a preset number of micro-expression features in the user's recognition result match the features corresponding to a preset intensity.
For example, suppose the happy emotion type is divided into the two intensities elation and joy, where elation corresponds to a lip opening angle of A–B and joy to a lip opening angle of C–D. If the currently logged-in user's lip opening angle E falls within A–B, the user's happy emotional intensity is elation.
S1501b: determining the selected number corresponding to the happy emotional intensity of the currently logged-in user according to a preset first correspondence between each happy emotional intensity and each selected number; in the first correspondence, the happy emotional intensity and the selected number are in a direct proportional relationship, i.e., the higher the happy intensity, the more uninteresting content is selected.
The selected number corresponding to each happy intensity is set in advance; once the user's happy intensity is determined, the number of items to select from the uninteresting content, i.e., the selected number, can be looked up from this correspondence.
For example, if the happy emotion type is divided into elation and joy, with selected numbers of 50 and 40 respectively, and the user's happy intensity is determined to be elation, then the selected number is 50, and 50 items are to be selected from the uninteresting content.
Based on the intensities into which the unhappy emotion type is divided, in one embodiment, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value when the user's emotion type is unhappy includes:
S1502a: determining the unhappy emotional intensity of the currently logged-in user.
The micro-expression features corresponding to each intensity of the unhappy emotion type are set in advance, and the emotion type (unhappy) and its intensity are determined from the micro-expression recognition result of the currently logged-in user's face image. Again, when there are multiple micro-expression features, it suffices that more than a preset number of features in the recognition result match the features corresponding to a preset intensity.
For example, suppose the unhappy emotion type is divided into despair, grief, sadness, sympathy, and calm, corresponding to lip opening angles of a–b, c–d, e–f, g–h, and i–j respectively. If the currently logged-in user's lip opening angle k falls within e–f, the user's unhappy emotional intensity is sadness.
S1502b: determining the selected number corresponding to the unhappy emotional intensity of the currently logged-in user according to a preset second correspondence between each unhappy emotional intensity and each selected number; in the second correspondence, the unhappy emotional intensity and the selected number are in an inverse proportional relationship, i.e., the higher the unhappy intensity, the less uninteresting content is selected.
The selected number corresponding to each unhappy intensity is set in advance; once the user's unhappy intensity is determined, the number of items to select from the uninteresting content, i.e., the selected number, can be looked up from this correspondence.
For example, if the unhappy emotion type is divided into despair, grief, sadness, sympathy, and calm, with selected numbers of 2, 3, 4, 5, and 6 respectively (fewer items for stronger unhappiness, consistent with the inverse proportional relationship above), and the user's unhappy intensity is determined to be sadness, then the selected number is 4, and 4 items are to be selected from the uninteresting content.
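The first and second correspondences of S1501b/S1502b are plain lookup tables. The concrete numbers below are hypothetical presets chosen to respect the stated relationships: more items for stronger happiness (direct proportion), fewer items for stronger unhappiness (inverse proportion).

```python
# Hypothetical preset correspondences between emotion intensity and selected number.
FIRST_CORRESPONDENCE = {"elation": 50, "joy": 40}  # happy: stronger -> more items
SECOND_CORRESPONDENCE = {"despair": 2, "grief": 3, "sadness": 4,
                         "sympathy": 5, "calm": 6}  # unhappy: stronger -> fewer items

def selected_number(emotion, intensity):
    """Look up the preset selected number for a recognized emotion intensity."""
    table = FIRST_CORRESPONDENCE if emotion == "happy" else SECOND_CORRESPONDENCE
    return table[intensity]

assert selected_number("happy", "elation") == 50
assert selected_number("unhappy", "sadness") == 4
```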
This application can further determine the displayed content in combination with whether the currently logged-in user is idle. Therefore, in one embodiment, after determining the selected number of content that the currently logged-in user is not interested in according to the user's emotion type, and before selecting the corresponding number of content from the uninteresting content and displaying it together with the content the user is interested in, the method further includes:
S151: detecting whether the currently logged-in user is in an idle state.
There are many ways to detect whether the currently logged-in user is idle. For example, it can be determined from the user's walking speed: if the walking speed is greater than a preset speed, the user is busy; otherwise, the user is idle. As another example, it can be determined from the number of work-related APPs running on the user's terminal: if that number is greater than a preset number, the user is busy; otherwise, the user is idle. It should be understood that other ways of determining whether the currently logged-in user is idle may also be used; this application does not limit this.
S152: if the user is in an idle state, keeping the selected number of content that the currently logged-in user is not interested in unchanged, or increasing that selected number by a first preset value.
Since an idle user is more likely to view uninteresting content, at such a moment the determined selected number can be kept unchanged or further increased, so that the currently logged-in user sees more uninteresting content, improving the effectiveness of displaying it.
S153: if the user is not in an idle state, reducing the selected number of content that the currently logged-in user is not interested in by a second preset value.
Since a non-idle user is very unlikely to view uninteresting content, at such a moment the determined selected number needs to be further reduced, to avoid bothering the user and giving a poor application experience. If the selected number becomes 0 or negative after being reduced by the second preset value, no uninteresting content is displayed to the currently logged-in user.
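The idle-state adjustment of steps S151–S153 can be sketched as a small clamp. The first and second preset values (2 and 3) are hypothetical, and whether to keep or grow the number when idle is a configuration choice, as the text allows either.

```python
def adjust_selected_number(n, idle, increase=2, decrease=3, grow_when_idle=True):
    """Adjust the previously determined selected number by the idle state.

    Idle: keep n unchanged, or raise it by a first preset value.
    Busy: lower n by a second preset value; 0 or less means show nothing.
    """
    if idle:
        n = n + increase if grow_when_idle else n
    else:
        n = n - decrease
    return max(n, 0)  # a non-positive result means no uninteresting content is shown

assert adjust_selected_number(4, idle=True) == 6
assert adjust_selected_number(4, idle=True, grow_when_idle=False) == 4
assert adjust_selected_number(2, idle=False) == 0  # 2 - 3 clamps to 0: nothing shown
```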
Considering the case where the application uses face login, in order to adapt the application content to the state of the currently logged-in user, in one embodiment, before acquiring the face image of the currently logged-in user of the application, the method further includes: using a camera device to capture a face image of the currently logged-in user and logging in to the application with the face image as the user's identity. Displaying the corresponding number of content and the content that the currently logged-in user is interested in then includes: displaying the corresponding number of content and the content that the currently logged-in user is interested in on the home page of the application.
Based on the same inventive concept, this application further provides a content display apparatus for an application program; a specific implementation of the apparatus is described in detail below with reference to the drawings.
As shown in FIG. 2, a content display apparatus for an application program according to an embodiment includes:
a face image acquisition module 210, configured to acquire a face image of the currently logged-in user of the application;
an emotion type recognition module 220, configured to perform micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
a preference determination module 230, configured to determine, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
a content determination module 240, configured to determine, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in;
a selected number determination module 250, configured to determine, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
a display module 260, configured to select a corresponding number of content from the content that the currently logged-in user is not interested in, and to display the corresponding number of content together with the content that the currently logged-in user is interested in.
In one embodiment, the emotion types include happy and unhappy, and the selected number determination module 250 includes:
a first selected number determining module, configured to determine that the selected number of content that the currently logged-in user is not interested in is a preset first value when the user's emotion type is happy;
a second selected number determining module, configured to determine that the selected number of content that the currently logged-in user is not interested in is a preset second value when the user's emotion type is unhappy, the second value being smaller than the first value.
In one embodiment, the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities. The first selected number determining module determines the happy emotional intensity of the currently logged-in user and determines the selected number corresponding to that intensity according to a preset first correspondence between each happy emotional intensity and each selected number; in the first correspondence, the happy emotional intensity and the selected number are in a direct proportional relationship. The second selected number determining module determines the unhappy emotional intensity of the currently logged-in user and determines the selected number corresponding to that intensity according to a preset second correspondence between each unhappy emotional intensity and each selected number; in the second correspondence, the unhappy emotional intensity and the selected number are in an inverse proportional relationship.
In one embodiment, the apparatus further includes a selected number adjustment module between the selected number determination module 250 and the display module 260. The selected number adjustment module is configured to detect whether the currently logged-in user is in an idle state; when in an idle state, to keep the selected number of content that the currently logged-in user is not interested in unchanged or increase it by a first preset value; and when not in an idle state, to reduce that selected number by a second preset value.
In one embodiment, the emotion type recognition module 220 includes:
a feature vector extraction module, configured to extract the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user;
an emotion type recognition module, configured to input the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the model recognizing the emotion type of the currently logged-in user.
In one embodiment, the apparatus of this application further includes a micro-expression recognition model construction module, including:
an image and label acquisition unit, configured to acquire a sample face image and a label of the sample face image, the label being used to characterize the emotion type of the sample face image;
a feature vector extraction unit, configured to extract the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the sample face image, the extracted features constituting the feature vector of the sample face image;
a micro-expression recognition model training unit, configured to input the feature vector and label of the sample face image into a convolutional neural network for training, to obtain the micro-expression recognition model.
In another embodiment, the emotion type recognition module 220 includes:
a feature vector extraction unit, configured to extract the features of the corners of the mouth, cheeks, eyelids, and tails of the eyes from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user;
a similarity calculation unit, configured to calculate the similarity between the feature vector of the currently logged-in user and a preset feature vector, the preset feature vector including the pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy;
an emotion type determining unit, configured to determine that the emotion type of the currently logged-in user is happy when the similarity is greater than a preset threshold, and otherwise that it is unhappy.
In one embodiment, the preference determination module 230 includes:
a statistics unit, configured to count, according to the historical behavior data, the number of clicks, the number of purchases, and the number of comments posted by the currently logged-in user on each content;
a preference determination unit, configured to multiply the number of clicks, the number of purchases, and the number of comments posted on the same content by their respective weights and sum the products, to obtain the currently logged-in user's preference for each content.
In one embodiment, the content display apparatus further includes a login module connected to the face image acquisition module. The login module uses a camera device to capture a face image of the currently logged-in user and logs in to the application with the face image as the user's identity; the display module 260 displays the corresponding number of content and the content that the currently logged-in user is interested in on the home page of the application.
The other technical features of the above content display apparatus are the same as those of the above content display method and are not repeated here.
An embodiment of this application further provides a computer-readable storage medium; the storage medium is a volatile or non-volatile storage medium on which a computer program is stored, and when the program is executed by a processor, any one of the above content display methods is implemented. The storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, the storage medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer), such as a read-only memory, a magnetic disk, or an optical disk.
An embodiment of this application further provides a computer device, including:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement a content display method for an application program:
wherein the content display method of the application program includes:
acquiring a face image of the currently logged-in user of the application;
performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the user is not interested in;
determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content together with the content that the currently logged-in user is interested in.
FIG. 3 is a schematic structural diagram of the computer device of this application, which includes a processor 320, a storage device 330, an input unit 340, a display unit 350, and other components. Those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation on all computer devices, which may include more or fewer components than shown, or combine certain components. The storage device 330 may be used to store the application program 310 and various functional modules; the processor 320 runs the application program 310 stored in the storage device 330 to execute the various functional applications and data processing of the device. The storage device 330 may be an internal memory or an external memory, or include both. The internal memory may include read-only memory, programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory. The external memory may include a hard disk, a floppy disk, a ZIP disk, a USB flash drive, a magnetic tape, and so on. The storage devices disclosed in this application include, but are not limited to, these types; the storage device 330 disclosed in this application is merely an example and not a limitation.
The input unit 340 is used to receive signal input and to receive the face image of the currently logged-in user. The input unit 340 may include a touch panel and other input devices. The touch panel can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, and a joystick. The display unit 350 may be used to display information input by the user or provided to the user, as well as the various menus of the computer device, and may take the form of a liquid crystal display, an organic light-emitting diode display, or the like. The processor 320 is the control center of the computer device: it connects the various parts of the entire computer through various interfaces and lines, and performs various functions and processes data by running or executing the software programs and/or modules stored in the storage device 330 and calling the data stored in the storage device.
In one implementation, the computer device includes one or more processors 320, one or more storage devices 330, and one or more application programs 310, where the one or more application programs 310 are stored in the storage device 330 and configured to be executed by the one or more processors 320, and the one or more application programs 310 are configured to execute the content display method of the application program described in the above embodiments.

Claims (20)

  1. A content display method for an application program, comprising:
    acquiring a face image of the currently logged-in user of the application;
    performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
    determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
    determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in;
    determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
    selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content and the content that the currently logged-in user is interested in.
  2. The content display method for an application program according to claim 1, wherein the emotion types comprise happy and unhappy;
    the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in comprises:
    when the emotion type of the currently logged-in user is happy, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value;
    when the emotion type of the currently logged-in user is unhappy, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value, the second value being smaller than the first value.
  3. The content display method for an application program according to claim 2, wherein the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities;
    the determining, when the emotion type of the currently logged-in user is happy, that the selected number of content that the currently logged-in user is not interested in is a preset first value comprises: determining the happy emotional intensity of the currently logged-in user; and determining, according to a preset first correspondence between each happy emotional intensity and each selected number, the selected number corresponding to the happy emotional intensity of the currently logged-in user, wherein in the first correspondence the happy emotional intensity and the selected number are in a direct proportional relationship;
    the determining, when the emotion type of the currently logged-in user is unhappy, that the selected number of content that the currently logged-in user is not interested in is a preset second value comprises: determining the unhappy emotional intensity of the currently logged-in user; and determining, according to a preset second correspondence between each unhappy emotional intensity and each selected number, the selected number corresponding to the unhappy emotional intensity of the currently logged-in user, wherein in the second correspondence the unhappy emotional intensity and the selected number are in an inverse proportional relationship.
  4. The content display method for an application program according to claim 1, wherein, after the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in, and before the selecting a corresponding number of content from the content that the currently logged-in user is not interested in and displaying the corresponding number of content and the content that the currently logged-in user is interested in, the method further comprises:
    detecting whether the currently logged-in user is in an idle state;
    if the currently logged-in user is in an idle state, keeping the selected number of content that the currently logged-in user is not interested in unchanged, or increasing the selected number of content that the currently logged-in user is not interested in by a first preset value;
    if the currently logged-in user is not in an idle state, reducing the selected number of content that the currently logged-in user is not interested in by a second preset value.
  5. The content display method for an application program according to claim 1, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises:
    extracting the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user;
    inputting the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the micro-expression recognition model recognizing the emotion type of the currently logged-in user.
  6. The content display method for an application program according to claim 1, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises:
    extracting the features of the corners of the mouth, cheeks, eyelids, and tails of the eyes from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user;
    calculating the similarity between the feature vector of the currently logged-in user and a preset feature vector, the preset feature vector comprising the pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy;
    if the similarity is greater than a preset threshold, determining that the emotion type of the currently logged-in user is happy; otherwise, determining that the emotion type of the currently logged-in user is unhappy.
  7. The content display method for an application program according to any one of claims 1 to 6, wherein the determining, according to the acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application comprises:
    counting, according to the historical behavior data, the number of clicks, the number of purchases, and the number of comments posted by the currently logged-in user on each content;
    multiplying the number of clicks, the number of purchases, and the number of comments posted on the same content by their respective weights and summing the products, to obtain the currently logged-in user's preference for each content.
  8. A content display apparatus for an application program, comprising:
    a face image acquisition module, configured to acquire a face image of the currently logged-in user of the application;
    an emotion type recognition module, configured to perform micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
    a preference determination module, configured to determine, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
    a content determination module, configured to determine, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in;
    a selected number determination module, configured to determine, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
    a display module, configured to select a corresponding number of content from the content that the currently logged-in user is not interested in, and to display the corresponding number of content and the content that the currently logged-in user is interested in.
  9. A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, a content display method for an application program is implemented:
    wherein the content display method of the application program comprises:
    acquiring a face image of the currently logged-in user of the application;
    performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
    determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
    determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in;
    determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
    selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content and the content that the currently logged-in user is interested in.
  10. The computer-readable storage medium according to claim 9, wherein the emotion types comprise happy and unhappy;
    the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in comprises:
    when the emotion type of the currently logged-in user is happy, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value;
    when the emotion type of the currently logged-in user is unhappy, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value, the second value being smaller than the first value.
  11. The computer-readable storage medium according to claim 10, wherein the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities;
    the determining, when the emotion type of the currently logged-in user is happy, that the selected number of content that the currently logged-in user is not interested in is a preset first value comprises: determining the happy emotional intensity of the currently logged-in user; and determining, according to a preset first correspondence between each happy emotional intensity and each selected number, the selected number corresponding to the happy emotional intensity of the currently logged-in user, wherein in the first correspondence the happy emotional intensity and the selected number are in a direct proportional relationship;
    the determining, when the emotion type of the currently logged-in user is unhappy, that the selected number of content that the currently logged-in user is not interested in is a preset second value comprises: determining the unhappy emotional intensity of the currently logged-in user; and determining, according to a preset second correspondence between each unhappy emotional intensity and each selected number, the selected number corresponding to the unhappy emotional intensity of the currently logged-in user, wherein in the second correspondence the unhappy emotional intensity and the selected number are in an inverse proportional relationship.
  12. The computer-readable storage medium according to claim 9, wherein, after the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in, and before the selecting a corresponding number of content from the content that the currently logged-in user is not interested in and displaying the corresponding number of content and the content that the currently logged-in user is interested in, the method further comprises:
    detecting whether the currently logged-in user is in an idle state;
    if the currently logged-in user is in an idle state, keeping the selected number of content that the currently logged-in user is not interested in unchanged, or increasing the selected number of content that the currently logged-in user is not interested in by a first preset value;
    if the currently logged-in user is not in an idle state, reducing the selected number of content that the currently logged-in user is not interested in by a second preset value.
  13. The computer-readable storage medium according to claim 9, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises:
    extracting the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user;
    inputting the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the micro-expression recognition model recognizing the emotion type of the currently logged-in user.
  14. The computer-readable storage medium according to claim 9, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises:
    extracting the features of the corners of the mouth, cheeks, eyelids, and tails of the eyes from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user;
    calculating the similarity between the feature vector of the currently logged-in user and a preset feature vector, the preset feature vector comprising the pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy;
    if the similarity is greater than a preset threshold, determining that the emotion type of the currently logged-in user is happy; otherwise, determining that the emotion type of the currently logged-in user is unhappy.
  15. The computer-readable storage medium according to any one of claims 9 to 14, wherein the determining, according to the acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application comprises:
    counting, according to the historical behavior data, the number of clicks, the number of purchases, and the number of comments posted by the currently logged-in user on each content;
    multiplying the number of clicks, the number of purchases, and the number of comments posted on the same content by their respective weights and summing the products, to obtain the currently logged-in user's preference for each content.
  16. A computer device, wherein the computer device comprises:
    one or more processors;
    a storage device for storing one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement a content display method for an application program:
    wherein the content display method of the application program comprises:
    acquiring a face image of the currently logged-in user of the application;
    performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user;
    determining, according to acquired historical behavior data of the currently logged-in user in the application, the currently logged-in user's preference for each content in the application;
    determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in;
    determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in;
    selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content and the content that the currently logged-in user is interested in.
  17. The computer device according to claim 16, wherein the emotion types comprise happy and unhappy;
    the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in comprises:
    when the emotion type of the currently logged-in user is happy, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value;
    when the emotion type of the currently logged-in user is unhappy, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value, the second value being smaller than the first value.
  18. The computer device according to claim 17, wherein the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities;
    the determining, when the emotion type of the currently logged-in user is happy, that the selected number of content that the currently logged-in user is not interested in is a preset first value comprises: determining the happy emotional intensity of the currently logged-in user; and determining, according to a preset first correspondence between each happy emotional intensity and each selected number, the selected number corresponding to the happy emotional intensity of the currently logged-in user, wherein in the first correspondence the happy emotional intensity and the selected number are in a direct proportional relationship;
    the determining, when the emotion type of the currently logged-in user is unhappy, that the selected number of content that the currently logged-in user is not interested in is a preset second value comprises: determining the unhappy emotional intensity of the currently logged-in user; and determining, according to a preset second correspondence between each unhappy emotional intensity and each selected number, the selected number corresponding to the unhappy emotional intensity of the currently logged-in user, wherein in the second correspondence the unhappy emotional intensity and the selected number are in an inverse proportional relationship.
  19. The computer device according to claim 16, wherein, after the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in, and before the selecting a corresponding number of content from the content that the currently logged-in user is not interested in and displaying the corresponding number of content and the content that the currently logged-in user is interested in, the method further comprises:
    detecting whether the currently logged-in user is in an idle state;
    if the currently logged-in user is in an idle state, keeping the selected number of content that the currently logged-in user is not interested in unchanged, or increasing the selected number of content that the currently logged-in user is not interested in by a first preset value;
    if the currently logged-in user is not in an idle state, reducing the selected number of content that the currently logged-in user is not interested in by a second preset value.
  20. The computer device according to claim 16, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises:
    extracting the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user;
    inputting the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the micro-expression recognition model recognizing the emotion type of the currently logged-in user.
PCT/CN2020/086149 2019-06-19 2020-04-22 Content display method and apparatus for application program, storage medium and computer device WO2020253360A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910532695.0 2019-06-19
CN201910532695.0A CN110389662A (zh) 2019-06-19 2019-06-19 Content display method and apparatus for application program, storage medium and computer device

Publications (1)

Publication Number Publication Date
WO2020253360A1 true WO2020253360A1 (zh) 2020-12-24

Family

ID=68285611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/086149 WO2020253360A1 (zh) 2019-06-19 2020-04-22 应用程序的内容展示方法、装置、存储介质和计算机设备

Country Status (2)

Country Link
CN (1) CN110389662A (zh)
WO (1) WO2020253360A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117971870A (zh) * 2024-02-05 2024-05-03 江苏顶石工业科技有限公司 Method for creating a view based on a dynamic SQL query as a configuration presentation mode

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110389662A (zh) * 2019-06-19 2019-10-29 深圳壹账通智能科技有限公司 Content display method and apparatus for application program, storage medium and computer device
CN113590926A (zh) * 2020-04-30 2021-11-02 北京爱笔科技有限公司 User interest recognition method, apparatus, device, and computer-readable medium
CN111738362B (zh) * 2020-08-03 2020-12-01 成都睿沿科技有限公司 Object recognition method and apparatus, storage medium, and electronic device
CN111860454B (zh) * 2020-08-04 2024-02-09 北京深醒科技有限公司 Model switching algorithm based on face recognition
CN113014980B (zh) * 2021-02-23 2023-07-18 北京字跳网络技术有限公司 Remote control method and apparatus, and electronic device
CN113222712A (zh) * 2021-05-31 2021-08-06 中国银行股份有限公司 Product recommendation method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108123972A (zh) * 2016-11-28 2018-06-05 腾讯科技(北京)有限公司 Multimedia file distribution method and apparatus
CN109559193A (zh) * 2018-10-26 2019-04-02 深圳壹账通智能科技有限公司 Product pushing method and apparatus based on intelligent recognition, computer device, and medium
CN109785045A (zh) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Pushing method and apparatus based on user behavior data
CN110389662A (zh) * 2019-06-19 2019-10-29 深圳壹账通智能科技有限公司 Content display method and apparatus for application program, storage medium and computer device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107343089A (zh) * 2017-06-20 2017-11-10 广东欧珀移动通信有限公司 Information pushing method and related product
CN108255307A (zh) * 2018-02-08 2018-07-06 竹间智能科技(上海)有限公司 Human-computer interaction method and system based on multimodal emotion and facial attribute recognition
CN109559208B (zh) * 2019-01-04 2022-05-03 平安科技(深圳)有限公司 Information recommendation method, server, and computer-readable medium



Also Published As

Publication number Publication date
CN110389662A (zh) 2019-10-29

Similar Documents

Publication Publication Date Title
WO2020253360A1 (zh) 应用程序的内容展示方法、装置、存储介质和计算机设备
WO2020238023A1 (zh) 信息推荐方法、装置、终端及存储介质
US11887352B2 (en) Live streaming analytics within a shared digital environment
US11393133B2 (en) Emoji manipulation using machine learning
US10019653B2 (en) Method and system for predicting personality traits, capabilities and suggested interactions from images of a person
WO2020253372A1 (zh) Information push method, apparatus, device, and storage medium based on big data analysis
US11056225B2 (en) Analytics for livestreaming based on image analysis within a shared digital environment
WO2020143156A1 (zh) Hotspot video annotation processing method, apparatus, computer device, and storage medium
US10289898B2 (en) Video recommendation via affect
US20170109571A1 (en) Image analysis using sub-sectional component evaluation to augment classifier usage
US10474875B2 (en) Image analysis using a semiconductor processor for facial evaluation
WO2018033154A1 (zh) Gesture control method, apparatus, and electronic device
US20170098122A1 (en) Analysis of image content with associated manipulation of expression presentation
CN109996091A (zh) Method, apparatus, electronic device, and computer-readable storage medium for generating a video cover
US20170095192A1 (en) Mental state analysis using web servers
TW202044066A (zh) Computing system and method for calculating the authenticity of a human user, and method for determining the authenticity of a loan applicant
US20140016860A1 (en) Facial analysis to detect asymmetric expressions
JP2019530041A (ja) Combining a face of a source image with a target image based on a search query
US11430561B2 (en) Remote computing analysis for cognitive state data metrics
JP2013114689A (ja) Usage measurement techniques and systems for interactive advertising
US11151385B2 (en) System and method for detecting deception in an audio-video response of a user
US20150186912A1 (en) Analysis in response to mental state expression requests
Wu et al. Understanding and modeling user-perceived brand personality from mobile application uis
US11762900B2 (en) Customized selection of video thumbnails to present on social media webpages
CN110046955A (zh) Face-recognition-based marketing method, apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20826787

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20826787

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 29/03/2022)
