WO2020253360A1 - Application content display method, device, storage medium and computer equipment - Google Patents
Application content display method, device, storage medium and computer equipment
- Publication number
- WO2020253360A1 (PCT/CN2020/086149)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- currently logged
- content
- interested
- selected number
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Definitions
- This application relates to the field of artificial intelligence biometrics technology. Specifically, this application relates to a method, device, storage medium, and computer equipment for displaying application content.
- APP (application) refers to a third-party application installed on a smart device.
- The emergence of APPs has made up for the deficiencies of the original system and has made users' lives more convenient and richer.
- APP also displays some promotional information such as advertising and marketing content. Take the Ping An Life APP as an example.
- The APP not only displays insurance policy and contract management content, but also displays an online product supermarket that lets users purchase and manage various financial products anytime and anywhere, as well as some customer activities and so on.
- The inventor realizes that the content displayed by APPs on the market is fixed in advance, and there is a high probability that users are not interested in, or even disgusted by, the promotional content, so this kind of content display method is not very effective.
- To this end, this application proposes an application content display method, device, storage medium, and computer equipment to improve the effectiveness of displaying promotional content in an application.
- the embodiments of the present application provide a method for displaying content of an application program, including:
- according to the emotion type of the currently logged-in user, determine the selected number of pieces of content that the currently logged-in user is not interested in;
- a corresponding number of pieces of content are selected from the content that the currently logged-in user is not interested in, and those pieces of content and the content that the currently logged-in user is interested in are displayed.
- an application content display device including:
- the face image acquisition module is used to acquire the face image of the currently logged-in user of the application;
- An emotion type recognition module configured to perform micro expression recognition on the face image of the currently logged-in user, and determine the emotion type of the currently logged-in user;
- a preference determination module configured to determine the currently logged-in user's preference for each content in the application according to the acquired historical behavior data of the currently logged-in user in the application;
- the content determination module is configured to determine the content that is of interest to and that is not of interest to the currently logged-in user according to the preference of the currently logged-in user for each content;
- the selected number determining module is configured to determine the selected number of content that the currently logged-in user is not interested in according to the emotion type of the currently logged-in user;
- the display module is used to select a corresponding number of content from the content that the currently logged-in user is not interested in, and display the corresponding number of content and the content that the currently logged-in user is interested in.
- the embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, a method for displaying the content of an application program is implemented:
- the content display method of the application program includes:
- a corresponding number of pieces of content are selected from the content that the currently logged-in user is not interested in, and those pieces of content and the content that the currently logged-in user is interested in are displayed.
- the embodiments of the present application also provide a computer device, the computer device including:
- one or more processors;
- a storage device for storing one or more programs.
- when the one or more programs are executed, the one or more processors implement an application content display method:
- the content display method of the application program includes:
- a corresponding number of pieces of content are selected from the content that the currently logged-in user is not interested in, and those pieces of content and the content that the currently logged-in user is interested in are displayed.
- The above-mentioned application content display method, device, storage medium, and computer equipment recognize the emotion type of the currently logged-in user through micro-expressions, combine the emotion type and habitual preferences of the currently logged-in user, and dynamically adjust the displayed content, enriching the content display timing of the promotional information in the application.
- As a result, the user has a greater probability of being interested in the promotional content displayed in the application, which effectively improves the effectiveness of the promotional content display.
- FIG. 1 is a schematic diagram of a content display method of an application program according to an embodiment of the application
- FIG. 2 is a schematic diagram of an application content display device according to an embodiment of the application
- FIG. 3 is a schematic diagram of a computer device according to an embodiment of the application.
- As shown in FIG. 1, one embodiment of the application content display method includes:
- the APP can be a video playback APP, an insurance APP, or other types of apps. Considering that the user's preference for each content needs to be obtained later, it is necessary to obtain the face image of the user currently logged in to the APP.
- the time to obtain the face image can be when the user just logs in to the app, when the app is switched from running in the background to the foreground, or when the app is running in the foreground to obtain the facial image at regular intervals.
- the front camera of the terminal where the APP is located can be directly called to take the face image. It should be understood that this application does not limit the way to obtain the face image.
- S120 Perform micro-expression recognition on the face image of the currently logged-in user, and determine the emotion type of the currently logged-in user.
- Micro-expression is a psychological term. People express their inner feelings to each other by making some facial expressions. Between different expressions that people make, or in a certain expression, other information will be "leaked" on the face. This information is called micro-expressions. Emotion types include happiness, sadness, fear, anger, etc. By performing micro-expression recognition on facial images, the user's current emotional state, such as happy, sad, etc., can be identified.
- The historical behavior data is the currently logged-in user's past operation behavior data on each piece of content in the APP, for example, clicking on content, purchasing products displayed by content, commenting on content, and so on.
- the historical behavior data of the currently logged-in user can be obtained from the server corresponding to the APP through crawler technology. For the convenience of calculation, only the historical behavior data of the currently logged-in user in the last few days can be counted.
- Preference is used to characterize the degree of interest of the currently logged-in user in each content, or whether each content meets the needs of the user, or the importance of each content to the user.
- If the currently logged-in user is an existing customer, the user's preference for each piece of content can be pre-calculated from historical behavior data and retrieved directly when recommending content; if the currently logged-in user is a new customer, the preference for the content the user selected during registration is set to 1, and the preference for unselected content is set to 0.
- S140 Determine the content that is of interest to the currently logged-in user and the content that is not of interest according to the preference of the currently logged-in user for each content.
- a threshold can be preset. When the preference is greater than the threshold, it is determined that the content is of interest to the currently logged-in user, and when the preference is less than or equal to the threshold, it is determined that the content is not of interest to the currently logged-in user.
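This thresholding step can be sketched in a few lines; the threshold value and the content names and scores below are illustrative assumptions, not values given in this application:

```python
# Illustrative sketch of step S140: partition content by a preset
# preference threshold. The threshold, content ids, and scores are
# hypothetical examples.
PREFERENCE_THRESHOLD = 0.5

def partition_content(preferences):
    """Split content ids into (interested, not_interested) sets."""
    interested = {c for c, p in preferences.items() if p > PREFERENCE_THRESHOLD}
    not_interested = set(preferences) - interested
    return interested, not_interested

interested, not_interested = partition_content(
    {"policy_management": 0.9, "fund_promotion": 0.2, "customer_activity": 0.5}
)
# A preference exactly equal to the threshold counts as "not interested",
# matching the "less than or equal to" rule above.
```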
- the content of interest that is filtered out can be formed into one set, and the content that is not of interest can be formed into another set.
- The application determines, according to the emotion type of the currently logged-in user, the amount of uninteresting content that needs to be displayed, so as to dynamically adjust the uninteresting content currently displayed by the APP.
- This application enriches the content display timing of the promotional information in the application, so that users have a greater probability of being interested in the promotional content displayed in the application, thus effectively improving the effectiveness of the promotional content display while avoiding wasted screen display space.
- the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user includes:
- The inventors of the present application have found through research that the features of the lips, cheeks, eyelids, eyes, eye tails, eyebrows, chin, nostrils, and forehead characterize the user's emotion type well, so existing techniques can be used to extract them.
- The features of the lips, cheeks, eyelids, eyes, eye tails, eyebrows, chin, nostrils, and forehead are extracted from the face image, and these features constitute the feature vector of the currently logged-in user.
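The assembly of this nine-region feature vector might be sketched as follows. The per-region extraction itself would rely on an existing facial-landmark technique, as the text notes; the region names and the scalar per-region features here are hypothetical placeholders:

```python
# Fixed region order for the feature vector described above. The actual
# per-region features would come from an existing landmark extractor;
# here each region is represented by a single placeholder value.
FACIAL_REGIONS = ["lips", "cheeks", "eyelids", "eyes", "eye_tails",
                  "eyebrows", "chin", "nostrils", "forehead"]

def build_feature_vector(region_features):
    """Concatenate per-region features in the fixed region order."""
    return [region_features[region] for region in FACIAL_REGIONS]
```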
- S1202. Input the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, and the micro-expression recognition model recognizes the emotion type of the currently logged-in user.
- The micro-expression recognition model is used to calculate the emotion type from the feature vector of a face image.
- The micro-expression recognition model is constructed as follows: obtain a sample face image and an identifier of the sample face image, where the identifier characterizes the emotion type of the sample face image; extract the features of the lips, cheeks, eyelids, eyes, eye tails, eyebrows, chin, nostrils, and forehead from the sample face image, and form the feature vector of the sample face image from the extracted features; input the feature vector of the sample face image and the identifier into a convolutional neural network for training to obtain the micro-expression recognition model.
- The identifier is used to determine the error of the convolutional neural network, and the parameters of the network are adjusted according to the error until the accuracy of the network exceeds a preset value; the network obtained at that point is the micro-expression recognition model.
- the feature vector of the currently logged-in user is input into the micro-expression recognition model, and the micro-expression recognition model outputs the emotion type of the currently logged-in user.
- the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user includes:
- S120a Extract the features of the corners of the mouth, cheeks, eyelids and the tail of the eyes from the face image of the currently logged-in user, and each extracted feature constitutes a feature vector of the currently logged-in user.
- This application mainly considers how to display advertisements and other content in the APP at the right time to increase the business conversion rate, so it suffices to simply identify the happy emotion type.
- The inventors of the present application have found through research that the facial movements when people are happy generally include: the corners of the mouth rise, the cheeks wrinkle, the eyelids contract, and crow's feet form at the tails of the eyes.
- Therefore, the features of the mouth corners, cheeks, eyelids, and eye tails can be extracted from the face image, and the extracted features form the feature vector of the currently logged-in user; the specific extraction method can be implemented using existing techniques.
- The preset feature vector includes the pre-stored features of the mouth corners, cheeks, eyelids, and eye tails of the currently logged-in user when happy.
- the features of the corners of the mouth, cheeks, eyelids, and the tail of the eyes when each user is happy are stored in advance.
- Retrieve the pre-stored features of the mouth corners, cheeks, eyelids, and eye tails of the currently logged-in user when happy, and calculate the similarity between these features and the corresponding features in the feature vector of the currently logged-in user; the specific similarity calculation can be implemented using an existing method.
- S120c If the similarity is greater than a preset threshold, determine that the emotion type of the currently logged-in user is happy; otherwise, determine that the emotion type of the currently logged-in user is unhappy.
- the preset threshold can be set according to actual needs. After the similarity is calculated, if the similarity is greater than the preset threshold, it means that the currently logged-in user is in a happy state; otherwise, it indicates that the currently logged-in user is in an unhappy state.
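One plausible realization of steps S120b and S120c is sketched below. The text leaves the similarity measure to existing methods, so cosine similarity is assumed here, and the threshold value is illustrative:

```python
import math

SIMILARITY_THRESHOLD = 0.8  # hypothetical preset threshold

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify_emotion(current_features, stored_happy_features):
    """Compare current features with the user's stored 'happy' features
    and return the coarse emotion type per step S120c."""
    sim = cosine_similarity(current_features, stored_happy_features)
    return "happy" if sim > SIMILARITY_THRESHOLD else "unhappy"
```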
- the preference of the currently logged-in user for each content in the APP can be obtained according to the collected historical behavior data of the currently logged-in user for each content in the APP.
- Determining the currently logged-in user's preference for each piece of content in the application according to the historical behavior data of the currently logged-in user in the application includes:
- S1301 calculate the number of clicks, the number of purchases, and the number of comments made by the currently logged-in user on each content.
- the number of clicks, the number of purchases, and the number of comments posted are counted.
- the specific counting method can be implemented according to the existing method in the prior art.
- The weight of the number of clicks, the weight of the number of comments, and the weight of the number of purchases increase in that order; that is, the click weight is less than the comment weight, and the comment weight is less than the purchase weight.
- The currently logged-in user's preference for a piece of content is calculated by the formula: preference = click weight × number of clicks + purchase weight × number of purchases + comment weight × number of comments, where the number of clicks is the number of times the currently logged-in user clicked the content, the number of purchases is the number of times the user purchased through the content, and the number of comments is the number of comments the user posted on the content.
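The weighted-sum formula translates directly into code. The weight values below are illustrative assumptions; the only constraint stated above is the ordering click weight < comment weight < purchase weight:

```python
# Hypothetical weights obeying the ordering described above:
# click weight < comment weight < purchase weight.
W_CLICK, W_COMMENT, W_PURCHASE = 1, 2, 3

def preference(clicks, purchases, comments):
    """Weighted sum over the three behavior counts for one piece of content."""
    return W_CLICK * clicks + W_PURCHASE * purchases + W_COMMENT * comments

# e.g. 10 clicks, 2 purchases, 3 comments -> 1*10 + 3*2 + 2*3 = 22
```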
- The emotion types include happy and unhappy. Determining, according to the emotion type of the currently logged-in user, the selected number of pieces of content that the currently logged-in user is not interested in includes:
- the first value is generally a positive integer and can be set according to actual needs, as long as it is greater than the second value that follows.
- Content whose quantity equals the first value is selected from all the content that the currently logged-in user is not interested in. The selection can follow custom rules, such as taking the top N pieces (N being the first value) in descending order of preference, or taking N pieces whose preference falls within a certain range.
- Content whose quantity equals the second value is selected in the same way, such as taking the top M pieces (M being the second value) in descending order of preference, or taking M pieces whose preference falls within a certain range.
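A minimal sketch of the "top N by preference" custom rule mentioned above (the content ids and scores are made up for illustration):

```python
def select_top_n(not_interested, n):
    """Pick the n uninteresting content ids with the highest preference.

    not_interested maps content id -> preference score; returns ids in
    descending order of preference, at most n of them.
    """
    return sorted(not_interested, key=not_interested.get, reverse=True)[:n]
```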
- the happy emotion types are divided into different emotion intensities, and the unhappy emotion types are divided into different emotion intensities.
- Emotion intensity is used to represent the intensity of emotion.
- The levels of emotional intensity can be set according to actual needs. For example, the emotional intensities of the happy emotion type, from high to low, are carnival and happy; the emotional intensities of the unhappy emotion type, from high to low, are despair, grief, sadness, sympathy, and calm.
- When the emotion type of the currently logged-in user is happy, determining that the selected number of pieces of content the currently logged-in user is not interested in is a preset first value includes:
- Determine the happy emotional intensity of the currently logged-in user. The micro-expression features corresponding to each emotional intensity of the happy emotion type are preset, and the happy emotional intensity of the currently logged-in user is determined from the micro-expression recognition result of the user's face image. It should be noted that, when there are multiple micro-expression features, the emotional intensity is considered matched as long as more than a preset number of the micro-expression features in the recognition result match the micro-expression features corresponding to that preset emotional intensity.
- the preset happy mood type is divided into two emotional intensities: carnival and happy.
- the opening and closing angles of the lips corresponding to carnival are from A to B, and the opening and closing angles of the lips corresponding to happy are from C to D.
- the opening and closing angle of the lips is E, and E is between A and B, so the emotional intensity of the currently logged-in user's happiness is carnival.
- S1501b Determine the selected number corresponding to the happy emotional intensity of the currently logged-in user according to a preset first correspondence between each happy emotional intensity and each selected number. In the first correspondence, the happy emotional intensity is directly proportional to the selected number; that is, the higher the happy emotional intensity, the greater the number of uninteresting pieces of content that are selected.
- the number of selections corresponding to the intensity of each happy emotion is preset, and after the emotional intensity of the currently logged-in user is determined, according to the corresponding relationship, the number of content that the currently logged-in user needs to select from content that is not of interest can be found, that is, the number of selections.
- the preset emotional type of happiness is divided into two emotional intensities: carnival and happy.
- The selection number corresponding to carnival is 50 and that corresponding to happy is 40. If the currently logged-in user's happy emotional intensity is determined to be carnival, the selected number is 50, and 50 pieces of content are selected from the content the user is not interested in.
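The angle-range lookup plus the first correspondence might be sketched as follows. The numeric angle ranges are hypothetical (the text names them only as A to B and C to D), while the selection numbers 50 and 40 follow the example above:

```python
# Hypothetical lip opening-angle ranges (degrees) for the two happy
# intensities; the selection numbers follow the example above.
HAPPY_RANGES = {"carnival": (40.0, 60.0), "happy": (20.0, 40.0)}
HAPPY_SELECTION = {"carnival": 50, "happy": 40}

def happy_selected_number(lip_angle):
    """Map a lip opening angle to (intensity, selected number)."""
    for intensity, (low, high) in HAPPY_RANGES.items():
        if low <= lip_angle < high:
            return intensity, HAPPY_SELECTION[intensity]
    raise ValueError("lip angle outside all preset happy ranges")
```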
- When the emotion type of the currently logged-in user is unhappy, determining that the selected number of pieces of content the currently logged-in user is not interested in is a preset second value includes:
- For example, the preset unhappy emotion type is divided into despair, grief, sadness, sympathy, and calm.
- The lip opening angles corresponding to despair are a to b, those corresponding to grief are c to d, those corresponding to sadness are e to f, those corresponding to sympathy are g to h, and those corresponding to calm are i to j.
- If the lip opening angle of the currently logged-in user is k, and k is between e and f, the unhappy emotional intensity of the currently logged-in user is sadness.
- S1502b Determine the selected number corresponding to the unhappy emotional intensity of the currently logged-in user according to a preset second correspondence between each unhappy emotional intensity and each selected number. In the second correspondence, the unhappy emotional intensity is inversely proportional to the selected number; that is, the higher the unhappy emotional intensity, the smaller the number of uninteresting pieces of content that are selected.
- For example, the preset unhappy emotion type is divided into despair, grief, sadness, sympathy, and calm.
- The selection number corresponding to despair is 6, to grief 5, to sadness 4, to sympathy 3, and to calm 2.
- If the currently logged-in user's unhappy emotional intensity is determined to be sadness, the selected number is 4, and 4 pieces of content are selected from the content the user is not interested in.
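Analogously, the second correspondence can be tabulated. The angle ranges are hypothetical (the text names them only as a through j), while the selection numbers follow the example above:

```python
# (intensity, hypothetical lip-angle range, selection number per the example)
UNHAPPY_TABLE = [
    ("despair",  (0.0, 5.0),   6),
    ("grief",    (5.0, 10.0),  5),
    ("sadness",  (10.0, 15.0), 4),
    ("sympathy", (15.0, 20.0), 3),
    ("calm",     (20.0, 25.0), 2),
]

def unhappy_selected_number(lip_angle):
    """Map a lip opening angle to (intensity, selected number)."""
    for intensity, (low, high), count in UNHAPPY_TABLE:
        if low <= lip_angle < high:
            return intensity, count
    raise ValueError("lip angle outside all preset unhappy ranges")
```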
- The present application may also determine the content to be displayed in combination with the idle state of the currently logged-in user. Therefore, in one embodiment, after the selected number of pieces of content that the currently logged-in user is not interested in is determined according to the user's emotion type, and before the corresponding number of pieces of content is selected and displayed together with the content the user is interested in, the method further includes: detecting whether the currently logged-in user is in an idle state.
- whether the currently logged-in user is in an idle state can be determined by detecting its walking rate. If the currently logged-in user's walking rate is greater than a preset rate, the currently logged-in user is in a busy state, otherwise the currently logged-in user is in an idle state.
- Alternatively, whether the currently logged-in user is idle can be determined from the number of work-related APPs running on the user's terminal: if that number is greater than a preset number, the currently logged-in user is busy; otherwise, the currently logged-in user is idle. It can be understood that other methods may also be used to determine whether the currently logged-in user is in an idle state, which is not limited in this application.
- When the user is idle, the determined selection number can be kept unchanged, or it can be further increased, so that the currently logged-in user views more uninteresting content, improving the effectiveness of displaying that content.
- Since the probability that the currently logged-in user views uninteresting content is small when the user is not idle, the selection number is further reduced by a second preset value at this time, to avoid disturbing the user and giving the user a poor application experience. If the selection number becomes 0 or negative after being reduced by the second preset value, no uninteresting content is shown to the currently logged-in user.
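The idle-state adjustment reduces to a simple clamp. The two preset values below are illustrative assumptions; the keep-or-increase choice for the idle case is modeled with a flag since the text allows either:

```python
FIRST_PRESET = 2   # hypothetical increase applied when the user is idle
SECOND_PRESET = 3  # hypothetical decrease applied when the user is busy

def adjust_selected_number(selected, idle, boost_when_idle=True):
    """Adjust the selected number based on the user's idle state.

    A result of 0 means no uninteresting content is shown at all,
    matching the rule that a zero or negative result suppresses display.
    """
    if idle:
        return selected + FIRST_PRESET if boost_when_idle else selected
    return max(selected - SECOND_PRESET, 0)
```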
- In one embodiment, before acquiring the face image of the currently logged-in user of the application, the method further includes: capturing the face image of the currently logged-in user with a camera device, and using the face image as the user's identity to log in to the application. The displaying of the corresponding number of pieces of content and the content that the currently logged-in user is interested in includes: displaying the corresponding number of pieces of content and the content that the currently logged-in user is interested in on the homepage of the application.
- this application also provides an application content display device.
- the specific implementation of the device of this application will be described in detail below with reference to the accompanying drawings.
- As shown in FIG. 2, the application content display device of one embodiment includes:
- the face image acquisition module 210 is used to acquire the face image of the currently logged-in user of the application;
- the emotion type recognition module 220 is configured to perform micro-expression recognition on the face image of the currently logged-in user, and determine the emotion type of the currently logged-in user;
- the preference determination module 230 is configured to determine the currently logged-in user's preference for each content in the application according to the acquired historical behavior data of the currently logged-in user in the application;
- the content determining module 240 is configured to determine the content that is of interest to the currently logged-in user and the content that is not of interest according to the preference of the currently logged-in user for each content;
- the selected number determining module 250 is configured to determine the selected number of content that the currently logged-in user is not interested in according to the emotion type of the currently logged-in user;
- the display module 260 is configured to select a corresponding number of content from the content that the currently logged-in user is not interested in, and display the corresponding number of content and the content that the currently logged-in user is interested in.
- the emotion type includes happy and unhappy; the selected number determining module 250 includes:
- the first selection number determining module is configured to determine that the selected number of content that the currently logged-in user is not interested in is a preset first value when the emotional type of the currently logged-in user is happy;
- the second selected number determining module is configured to determine that the selected number of content that the currently logged-in user is not interested in is a preset second value when the emotion type of the currently logged-in user is unhappy; the second value is less than the first value.
- the happy emotion types are divided into different emotion intensities, and the unhappy emotion types are divided into different emotion intensities.
- the first selection number determining module determines the happy emotional intensity of the currently logged-in user, and determines the selected number corresponding to that intensity according to the preset first correspondence between each happy emotional intensity and each selected number; in the first correspondence, the happy emotional intensity is directly proportional to the selected number.
- the second selected number determining module determines the unhappy emotional intensity of the currently logged-in user, and determines the selected number corresponding to that intensity according to the preset second correspondence between each unhappy emotional intensity and each selected number; in the second correspondence, the unhappy emotional intensity is inversely proportional to the selected number.
- the device further includes a selected number adjustment module between the selected number determining module 250 and the display module 260; the selected number adjustment module is used to detect whether the currently logged-in user is in an idle state; when in an idle state, it keeps the selected number of content that the currently logged-in user is not interested in unchanged, or increases that selected number by a first preset value; when not in an idle state, it reduces the selected number of content that the currently logged-in user is not interested in by a second preset value.
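The idle-state adjustment described above can be expressed as a small function. The parameter names and default preset values are illustrative, and clamping the result at zero is an added assumption not stated in the specification.

```python
def adjust_selected_number(count: int, idle: bool,
                           first_preset: int = 1, second_preset: int = 1,
                           keep_unchanged_when_idle: bool = True) -> int:
    """Adjust the selected number of not-interested content by idle state."""
    if idle:
        # The specification allows either branch when the user is idle.
        return count if keep_unchanged_when_idle else count + first_preset
    # Clamping at zero is an assumption; counts cannot be negative here.
    return max(0, count - second_preset)
```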
- the emotion type recognition module 220 includes:
- the feature vector extraction module is used to extract the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user; the extracted features constitute the feature vector of the currently logged-in user;
- the emotion type recognition module is used to input the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, and the micro-expression recognition model recognizes the emotion type of the currently logged-in user.
- the device of the present application further includes a micro-expression recognition model construction module, including:
- An image and identification acquisition unit for acquiring a sample face image and an identification of the sample face image; the identification is used to characterize the emotion type of the sample face image;
- the feature vector extraction unit is used to extract the features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the sample face image; the extracted features constitute the feature vector of the sample face image;
- the micro-expression recognition model training unit is used to input the feature vector and identification of the sample face image into the convolutional neural network for training to obtain the micro-expression recognition model.
- the emotion type recognition module 220 includes:
- the feature vector extraction unit is used to extract the features of the corners of the mouth, cheeks, eyelids, and tails of the eyes from the face image of the currently logged-in user; the extracted features constitute the feature vector of the currently logged-in user;
- the similarity calculation unit is used to calculate the similarity between the feature vector of the currently logged-in user and a preset feature vector; the preset feature vector includes pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy;
- the emotion type determining unit is configured to determine that the emotion type of the currently logged-in user is happy when the similarity is greater than a preset threshold; otherwise, determine that the emotion type of the currently logged-in user is unhappy.
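One plausible realization of the similarity comparison above is cosine similarity against the stored "happy" feature vector. Both the metric and the threshold value are assumptions for this sketch; the specification only speaks of a "similarity" and a "preset threshold".

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def emotion_type(user_vec, happy_vec, threshold=0.9):
    """'happy' if the current features resemble the stored happy features."""
    return "happy" if cosine_similarity(user_vec, happy_vec) > threshold else "unhappy"
```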
- the preference determination module 230 includes:
- the statistical unit is configured to count, according to the historical behavior data, the number of clicks, the number of purchases, and the number of comments posted by the currently logged-in user on each content;
- the preference determination unit is configured to respectively multiply the number of clicks, the number of purchases, and the number of comments posted on the same content by their respective weights and then sum the products to obtain the currently logged-in user's preference for each content.
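The weighted-sum preference computation above can be sketched as follows. The weight values are illustrative assumptions; the specification does not fix them.

```python
def preference_score(clicks: int, purchases: int, comments: int,
                     w_click: float = 0.2, w_purchase: float = 0.5,
                     w_comment: float = 0.3) -> float:
    """Weighted sum of per-content behavior counts (weights are assumed)."""
    return clicks * w_click + purchases * w_purchase + comments * w_comment
```

Contents would then be ranked by this score, with a cutoff separating interested from not-interested content.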
- the content display device further includes a login module connected to the face image acquisition module; the login module uses a camera device to capture a face image of the currently logged-in user and uses the face image as the user identity to log in to the application; the display module 260 displays the corresponding number of content and the content that the currently logged-in user is interested in on the home page of the application.
- the embodiments of the present application also provide a computer-readable storage medium.
- the storage medium is a volatile storage medium or a non-volatile storage medium, and a computer program is stored thereon; when the program is executed by a processor, the content display method of any one of the above embodiments is realized.
- the storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, and optical cards. That is, the storage medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer), and may be a read-only memory, a magnetic disk, an optical disk, or the like.
- An embodiment of the present application also provides a computer device, which includes:
- one or more processors;
- a storage device for storing one or more programs;
- when the one or more programs are executed by the one or more processors, the one or more processors implement a content display method for an application program:
- the content display method of the application program includes: acquiring a face image of the currently logged-in user of the application program; performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user; determining, according to acquired historical behavior data of the currently logged-in user in the application program, the currently logged-in user's preference for each content in the application program; determining, according to those preferences, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in; determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in; and selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content and the content that the currently logged-in user is interested in.
- FIG. 3 is a schematic structural diagram of the computer device of this application, which includes a processor 320, a storage device 330, an input unit 340, a display unit 350, and other components.
- the storage device 330 may be used to store the application program 310 and various functional modules.
- the processor 320 runs the application program 310 stored in the storage device 330 to execute various functional applications and data processing of the device.
- the storage device 330 may be an internal memory or an external memory, or include both internal memory and external memory.
- the internal memory may include read-only memory, programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory.
- External memory can include hard disks, floppy disks, ZIP disks, USB flash drives, magnetic tapes, etc.
- the storage devices disclosed in this application include but are not limited to these types of storage devices.
- the storage device 330 disclosed in this application is merely an example and not a limitation.
- the input unit 340 is used to receive signal input and to receive the face image of the currently logged-in user.
- the input unit 340 may include a touch panel and other input devices.
- the touch panel can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, and a joystick.
- the display unit 350 can be used to display information input by the user or information provided to the user and various menus of the computer device.
- the display unit 350 may take the form of a liquid crystal display, an organic light-emitting diode display, or the like.
- the processor 320 is the control center of the computer device; it uses various interfaces and lines to connect the various parts of the entire computer, runs or executes the software programs and/or modules stored in the storage device 330, and calls the data stored in the storage device to perform various functions and process data.
- the computer device includes one or more processors 320, one or more storage devices 330, and one or more application programs 310, where the one or more application programs 310 are stored in the storage device 330 and configured to be executed by the one or more processors 320, and the one or more application programs 310 are configured to execute the content display method of the application program described in the above embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (20)
- A content display method for an application program, comprising: acquiring a face image of the currently logged-in user of the application program; performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user; determining, according to acquired historical behavior data of the currently logged-in user in the application program, the currently logged-in user's preference for each content in the application program; determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in; determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in; and selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content and the content that the currently logged-in user is interested in.
- The content display method for an application program according to claim 1, wherein the emotion type includes happy and unhappy; and the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in comprises: when the emotion type of the currently logged-in user is happy, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value; and when the emotion type of the currently logged-in user is unhappy, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value, the second value being less than the first value.
- The content display method for an application program according to claim 2, wherein the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities; the determining, when the emotion type of the currently logged-in user is happy, that the selected number of content that the currently logged-in user is not interested in is a preset first value comprises: determining the happy emotion intensity of the currently logged-in user; and determining, according to a preset first correspondence between each happy emotion intensity and each selected number, the selected number corresponding to the happy emotion intensity of the currently logged-in user, wherein in the first correspondence the happy emotion intensity and the selected number are in a directly proportional relationship; and the determining, when the emotion type of the currently logged-in user is unhappy, that the selected number of content that the currently logged-in user is not interested in is a preset second value comprises: determining the unhappy emotion intensity of the currently logged-in user; and determining, according to a preset second correspondence between each unhappy emotion intensity and each selected number, the selected number corresponding to the unhappy emotion intensity of the currently logged-in user, wherein in the second correspondence the unhappy emotion intensity and the selected number are in an inversely proportional relationship.
- The content display method for an application program according to claim 1, wherein after the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in, and before the selecting a corresponding number of content from the content that the currently logged-in user is not interested in and displaying the corresponding number of content and the content that the currently logged-in user is interested in, the method further comprises: detecting whether the currently logged-in user is in an idle state; if in an idle state, keeping the selected number of content that the currently logged-in user is not interested in unchanged, or increasing the selected number of content that the currently logged-in user is not interested in by a first preset value; and if not in an idle state, reducing the selected number of content that the currently logged-in user is not interested in by a second preset value.
- The content display method for an application program according to claim 1, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises: extracting features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user; and inputting the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the micro-expression recognition model recognizing the emotion type of the currently logged-in user.
- The content display method for an application program according to claim 1, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises: extracting features of the corners of the mouth, cheeks, eyelids, and tails of the eyes from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user; calculating the similarity between the feature vector of the currently logged-in user and a preset feature vector, the preset feature vector including pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy; and if the similarity is greater than a preset threshold, determining that the emotion type of the currently logged-in user is happy, and otherwise determining that the emotion type of the currently logged-in user is unhappy.
- The content display method for an application program according to any one of claims 1 to 6, wherein the determining, according to the acquired historical behavior data of the currently logged-in user in the application program, the currently logged-in user's preference for each content in the application program comprises: counting, according to the historical behavior data, the number of clicks, the number of purchases, and the number of posted comments of the currently logged-in user for each content; and multiplying the number of clicks, the number of purchases, and the number of posted comments of the same content by their respective weights and summing the products to obtain the currently logged-in user's preference for each content.
- A content display device for an application program, comprising: a face image acquisition module, configured to acquire a face image of the currently logged-in user of the application program; an emotion type recognition module, configured to perform micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user; a preference determination module, configured to determine, according to acquired historical behavior data of the currently logged-in user in the application program, the currently logged-in user's preference for each content in the application program; a content determination module, configured to determine, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in; a selected number determining module, configured to determine, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in; and a display module, configured to select a corresponding number of content from the content that the currently logged-in user is not interested in, and display the corresponding number of content and the content that the currently logged-in user is interested in.
- A computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, a content display method for an application program is implemented, the method comprising: acquiring a face image of the currently logged-in user of the application program; performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user; determining, according to acquired historical behavior data of the currently logged-in user in the application program, the currently logged-in user's preference for each content in the application program; determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in; determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in; and selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content and the content that the currently logged-in user is interested in.
- The computer-readable storage medium according to claim 9, wherein the emotion type includes happy and unhappy; and the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in comprises: when the emotion type of the currently logged-in user is happy, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value; and when the emotion type of the currently logged-in user is unhappy, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value, the second value being less than the first value.
- The computer-readable storage medium according to claim 10, wherein the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities; the determining, when the emotion type of the currently logged-in user is happy, that the selected number of content that the currently logged-in user is not interested in is a preset first value comprises: determining the happy emotion intensity of the currently logged-in user; and determining, according to a preset first correspondence between each happy emotion intensity and each selected number, the selected number corresponding to the happy emotion intensity of the currently logged-in user, wherein in the first correspondence the happy emotion intensity and the selected number are in a directly proportional relationship; and the determining, when the emotion type of the currently logged-in user is unhappy, that the selected number of content that the currently logged-in user is not interested in is a preset second value comprises: determining the unhappy emotion intensity of the currently logged-in user; and determining, according to a preset second correspondence between each unhappy emotion intensity and each selected number, the selected number corresponding to the unhappy emotion intensity of the currently logged-in user, wherein in the second correspondence the unhappy emotion intensity and the selected number are in an inversely proportional relationship.
- The computer-readable storage medium according to claim 9, wherein after the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in, and before the selecting a corresponding number of content from the content that the currently logged-in user is not interested in and displaying the corresponding number of content and the content that the currently logged-in user is interested in, the method further comprises: detecting whether the currently logged-in user is in an idle state; if in an idle state, keeping the selected number of content that the currently logged-in user is not interested in unchanged, or increasing the selected number of content that the currently logged-in user is not interested in by a first preset value; and if not in an idle state, reducing the selected number of content that the currently logged-in user is not interested in by a second preset value.
- The computer-readable storage medium according to claim 9, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises: extracting features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user; and inputting the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the micro-expression recognition model recognizing the emotion type of the currently logged-in user.
- The computer-readable storage medium according to claim 9, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises: extracting features of the corners of the mouth, cheeks, eyelids, and tails of the eyes from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user; calculating the similarity between the feature vector of the currently logged-in user and a preset feature vector, the preset feature vector including pre-stored features of the corners of the mouth, cheeks, eyelids, and tails of the eyes of the currently logged-in user when happy; and if the similarity is greater than a preset threshold, determining that the emotion type of the currently logged-in user is happy, and otherwise determining that the emotion type of the currently logged-in user is unhappy.
- The computer-readable storage medium according to any one of claims 9 to 14, wherein the determining, according to the acquired historical behavior data of the currently logged-in user in the application program, the currently logged-in user's preference for each content in the application program comprises: counting, according to the historical behavior data, the number of clicks, the number of purchases, and the number of posted comments of the currently logged-in user for each content; and multiplying the number of clicks, the number of purchases, and the number of posted comments of the same content by their respective weights and summing the products to obtain the currently logged-in user's preference for each content.
- A computer device, comprising: one or more processors; and a storage device for storing one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors implement a content display method for an application program, the method comprising: acquiring a face image of the currently logged-in user of the application program; performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user; determining, according to acquired historical behavior data of the currently logged-in user in the application program, the currently logged-in user's preference for each content in the application program; determining, according to the currently logged-in user's preference for each content, the content that the currently logged-in user is interested in and the content that the currently logged-in user is not interested in; determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in; and selecting a corresponding number of content from the content that the currently logged-in user is not interested in, and displaying the corresponding number of content and the content that the currently logged-in user is interested in.
- The computer device according to claim 16, wherein the emotion type includes happy and unhappy; and the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in comprises: when the emotion type of the currently logged-in user is happy, determining that the selected number of content that the currently logged-in user is not interested in is a preset first value; and when the emotion type of the currently logged-in user is unhappy, determining that the selected number of content that the currently logged-in user is not interested in is a preset second value, the second value being less than the first value.
- The computer device according to claim 17, wherein the happy emotion type is divided into different emotion intensities, and the unhappy emotion type is divided into different emotion intensities; the determining, when the emotion type of the currently logged-in user is happy, that the selected number of content that the currently logged-in user is not interested in is a preset first value comprises: determining the happy emotion intensity of the currently logged-in user; and determining, according to a preset first correspondence between each happy emotion intensity and each selected number, the selected number corresponding to the happy emotion intensity of the currently logged-in user, wherein in the first correspondence the happy emotion intensity and the selected number are in a directly proportional relationship; and the determining, when the emotion type of the currently logged-in user is unhappy, that the selected number of content that the currently logged-in user is not interested in is a preset second value comprises: determining the unhappy emotion intensity of the currently logged-in user; and determining, according to a preset second correspondence between each unhappy emotion intensity and each selected number, the selected number corresponding to the unhappy emotion intensity of the currently logged-in user, wherein in the second correspondence the unhappy emotion intensity and the selected number are in an inversely proportional relationship.
- The computer device according to claim 16, wherein after the determining, according to the emotion type of the currently logged-in user, the selected number of content that the currently logged-in user is not interested in, and before the selecting a corresponding number of content from the content that the currently logged-in user is not interested in and displaying the corresponding number of content and the content that the currently logged-in user is interested in, the method further comprises: detecting whether the currently logged-in user is in an idle state; if in an idle state, keeping the selected number of content that the currently logged-in user is not interested in unchanged, or increasing the selected number of content that the currently logged-in user is not interested in by a first preset value; and if not in an idle state, reducing the selected number of content that the currently logged-in user is not interested in by a second preset value.
- The computer device according to claim 16, wherein the performing micro-expression recognition on the face image of the currently logged-in user to determine the emotion type of the currently logged-in user comprises: extracting features of the lips, cheeks, eyelids, eyes, tails of the eyes, eyebrows, chin, nostrils, and forehead from the face image of the currently logged-in user, the extracted features constituting the feature vector of the currently logged-in user; and inputting the feature vector of the currently logged-in user into a pre-built micro-expression recognition model, the micro-expression recognition model recognizing the emotion type of the currently logged-in user.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910532695.0 | 2019-06-19 | ||
CN201910532695.0A CN110389662A (zh) | 2019-06-19 | 2019-06-19 | Content display method and apparatus for application program, storage medium, and computer device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253360A1 true WO2020253360A1 (zh) | 2020-12-24 |
Family
ID=68285611
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/086149 WO2020253360A1 (zh) | 2019-06-19 | 2020-04-22 | Content display method and apparatus for application program, storage medium, and computer device
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110389662A (zh) |
WO (1) | WO2020253360A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117971870A (zh) * | 2024-02-05 | 2024-05-03 | 江苏顶石工业科技有限公司 | Method for creating views based on dynamically queried SQL as a configuration presentation mode
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN110389662A (zh) | 2019-06-19 | 2019-10-29 | 深圳壹账通智能科技有限公司 | Content display method and apparatus for application program, storage medium, and computer device |
- CN113590926A (zh) | 2020-04-30 | 2021-11-02 | 北京爱笔科技有限公司 | User interest recognition method, apparatus, device, and computer-readable medium |
- CN111738362B (zh) * | 2020-08-03 | 2020-12-01 | 成都睿沿科技有限公司 | Object recognition method and device, storage medium, and electronic device |
- CN111860454B (zh) * | 2020-08-04 | 2024-02-09 | 北京深醒科技有限公司 | Model switching algorithm based on face recognition |
- CN113014980B (zh) * | 2021-02-23 | 2023-07-18 | 北京字跳网络技术有限公司 | Remote control method and apparatus, and electronic device |
- CN113222712A (zh) | 2021-05-31 | 2021-08-06 | 中国银行股份有限公司 | Product recommendation method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN108123972A (zh) * | 2016-11-28 | 2018-06-05 | 腾讯科技(北京)有限公司 | Multimedia file distribution method and device |
- CN109559193A (zh) * | 2018-10-26 | 2019-04-02 | 深圳壹账通智能科技有限公司 | Product pushing method and device based on intelligent recognition, computer equipment, and medium |
- CN109785045A (zh) * | 2018-12-14 | 2019-05-21 | 深圳壹账通智能科技有限公司 | Push method and device based on user behavior data |
- CN110389662A (zh) * | 2019-06-19 | 2019-10-29 | 深圳壹账通智能科技有限公司 | Content display method and apparatus for application program, storage medium, and computer device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN107343089A (zh) * | 2017-06-20 | 2017-11-10 | 广东欧珀移动通信有限公司 | Information pushing method and related products |
- CN108255307A (zh) * | 2018-02-08 | 2018-07-06 | 竹间智能科技(上海)有限公司 | Human-computer interaction method and system based on multimodal emotion and facial attribute recognition |
- CN109559208B (zh) * | 2019-01-04 | 2022-05-03 | 平安科技(深圳)有限公司 | Information recommendation method, server, and computer-readable medium |
-
2019
- 2019-06-19 CN CN201910532695.0A patent/CN110389662A/zh active Pending
-
2020
- 2020-04-22 WO PCT/CN2020/086149 patent/WO2020253360A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110389662A (zh) | 2019-10-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20826787 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20826787 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 29/03/2022) |
|