CN111433761A - Application program content pushing method and pushing device for intelligent equipment - Google Patents

Application program content pushing method and pushing device for intelligent equipment

Info

Publication number
CN111433761A
CN111433761A (application number CN201780094620.8A)
Authority
CN
China
Prior art keywords
user
image
database
information
application program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780094620.8A
Other languages
Chinese (zh)
Inventor
顾海元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Transsion Communication Co Ltd
Original Assignee
Shenzhen Transsion Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Communication Co Ltd
Publication of CN111433761A (legal status: pending)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method and a device for pushing application program content for a smart device are provided. The pushing method comprises the following steps: constructing a database that stores correspondences between preset expression features, at least one set of emotion information, and recommended content of an application program (101); capturing an image of a user and acquiring a current expression feature in the image (102); comparing the current expression feature with the preset expression features and determining the current emotion information of the user (103); acquiring from the database the recommended content of the application program corresponding to the current emotion information (104); and displaying the recommended content of the application program on the smart device (105). The method and the device can automatically and accurately recommend music that matches the user's current emotional state without requiring the user to search manually, making the application more intelligent and improving the user experience.

Description

Application program content pushing method and pushing device for intelligent equipment
Technical Field
The present invention relates to the field of intelligent devices, and in particular, to a method and an apparatus for pushing application content for an intelligent device.
Background
Subjective emotion plays an indispensable role in every aspect of people's life and work. With growing life pressure, more and more people are affected by negative emotions to different degrees. People in particular occupations, such as aerospace and military personnel, psychologists, and customer service staff, work in monotonous environments for long periods and are especially prone to negative emotions. Negative emotions not only reduce working efficiency but can also harm physical and mental health, so research on automatic emotion recognition is particularly important and of practical significance.
Disclosure of Invention
In order to overcome the technical defects, the present invention provides a method and an apparatus for pushing application content for an intelligent device.
The invention discloses a pushing method for application program content of intelligent equipment, which is characterized by comprising the following steps:
constructing a database storing correspondences between preset expression features, at least one set of emotion information, and recommended content of an application program;
capturing an image of a user, and acquiring a current expression feature in the image;
comparing the current expression feature with the preset expression features, and determining the current emotion information of the user;
acquiring, from the database, the recommended content of the application program corresponding to the current emotion information;
and displaying the recommended content of the application program on the smart device.
Preferably, constructing the database storing correspondences between the preset expression features, at least one set of emotion information, and the recommended content of the application program includes:
the database is constructed in a server;
establishing communication connection between the intelligent equipment and the server;
and acquiring and loading the database in the server.
Preferably, capturing an image of a user, and acquiring a current expression feature in the image, includes:
detecting position information of the face of the user in the image;
determining a partial image of the facial image of the user according to the position information;
and extracting emotional characteristic information of the user from the local image.
Preferably, the obtaining of the recommended content of the application program corresponding to the current emotion information in the database includes:
the recommended content of the application program comprises at least one of a mobile phone theme, a screen saver and music;
and when the emotion information is contained in the database, starting the application program in the intelligent equipment and prompting the recommended content of the application program.
Preferably, the displaying the recommended content of the application program in the smart device further includes:
displaying a judgment interface for requesting the user to judge whether the recommended content is adopted;
receiving a confirmation action of a user;
and, according to the confirmation action, applying the recommended content or restoring the original state.
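Taken together, the steps above amount to a small lookup-and-push loop. The following Python sketch illustrates that loop only; the emotion labels, the feature representation, and the content catalogue are illustrative assumptions and are not taken from the disclosure.

```python
# Illustrative sketch of the disclosed push flow; labels and data are assumed, not from the patent.
from dataclasses import dataclass

@dataclass
class Recommendation:
    app: str      # e.g. "music player", "theme store"
    content: str  # e.g. a playlist or theme name

# Step 101: database mapping preset expression features / emotion labels to recommended content.
DATABASE = {
    "happy": Recommendation(app="music", content="upbeat playlist"),
    "sad":   Recommendation(app="music", content="soft, slow playlist"),
    "angry": Recommendation(app="theme", content="calming wallpaper theme"),
}

def capture_expression_feature() -> str:
    """Step 102: capture an image and extract the current expression feature.
    Stubbed here; in practice this would call the camera and a feature extractor."""
    return "happy"

def match_emotion(feature: str) -> str:
    """Step 103: compare the current feature with the preset features to decide the emotion.
    In this toy sketch the feature already is the emotion label, so matching is trivial."""
    return feature if feature in DATABASE else "unknown"

def push_recommendation() -> None:
    emotion = match_emotion(capture_expression_feature())
    rec = DATABASE.get(emotion)                      # Step 104: look up recommended content
    if rec is None:
        return                                       # nothing to push for unknown emotions
    print(f"[push] open {rec.app}: {rec.content}")   # Step 105: display on the smart device

if __name__ == "__main__":
    push_recommendation()
```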
The invention also discloses a pushing device for the application program content of the intelligent equipment, which is characterized by comprising the following components: a storage module, a shooting module, a matching module, an acquisition module and a pushing module, wherein,
the storage module is used for constructing a database that stores correspondences between preset expression features, at least one set of emotion information, and recommended content of the application program;
the shooting module is used for shooting an image of a user and acquiring the current expression characteristics in the image;
the matching module is in communication connection with the shooting module and is used for comparing the current expression characteristics with the preset expression characteristics and judging the current emotion information of the user;
the acquisition module is in communication connection with the matching module and is used for acquiring the recommended content of the application program corresponding to the current emotion information from the database;
the pushing module is in communication connection with the obtaining module and is used for displaying the recommended content of the application program in the intelligent equipment.
Preferably, the storage module includes:
the database unit is used for constructing the database in a server;
the communication unit is in communication connection with the server and is used for establishing communication connection between the intelligent equipment and the server;
and the loading unit is in communication connection with the communication unit and is used for acquiring and loading the database in the server through the communication unit.
Preferably, the photographing module includes:
an image acquisition unit configured to detect position information of a face of the user in the image;
the image position unit is in communication connection with the image acquisition unit and is used for determining a local image of the facial image of the user according to the position information;
and the emotion extraction unit is in communication connection with the image position unit and is used for extracting the emotion characteristic information of the user from the local image.
Preferably, the obtaining module includes:
the recommended content of the application program comprises at least one of a mobile phone theme, a screen saver and music;
and the acquisition unit is used for starting the application program in the intelligent equipment and prompting the recommended content of the application program when the emotion information is contained in the database.
Preferably, the pushing module includes:
the display unit is used for displaying a judgment interface for requesting the user to judge whether the recommended content is adopted;
the receiving unit is in communication connection with the display unit and is used for receiving the confirmation action of the user;
and the execution unit is in communication connection with the receiving unit and is used for applying the recommended content or restoring the recommended content to an original state according to the confirmation action.
After the above technical solution is adopted, compared with the prior art, the invention has the following beneficial effects:
1. The current emotional state of the user can be obtained from the analysis result, and music matching that state is then automatically recommended to the user to adjust the user's mood. In this way, music consistent with the user's current emotional state can be recommended automatically and accurately without the user searching manually, making the application more intelligent and improving the user experience;
2. Facial information of the user is obtained through a camera, the user's emotion is identified from that facial information, and a desktop theme corresponding to the identified emotion is then recommended. This allows the user to quickly select a suitable desktop theme, greatly shortens the time spent choosing a theme, and improves the user experience.
Drawings
Fig. 1 is a flow chart illustrating a method for pushing application content for a smart device according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart illustrating step 101 of a push method according to a preferred embodiment of the present invention;
FIG. 3 is a flow chart illustrating step 102 of the push method in accordance with a preferred embodiment of the present invention;
FIG. 4 is a flow chart illustrating step 104 of the push method in accordance with a preferred embodiment of the present invention;
FIG. 5 is a flow chart illustrating step 105 of the push method in accordance with a preferred embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for pushing application content for a smart device according to a preferred embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a module of a pushing device according to a preferred embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a module of the pushing device in accordance with a preferred embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a module of the pushing device in accordance with a preferred embodiment of the present invention.
Fig. 10 is a schematic structural diagram of a module of the pushing device in accordance with a preferred embodiment of the present invention.
Reference numerals:
10-pushing device, 20-storage module, 30-shooting module, 40-matching module, 50-obtaining module and 60-pushing module.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
As used herein and as will be understood by those skilled in the art, a smart device includes devices having only a wireless signal receiver without transmit capability as well as devices having both receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device with a single-line display, a multi-line display, or no multi-line display; a PCS (Personal Communications Service) terminal, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; or a conventional laptop and/or palmtop computer or other device that has and/or includes a radio frequency receiver. As used herein, a "device" or "smart device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion in any other location(s) on earth and/or in space. As used herein, a "device" or "smart device" may also be a communication device, a web device, or a music/video playing device, such as a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a smart television, a set-top box, etc. The smart device, smart terminal, mobile terminal, and mobile device described in this specification are equivalent.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if," as used herein, may be interpreted as "when" or "in response to a determination," depending on the context.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
Referring to fig. 1-5, schematic flow diagrams according to a preferred embodiment of the present invention are shown:
constructing a database for storing the corresponding relation between at least one group of emotion characteristic information and the recommendation information;
receiving an expression image of the user facing the smart device, and extracting the emotional feature information contained in the image;
acquiring the recommendation information corresponding to the emotion characteristic information in the database;
displaying the recommendation information in the intelligent device.
In one embodiment of the invention, the database is built in a server; a communication connection is established between the smart device and the server; and the database is acquired from the server and loaded. Specifically:
the recommendation method further comprises the following steps: and inputting the expression image, and correlating the input at least one expression image with the recommended application information to form a relation between the preset emotional characteristic and the recommended information. And the expression image recognizer is used for acquiring, associating and pre-storing corresponding equipment applications matched with the expression images, and the expression images are stored in the equipment to form a database. The expression image recognizer can be a camera of the intelligent device, a camera connected with the intelligent device, or a camera module contained in the intelligent device, and the like, and can adopt a relatively direct storage method (by using the relation between the input expression image or a standard template of the expression image and the recommendation information as a storage format of data in the database) based on the stored database, so that a user can associate, pre-record and modify the relation between the recommendation information and the expression image, and the operation is convenient.
When the currently acquired emotional feature information does not exist in the database, that is, when no relation between that emotional feature information and any recommendation information exists in the database, the smart device prompts the user whether to establish a corresponding relation between the currently acquired emotional feature information and recommendation information selected and designated by the user; after the user confirms, the corresponding relation is stored in the database.
The database can be located in the memory of the smart device or in a remote server, and when the database is located in the remote server, the smart device can read the data stored in the server through a remote connection.
The facial image data of the user mainly includes data of the several regions that change most obviously with facial expression and are easy to analyze, such as the eyebrow form data, eye form data, and mouth form data of the user.
In practical application, the facial image data of the user can be obtained from a facial photo or a video of the user. Taking video as an example, when the emotion of the user is recognized, the camera of the smart device is started to record video. After the video of the user is obtained, several still frames are randomly extracted from it; these frames contain the facial image of the user, and the facial image in each frame is recognized and scanned so as to extract the facial image data of the user. For the scanning of the facial image in a frame, the facial image can be divided into several non-overlapping regions, and each region is scanned without overlap to obtain the facial image data. The method of obtaining the facial image data from a facial photo of the user is similar to the above method of obtaining it from a video frame.
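As a rough illustration of this step, the following Python sketch uses OpenCV (an assumed implementation choice, not named in the patent) to pull a few random frames from a recording, detect the face, and split it into non-overlapping regions:

```python
import random
import cv2  # assumed implementation choice; the patent does not prescribe a library

def sample_face_regions(video_path: str, n_frames: int = 5):
    """Randomly pull a few still frames from a recording, locate the face,
    and split the face image into non-overlapping regions for later analysis."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    regions = []
    for idx in random.sample(range(total), min(n_frames, total)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            face = gray[y:y + h, x:x + w]
            # split the face into a 3x3 grid of non-overlapping blocks
            bh, bw = h // 3, w // 3
            regions.append([face[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                            for r in range(3) for c in range(3)])
    cap.release()
    return regions
```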
A specific judgment method may be to first acquire the facial image and then segment out the image of the user's mouth and the image of the eyes; the more the corners of the user's mouth turn downward and the more the brows are furrowed, the higher the probability that the user is in a negative emotion, so a result that the user is in a negative emotion can be recognized with higher probability. Emotion is not identified solely from the corners of the mouth and the eyes; it can also be well determined from other image data of the user's face.
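A toy version of that judgment could score the two cues described above. In the sketch below, the landmark measurements passed in (mouth-corner height, brow gap) and the weights are assumptions made purely for illustration:

```python
def negative_emotion_probability(mouth_corners_y: float, mouth_center_y: float,
                                 brow_gap: float, neutral_brow_gap: float) -> float:
    """Rough score of the judgment described above: the further the mouth corners sit
    below the mouth center (down-turned mouth) and the narrower the gap between the
    brows relative to a neutral face (frowning), the higher the negative-emotion score.
    All inputs are assumed, pre-extracted landmark measurements in pixels."""
    downturn = max(0.0, mouth_corners_y - mouth_center_y)   # image y grows downward
    frown = max(0.0, neutral_brow_gap - brow_gap)
    score = 0.6 * min(downturn / 10.0, 1.0) + 0.4 * min(frown / 5.0, 1.0)
    return score  # in [0, 1]; closer to 1 means more likely a negative emotion

# e.g. corners 6 px below the mouth center and brows 3 px closer together than neutral
print(negative_emotion_probability(106.0, 100.0, 22.0, 25.0))
```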
Furthermore, in order to enable emotion recognition to be more consistent with personalized specificity of the user, a database which belongs to the user and accords with the user personalization can be established in a mode of acquiring facial image data of the user in advance. The database stores: and analyzing the obtained face image data corresponding to the positive emotion and the face image data corresponding to the negative emotion according to the user face image data obtained by history. When the emotion recognition of the user is determined, the acquired facial image data of the user can be directly compared with corresponding data in the database respectively to obtain a more accurate recognition result.
In practical application, after obtaining the emotion recognition results corresponding to various types of feature data and the accuracy corresponding to the emotion results, optionally, the emotion recognition result with the highest accuracy can be obtained from the emotion recognition results as the current emotion of the user in a mode of comparing the accuracy corresponding to the emotion recognition results. Therefore, the purposes of reducing the false recognition rate and improving the emotion recognition accuracy rate are achieved.
Preferably, the emotional feature information in the facial image data of the user is acquired, emotion recognition is performed based on the acquired feature information, and at least two corresponding kinds of emotion information are obtained, so that the emotion of the user is determined from the at least two kinds of emotion information acquired, achieving accurate emotion recognition of the user.
Optionally, when the user uses a smart device such as a mobile phone, the emotion may change from time to time, generally in line with each user's operating habits. For example, when inputting information, negative emotion rises after the user repeatedly enters wrong information; when tapping the screen, negative emotion rises after the user finds that the response speed of the smart device is very slow; and the user's emotion may also be revealed while communicating with other people. When the user's emotion changes, these speeds change correspondingly: when the user is excited, the speed of inputting information and of tapping the screen becomes faster, and words related to excitement are likely to appear in messages exchanged with others; conversely, when the user's mood is calm, these speeds remain within a certain range; and when the user's mood is low, these speeds generally become slower, and words related to a low mood are likely to appear in the messages. Therefore, the user's emotional state can also be learned by analyzing such feature data.
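A minimal sketch of how such behavioural cues could be folded into an emotion guess is given below; the thresholds and labels are assumptions, not values from the patent:

```python
def emotion_from_interaction(chars_per_min: float, taps_per_min: float,
                             typo_ratio: float) -> str:
    """Very rough illustration of the behavioural cues described above: fast typing
    and tapping suggest excitement, very slow input suggests a low mood, and a high
    share of corrected/wrong input hints at irritation. Thresholds are assumptions."""
    if typo_ratio > 0.3:
        return "irritated"
    if chars_per_min > 200 or taps_per_min > 120:
        return "excited"
    if chars_per_min < 60 and taps_per_min < 30:
        return "low"
    return "calm"

print(emotion_from_interaction(chars_per_min=220, taps_per_min=90, typo_ratio=0.05))
```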
Preferably, the facial expression in the image can be used as input to an Emotion API; a Face API returns, for each face in the image, the confidence of a group of emotions together with the bounding box of the face, and an emotion analysis chart is finally obtained, which includes emotions such as anger, calm (neutral), disgust, fear, happiness, sadness, and surprise. The final emotional feature value is determined according to the numerical value of each type of emotion in the emotion analysis chart.
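Whatever service produces the per-emotion confidences, the final step reduces to taking the highest-scoring label. The sketch below assumes the confidences arrive as a plain dictionary and does not model any particular vendor's API:

```python
from typing import Dict

def final_emotion(scores: Dict[str, float]) -> str:
    """Pick the final emotional feature value as the highest-confidence label in the
    emotion analysis chart. The dictionary shape is an assumed input format."""
    return max(scores, key=scores.get)

chart = {"anger": 0.02, "calm": 0.10, "disgust": 0.01, "fear": 0.01,
         "happiness": 0.80, "sadness": 0.03, "surprise": 0.03}
print(final_emotion(chart))  # -> "happiness"
```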
The recommendation information comprises at least one of a mobile phone theme, a screen saver and music; and when the emotional characteristic information is contained in the database, displaying the recommendation information in the intelligent equipment.
Embodiment one:
In this embodiment, a corresponding desktop theme already downloaded to the terminal is recommended according to the emotional feature information of the user, or a corresponding desktop theme is downloaded online and recommended according to that information. When the user has downloaded a large number of desktop themes to the terminal in advance, this step can quickly screen the downloaded themes and recommend those suitable for the user. The recommendation is made through a push message in the notification bar or the status bar, and the user can jump to the corresponding desktop theme display page after tapping the push message. The desktop theme display page presents desktop themes related to the emotional feature information.
In order to achieve a more accurate recommendation effect, some user information, such as age, occupation, etc., may be collected, and the recommendation may be performed according to the information.
To achieve a better effect, recommending the desktop theme corresponding to the user's emotional feature information comprises: downloading the desktop theme information corresponding to that emotional feature information, and displaying the desktop theme information through a menu bar or a status bar. The user needs to tap the desktop theme information in the menu bar or status bar to further expand its specific content. The content displayed in the menu bar or status bar includes text and links. This lets the user easily browse the information related to the recommended desktop theme, saves resources, and keeps the process simple.
Preferably, after the desktop theme information is displayed through the menu bar or the status bar, the desktop theme recommendation method further includes: if an access request of a user is acquired within a preset time period, an online interface containing the recommended desktop theme preview image is accessed, and the online interface displays the desktop theme preview image corresponding to the displayed desktop theme information; and if the access request of the user is not acquired or the abandon request of the user is acquired within a preset time period, the displayed desktop theme information is cancelled.
Specifically, the text information of the desktop theme is displayed in the menu bar or status bar, accompanied by a link to the desktop theme stored on the server. When the user taps the text link, an access request is sent, and after the user agrees, an online interface showing detailed preview images of the desktop theme is opened. More desktop theme preview images related or similar to that theme are displayed on the online interface. For example, for a happy mood, sports themes may be recommended, with darker colors, business-style icons, or NBA-type themes; for a sad mood, shopping-style themes may be recommended, with brighter colors, more vivid icons, or star- or Korean-style themes. The displayed desktop themes are available for the user to view and select. The recommended themes can be further classified for display by type, such as automobiles, beauty, plants, and weather. To reduce power consumption, when no access request from the user is received within the preset time period, or when an abandon request from the user is received, the desktop theme information displayed this time is automatically cancelled and the device returns to the desktop.
Furthermore, after entering the online interface containing the recommended desktop theme preview images, the user can select a favorite desktop theme and tap its related button; a selection box then pops up for the user to choose whether or not to apply the selected desktop theme. When the user chooses to apply it, the desktop theme is automatically downloaded and installed; when the user chooses to cancel, the recommended desktop theme information is cancelled and the desktop application returns to its original state.
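The apply-or-cancel decision can be sketched as a small helper; the theme names below are illustrative assumptions:

```python
def confirm_and_apply(selected_theme: str, current_theme: str, user_accepts: bool) -> str:
    """Sketch of the confirmation step: pop a choice, then either apply the selected
    theme or keep (restore) the original one. Theme names are illustrative."""
    if user_accepts:
        # in a real client this would download and install the theme package
        return selected_theme
    return current_theme  # user cancelled: stay in the original state

print(confirm_and_apply("NBA sports theme", "default theme", user_accepts=True))
```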
The user can have more choices by entering the online interface, and the problem of high memory resource occupancy rate of the terminal caused by downloading excessive desktop theme information is avoided, so that the aim of saving hardware resources is fulfilled.
Embodiment two:
If the current emotional state of the user is calm, mixed songs, i.e., music of various styles, are recommended, since a calm mood suits enjoying many kinds of songs; if the current emotional state is happy, cheerful songs are recommended, matching the happy mood and making the user feel even better; if the current emotional state is sad, slow, soft music is recommended to soothe and comfort the user; if the current emotional state is angry, some cathartic rock songs together with some calming songs are recommended to help the anger subside; if the current emotional state is excited, fast, high-energy songs are recommended to match the mood; and if the current emotion of the user is restless, music with more soothing and relaxing lyrics is recommended so that the user's mood eases back to calm.
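A simple table can capture this emotion-to-music mapping; the playlist descriptions below paraphrase the text above, while the dictionary structure itself is an assumed implementation detail:

```python
# Illustrative emotion-to-music mapping for Embodiment two; categories paraphrase the
# description above, the structure and labels are assumptions.
MUSIC_BY_EMOTION = {
    "calm":     "mixed playlist of several styles",
    "happy":    "upbeat, cheerful songs",
    "sad":      "slow, soft, soothing tracks",
    "angry":    "cathartic rock plus a few calming songs",
    "excited":  "fast, high-energy songs",
    "restless": "relaxing songs with soothing lyrics",
}

def recommend_music(emotion: str) -> str:
    """Fall back to a mixed playlist for any emotion not in the table."""
    return MUSIC_BY_EMOTION.get(emotion, "mixed playlist of several styles")

print(recommend_music("sad"))
```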
The music already downloaded to the terminal is recommended according to the user's emotional feature information, or corresponding music is downloaded online and recommended according to that information. When the user has downloaded a large amount of music to the terminal in advance, this step can quickly filter the downloaded music and recommend tracks suitable for the user. The recommendation is made through a push message in the notification bar or the status bar, and the user can jump to the corresponding music playing page after tapping the push message. The music page shows music related to the emotional feature information.
To achieve a better effect, recommending music corresponding to the user's emotional feature information comprises: downloading the music corresponding to that emotional feature information, and presenting the music through a menu bar or a status bar. The user needs to tap the music in the menu bar or status bar to further expand its specific content. The content displayed in the menu bar or status bar includes text and links. This lets the user easily browse the information related to the recommended music, saves resources, and keeps the process simple.
Furthermore, after entering the online interface containing the recommended music preview, the user can select favorite music and tap its related button; a selection box then pops up for the user to choose whether or not to apply the selected music. When the user chooses to cancel, the recommended music information is cancelled and the music application returns to its original state.
The user can have more choices by entering the online interface, and the problem of high memory resource occupancy rate of the terminal caused by downloading excessive music is avoided, so that the aim of saving hardware resources is fulfilled.
Embodiment three:
In this embodiment, the wallpaper already downloaded to the terminal is recommended according to the user's emotional feature information, or corresponding wallpaper is downloaded online and recommended according to that information; in the case of receiving a wallpaper recommendation request sent by the smart device, the screen resolution of the smart device is acquired. When the user has downloaded a large amount of wallpaper to the terminal in advance, this step can quickly screen the downloaded wallpaper and recommend wallpaper suitable for the user. The recommendation is made through a push message in the notification bar or the status bar, and the user can jump to the corresponding wallpaper page after tapping the push message. The wallpaper page shows wallpaper related to the emotional feature information.
To achieve a better effect, recommending the wallpaper corresponding to the user's emotional feature information comprises: downloading the wallpaper corresponding to that emotional feature information, and displaying the wallpaper through a menu bar or a status bar. The user needs to tap the wallpaper in the menu bar or status bar to further expand its specific content. The content displayed in the menu bar or status bar includes text and links. This lets the user easily browse the information related to the recommended wallpaper, saves resources, and keeps the process simple.
Furthermore, after entering the online interface containing the recommended wallpaper preview images, the user can select a favorite wallpaper and tap its related button; a selection box then pops up for the user to choose whether or not to apply the selected wallpaper. When the user chooses to apply it, the wallpaper is automatically downloaded and applied; when the user chooses to cancel, the recommended wallpaper is cancelled and the wallpaper returns to its original state.
The user can have more choices by entering the online interface, and the problem of high memory resource occupancy rate of the terminal caused by downloading excessive wallpaper is avoided, so that the aim of saving hardware resources is fulfilled.
A smart device implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
Referring to fig. 6-10, the device according to the present invention can be realized by a pushing device 10, wherein the pushing device 10 comprises: the storage module 20, the shooting module 30, the matching module 40, the obtaining module 50, and the pushing module 60:
the storage module 20:
In one embodiment of the invention, the database is built in a server; a communication connection is established between the smart device and the server; and the database is acquired from the server and loaded. Specifically:
The database unit provides: entering an expression image, and associating the entered expression image (at least one) with recommended application information to form the relation between the preset emotional features and the recommendation information. An expression image recognizer acquires, associates, and pre-stores the device applications matched with the expression images, and the expression images are stored in the device to form the database. The expression image recognizer can be a camera of the smart device, a camera connected to the smart device, or a camera module contained in the smart device. A relatively direct storage method can be adopted for the database, using the relation between the entered expression image (or a standard template of the expression image) and the recommendation information as the storage format of the data, so that the user can associate, pre-record, and modify the relation between the recommendation information and the expression images, which is convenient to operate.
When the currently acquired emotional feature information does not exist in the database, that is, when no relation between that emotional feature information and any recommendation information exists in the database, the smart device prompts the user whether to establish a corresponding relation between the currently acquired emotional feature information and recommendation information selected and designated by the user; after the user confirms, the corresponding relation is stored in the database.
The database can be located in the memory of the smart device or in a remote server; when the database is located in a remote server, the smart device can remotely read the data stored in the server via the communication unit and the loading unit.
The shooting module 30:
the matching module 40:
the facial image data of the user mainly includes data of several regions that are most obvious when the user is suffering from facial expressions and are easy to analyze, such as eyebrow form data, eye form data, mouth form data, and the like of the user.
In practical application, the image acquisition unit obtains the facial image data of the user; specifically, it can be obtained from a facial photo or a video of the user. Taking video as an example, when the emotion of the user is recognized, the camera of the smart device is started to record video. After the video of the user is obtained, several still frames are randomly extracted from it; these frames contain the facial image of the user, and the image position unit recognizes and scans the facial image in each frame so as to extract the facial image data of the user. For the scanning of the facial image in a frame, the facial image can be divided into several non-overlapping regions, and each region is scanned without overlap to obtain the facial image data. The method of obtaining the facial image data from a facial photo of the user is similar to the above method of obtaining it from a video frame.
The emotion extraction unit may use a specific judgment method that first acquires the facial image and then segments out the image of the user's mouth and the image of the eyes; the more the corners of the user's mouth turn downward and the more the brows are furrowed, the higher the probability that the user is in a negative emotion, so a result that the user is in a negative emotion can be recognized with higher probability. Emotion is not identified solely from the corners of the mouth and the eyes; it can also be well determined from other image data of the user's face.
Furthermore, in order to enable emotion recognition to be more consistent with personalized specificity of the user, a database which belongs to the user and accords with the user personalization can be established in a mode of acquiring facial image data of the user in advance. The database stores: and analyzing the obtained face image data corresponding to the positive emotion and the face image data corresponding to the negative emotion according to the user face image data obtained by history. When the emotion recognition of the user is determined, the acquired facial image data of the user can be directly compared with corresponding data in the database respectively to obtain a more accurate recognition result.
In practical application, after obtaining the emotion recognition results corresponding to various types of feature data and the accuracy corresponding to the emotion results, optionally, the emotion recognition result with the highest accuracy can be obtained from the emotion recognition results as the current emotion of the user in a mode of comparing the accuracy corresponding to the emotion recognition results. Therefore, the purposes of reducing the false recognition rate and improving the emotion recognition accuracy rate are achieved.
Preferably, the emotional feature information in the facial image data of the user is acquired, emotion recognition is performed based on the acquired feature information, and at least two corresponding kinds of emotion information are obtained, so that the emotion of the user is determined from the at least two kinds of emotion information acquired, achieving accurate emotion recognition of the user.
Optionally, when the user uses a smart device such as a mobile phone, the emotion may change from time to time, generally in line with each user's operating habits. For example, when inputting information, negative emotion rises after the user repeatedly enters wrong information; when tapping the screen, negative emotion rises after the user finds that the response speed of the smart device is very slow; and the user's emotion may also be revealed while communicating with other people. When the user's emotion changes, these speeds change correspondingly: when the user is excited, the speed of inputting information and of tapping the screen becomes faster, and words related to excitement are likely to appear in messages exchanged with others; conversely, when the user's mood is calm, these speeds remain within a certain range; and when the user's mood is low, these speeds generally become slower, and words related to a low mood are likely to appear in the messages. Therefore, the user's emotional state can also be learned by analyzing such feature data.
Preferably, the facial expression in the image can be used as input to an Emotion API; a Face API returns, for each face in the image, the confidence of a group of emotions together with the bounding box of the face, and an emotion analysis chart is finally obtained, which includes emotions such as anger, calm (neutral), disgust, fear, happiness, sadness, and surprise. The final emotional feature value is determined according to the numerical value of each type of emotion in the emotion analysis chart.
The recommendation information comprises at least one of a mobile phone theme, a screen saver and music; and when the emotional characteristic information is contained in the database, displaying the recommendation information in the intelligent equipment.
The acquisition module 50:
the obtaining unit obtains the recommendation information in the database constructed by the storage module 20 based on the emotion feature information of the current user matched by the matching module 40 from the shooting module 30.
The pushing module 60:
the recommendation information comprises at least one of a mobile phone theme, a screen saver and music; when the emotional characteristic information is contained in the database, the display unit displays the recommendation information in the intelligent device.
Furthermore, after the display unit displays an online interface with preview images related to the recommendation information, the user can select the preferred recommendation and tap its related button; a selection box then pops up, and the receiving unit receives the user's choice to apply or not apply the recommendation. When the user chooses to apply it, the execution unit automatically downloads and applies the recommended content; when the user chooses to cancel, the execution unit cancels the current recommendation and restores the original state.
Entering the online interface gives the user more choices and avoids the high memory occupancy on the terminal caused by downloading excessive recommended content, thereby saving hardware resources.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, in which computer-executable instructions are stored, where the computer-readable storage medium is, for example, a non-volatile memory such as an optical disc, a hard disc, or a flash memory. The computer-executable instructions are used for causing a computer or a similar computing device to perform various operations in the recommendation method for application information.
Those skilled in the art will appreciate that the present invention includes apparatus directed to performing one or more of the operations described in the present application. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., a computer) readable medium, including but not limited to any type of disk, including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks, ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It should be noted that the embodiments of the present invention have been described in terms of preferred embodiments, and not by way of limitation, and that those skilled in the art can make modifications and variations of the embodiments described above without departing from the spirit of the invention.

Claims (10)

  1. A pushing method for application program content of an intelligent device is characterized by comprising the following steps:
    constructing a database storing correspondences between preset expression features, at least one set of emotion information, and recommended content of an application program;
    capturing an image of a user, and acquiring a current expression feature in the image;
    comparing the current expression feature with the preset expression features, and determining the current emotion information of the user;
    acquiring, from the database, the recommended content of the application program corresponding to the current emotion information;
    and displaying the recommended content of the application program on the intelligent device.
  2. The method as claimed in claim 1, wherein constructing the database storing correspondences between the preset expression features, at least one set of emotion information, and the recommended content of the application program comprises:
    the database is constructed in a server;
    establishing communication connection between the intelligent equipment and the server;
    and acquiring and loading the database in the server.
  3. The push method of claim 1, wherein capturing an image of a user, obtaining a current expressive feature within the image, comprises:
    detecting position information of the face of the user in the image;
    determining a partial image of the facial image of the user according to the position information;
    and extracting emotional characteristic information of the user from the local image.
  4. The pushing method according to claim 1, wherein obtaining the recommended content of the application program corresponding to the current emotion information in the database includes:
    the recommended content of the application program comprises at least one of a mobile phone theme, a screen saver and music;
    and when the emotion information is contained in the database, starting the application program in the intelligent equipment and prompting the recommended content of the application program.
  5. The push method of claim 1, wherein displaying the recommended content for the application in the smart device, further comprises:
    displaying a judgment interface for requesting the user to judge whether the recommended content is adopted;
    receiving a confirmation action of a user;
    and, according to the confirmation action, applying the recommended content or restoring the original state.
  6. A pushing apparatus for application content of a smart device, the pushing apparatus comprising: a storage module, a shooting module, a matching module, an acquisition module and a pushing module, wherein,
    the storage module is used for constructing a database that stores correspondences between preset expression features, at least one set of emotion information, and recommended content of the application program;
    the shooting module is used for shooting an image of a user and acquiring the current expression characteristics in the image;
    the matching module is in communication connection with the shooting module and is used for comparing the current expression characteristics with the preset expression characteristics and judging the current emotion information of the user;
    the acquisition module is in communication connection with the matching module and is used for acquiring the recommended content of the application program corresponding to the current emotion information from the database;
    the pushing module is in communication connection with the obtaining module and is used for displaying the recommended content of the application program in the intelligent equipment.
  7. The pushing device of claim 6, wherein the storage module comprises:
    the database unit is used for constructing the database in a server;
    the communication unit is in communication connection with the server and is used for establishing communication connection between the intelligent equipment and the server;
    and the loading unit is in communication connection with the communication unit and is used for acquiring and loading the database in the server through the communication unit.
  8. The pushing device of claim 6, wherein the photographing module comprises:
    an image acquisition unit configured to detect position information of a face of the user in the image;
    the image position unit is in communication connection with the image acquisition unit and is used for determining a local image of the facial image of the user according to the position information;
    and the emotion extraction unit is in communication connection with the image position unit and is used for extracting the emotion characteristic information of the user from the local image.
  9. The pushing device of claim 6, wherein the recommended content of the application program comprises at least one of a mobile phone theme, a screen saver and music, and the acquisition module comprises:
    an acquisition unit, which is used for starting the application program in the smart device and prompting the recommended content of the application program when the current emotion information is contained in the database.
  10. The pushing device of claim 6, wherein the pushing module comprises:
    the display unit is used for displaying a judgment interface that asks the user whether to adopt the recommended content;
    the receiving unit is in communication connection with the display unit and is used for receiving the confirmation action of the user;
    and the execution unit is in communication connection with the receiving unit and is used for applying the recommended content or restoring the original state according to the confirmation action.
CN201780094620.8A 2017-08-02 2017-08-02 Application program content pushing method and pushing device for intelligent equipment Pending CN111433761A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/095659 WO2019024003A1 (en) 2017-08-02 2017-08-02 Method and apparatus for pushing content of application program to smart device

Publications (1)

Publication Number Publication Date
CN111433761A true CN111433761A (en) 2020-07-17

Family

ID=65232098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780094620.8A Pending CN111433761A (en) 2017-08-02 2017-08-02 Application program content pushing method and pushing device for intelligent equipment

Country Status (2)

Country Link
CN (1) CN111433761A (en)
WO (1) WO2019024003A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577516A (en) * 2013-07-01 2014-02-12 北京百纳威尔科技有限公司 Method and device for displaying contents

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298682A (en) * 2013-07-18 2015-01-21 广州华久信息科技有限公司 Information recommendation effect evaluation method and mobile phone based on facial expression images
CN103716702A (en) * 2013-12-17 2014-04-09 三星电子(中国)研发中心 Television program recommendation device and method
CN105163139A (en) * 2014-05-28 2015-12-16 青岛海尔电子有限公司 Information push method, information push server and intelligent television
CN104462468A (en) * 2014-12-17 2015-03-25 百度在线网络技术(北京)有限公司 Information supply method and device
CN105426404A (en) * 2015-10-28 2016-03-23 广东欧珀移动通信有限公司 Music information recommendation method and apparatus, and terminal
CN105956059A (en) * 2016-04-27 2016-09-21 乐视控股(北京)有限公司 Emotion recognition-based information recommendation method and apparatus
CN106250553A (en) * 2016-08-15 2016-12-21 珠海市魅族科技有限公司 A kind of service recommendation method and terminal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883280A (en) * 2021-03-25 2021-06-01 贵阳货车帮科技有限公司 Processing system and method for user recommended content
CN112883280B (en) * 2021-03-25 2023-08-04 贵阳货车帮科技有限公司 Processing system and method for user recommended content

Also Published As

Publication number Publication date
WO2019024003A1 (en) 2019-02-07

Similar Documents

Publication Publication Date Title
US20200159392A1 (en) Method of processing content and electronic device thereof
EP4128672B1 (en) Combining first user interface content into second user interface
US11328008B2 (en) Query matching to media collections in a messaging system
CN109857327B (en) Information processing apparatus, information processing method, and storage medium
CN108496150B (en) Screen capture and reading method and terminal
CN110164415B (en) Recommendation method, device and medium based on voice recognition
KR102367828B1 (en) Operating method for communication and Electronic device supporting the same
CN108494947B (en) Image sharing method and mobile terminal
WO2018072149A1 (en) Picture processing method, device, electronic device and graphic user interface
US20160283055A1 (en) Customized contextual user interface information displays
CN108038102B (en) Method and device for recommending expression image, terminal and storage medium
KR102625254B1 (en) Electronic device and method providing information associated with image to application through input unit
KR102139662B1 (en) Method and device for executing application
CN111695004B (en) Application information processing method, device, computer equipment and storage medium
US20210407506A1 (en) Augmented reality-based translation of speech in association with travel
US11477143B2 (en) Trending content view count
KR101626874B1 (en) Mobile terminal and method for transmitting contents thereof
WO2019201109A1 (en) Word processing method and apparatus, and mobile terminal and storage medium
CN111797304A (en) Content pushing method, device and equipment
CN112085568B (en) Commodity and rich media aggregation display method and equipment, electronic equipment and medium
CN105809162B (en) Method and device for acquiring WIFI hotspot and picture associated information
US9330301B1 (en) System, method, and computer program product for performing processing based on object recognition
CN110932964A (en) Information processing method and device
WO2019223484A1 (en) Information display method and apparatus, and mobile terminal and storage medium
CN111831132A (en) Information recommendation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200717