WO2016127538A1 - Information push method and apparatus - Google Patents

Information push method and apparatus (信息推送方法和装置)

Info

Publication number
WO2016127538A1
WO2016127538A1 (application PCT/CN2015/081695)
Authority
WO
WIPO (PCT)
Prior art keywords
information
face
push
gesture
voice
Prior art date
Application number
PCT/CN2015/081695
Other languages
English (en)
French (fr)
Inventor
李阳
顾嘉唯
余凯
Original Assignee
百度在线网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司
Priority to US15/322,504 (granted as US10460152B2)
Publication of WO2016127538A1

Classifications

    • G PHYSICS
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; localisation; normalisation
    • G06V40/168 Feature extraction; face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition
    • G05B15/02 Systems controlled by a computer, electric
    • G06F16/00 Information retrieval; database structures therefor; file system structures therefor
    • G06F16/24575 Query processing with adaptation to user needs, using context
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G09B19/003 Repetitive work cycles; sequence of movements
    • G09B19/0084 Dental hygiene
    • G09B19/0092 Nutrition
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Definitions

  • the present invention relates to the field of information processing technologies, and in particular, to an information push method and apparatus.
  • Existing applications of face recognition technology focus on three areas: (1) face recognition for identity verification; (2) matching against celebrities, computing similarity, and searching for similar faces; (3) virtual beautification and face swapping for entertainment.
  • the object of the present invention is to solve at least one of the above technical problems to some extent.
  • an object of the present invention is to provide an information push method which can improve the diversification and personalization level of information push.
  • a second object of the present invention is to provide an information push device.
  • a third object of the present invention is to provide a storage medium.
  • a fourth object of the present invention is to provide an information push device.
  • the information pushing method of the first aspect of the present invention includes: detecting face information and acquiring control information; acquiring push information according to the face information and control information; and displaying the push information.
  • The information pushing method proposed by embodiments of the present invention identifies and analyzes the state of the face by detecting face information and acquiring control information, obtains push information accordingly, and can offer improvement suggestions on various aspects of the user's state, thereby achieving diversified and personalized information push.
  • the information pushing apparatus of the second aspect of the present invention includes: a detecting module, configured to detect face information, and acquire control information; and an acquiring module, configured to acquire according to the face information and the control information Pushing information; a display module for displaying the push information.
  • The information pushing device provided by embodiments of the present invention identifies and analyzes the state of the face by detecting face information and acquiring control information, obtains push information accordingly, and can offer improvement suggestions on various aspects of the user's state, thereby achieving diversified and personalized information push.
  • a storage medium configured to store an application, and the application is used to execute the information pushing method according to the first aspect of the present invention.
  • An information pushing apparatus includes: one or more processors; a memory; and one or more modules stored in the memory which, when executed by the one or more processors, perform the following operations: detecting face information and acquiring control information; acquiring push information according to the face information and the control information; and displaying the push information.
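The three operations above (detect, acquire, display) can be sketched as a minimal pipeline. All function names, detector outputs, and push messages below are illustrative assumptions, not taken from the patent:

```python
# Sketch of the detect -> acquire -> display flow described in the summary.
# The stand-in detector simply returns fixed values; a real device would run
# camera-based face analysis and gesture/voice recognition here.

def detect(frame, audio):
    """S101: detect face information and acquire control information."""
    face_info = {"skin_moisture": 0.4, "expression": "tired"}  # stand-in output
    control_info = {"voice": audio, "gesture": None}
    return face_info, control_info

def acquire_push_info(face_info, control_info):
    """S102: map the detected state to push information."""
    items = []
    if face_info.get("expression") == "tired":
        items.append("Sleep-quality tips")
    if control_info.get("voice") == "monitor the eyes":
        items.append("Eye-monitoring report")
    return items

def display(items):
    """Show the push information (plain text here; could be voice/animation)."""
    return "\n".join(f"- {it}" for it in items)

face, ctrl = detect(frame=None, audio="monitor the eyes")
print(display(acquire_push_info(face, ctrl)))
```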
  • FIG. 1 is a schematic flowchart of an information pushing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of an information pushing method according to another embodiment of the present invention.
  • FIG. 3 is a schematic diagram of input information of a smart device according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of push information outputted by a smart device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an information pushing apparatus according to another embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an information pushing apparatus according to another embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an information pushing method according to an embodiment of the present invention, where the method includes:
  • S101 Detect face information and acquire control information.
  • The face can be detected by a smart device having a face detection function. The smart device can detect face information through its camera, and can also obtain control information through the camera and/or other modules.
  • The detected face information may be the current face information detected in real time, or long-term face information collected over a preset time period.
  • The control information may include gesture information and/or voice information. The gesture information is obtained, for example, after the smart device captures the user's gesture through the camera and recognizes it; the voice information is obtained after the smart device captures the user's voice and recognizes it.
  • the face information may include one or more of skin information, hair information, eye information, eyebrow information, nose information, tooth information, lip information, expression information, and/or makeup information.
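One possible container for the face information fields listed above. The field names and the use of free-text values are assumptions for illustration only:

```python
# Hypothetical container for the face information fields the text enumerates
# (skin, hair, eyes, eyebrows, nose, teeth, lips, expression, makeup).
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaceInfo:
    skin: Optional[str] = None        # e.g. "dry", "oily"
    hair: Optional[str] = None
    eyes: Optional[str] = None
    eyebrows: Optional[str] = None
    nose: Optional[str] = None
    teeth: Optional[str] = None
    lips: Optional[str] = None
    expression: Optional[str] = None  # e.g. "smiling", "tired"
    makeup: Optional[str] = None

info = FaceInfo(skin="dry", expression="tired")
print(info.skin, info.expression)
```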
  • The gesture information may be current-state information corresponding to an automatically recognized gesture, for example a recognized gesture indicating that the user is brushing teeth or massaging the face; or it may be command-mode gesture information, where commands corresponding to specific gestures are preset, the device enters command mode when a specific gesture is detected, and the corresponding operation is performed. For example, if stroking the full face with the palm is preset to mean "turn on full-face detection", that instruction is executed when the stroking action is captured; if pointing a finger at a specific part is preset to mean "enlarge that part for detailed recognition", that instruction is executed when the pointing gesture is detected.
  • The voice information may include command-mode voice information, where commands corresponding to specific voice content are preset and command mode is entered when such content is detected; or virtual-dialogue-mode voice information, such as "What if my eyes were bigger?" or "How can I make my skin more delicate?"
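The preset mappings described above (specific gestures and voice phrases triggering commands, with unmatched voice falling through to the virtual-dialogue mode) can be sketched as a small dispatch table. The gesture labels, phrases, and command names are invented for illustration:

```python
# Hypothetical command-mode dispatch: preset gestures/voice phrases map to
# device commands; unmatched voice input is treated as virtual dialogue.

GESTURE_COMMANDS = {
    "stroke_full_face": "enable_full_face_detection",
    "point_at_part": "zoom_and_inspect_part",
}

VOICE_COMMANDS = {
    "monitor the eyes": "enable_eye_monitoring",
    "monitor the lips": "enable_lip_monitoring",
}

def interpret(gesture=None, voice=None):
    """Return (mode, payload) for a recognized gesture or voice phrase."""
    if gesture in GESTURE_COMMANDS:
        return ("command", GESTURE_COMMANDS[gesture])
    if voice in VOICE_COMMANDS:
        return ("command", VOICE_COMMANDS[voice])
    if voice is not None:
        return ("dialogue", voice)   # e.g. "what if my eyes were bigger?"
    return ("state", None)           # no command: treat as current-state info

assert interpret(gesture="stroke_full_face") == ("command", "enable_full_face_detection")
assert interpret(voice="what if my eyes were bigger?")[0] == "dialogue"
```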
  • S102: Acquire push information according to the face information and the control information.
  • the push information may include at least one of basic class information, recommended class information, tutorial class information, and the like.
  • the basic class information may include two types of non-personal basic information and personal basic information.
  • Non-personal basic information may include weather conditions, today's headlines, today's plan, etc.; personal basic information may include changes in facial status, current makeup, sleep quality, and the like.
  • Recommended information can be tailored to the user's current condition, covering makeup, skin care, clothing, accessories, diet, exercise, environment, and rest, and can guide the user by showing before-and-after changes.
  • Tutorial information can include makeup tutorials, shaving tutorials, face washing, face slimming, massage, skin-care application tutorials, tooth-brushing tutorials, and more.
  • Corresponding push information may be acquired according to the face information and the control information; for example, a sleep quality score or suggestion may be pushed according to the face information, or a corresponding instruction may be executed according to the control information and the corresponding information pushed.
  • When the control information is gesture information: if it is current-state information corresponding to an automatically recognized gesture, for example a gesture recognized as brushing teeth or massaging the face, information such as brushing or facial-massage tutorials and notes may be pushed; if it is command-mode gesture information, then, for example, full-face detection is turned on when a full-face stroking gesture is detected, and a part is enlarged for detailed recognition when a finger points at it.
  • When the control information is voice information: if it is command-mode voice information, the corresponding operation is performed according to the detected voice command; for example, when the detected voice content is "monitor the eyes" or "monitor the lips", the corresponding eye or lip monitoring mode is turned on. If it is virtual-dialogue-mode voice information, for example when the detected voice content is "What if my eyes were bigger?", the effect of enlarged eyes is rendered virtually on the current face.
  • The push information may be displayed statically, such as text or images, or dynamically, such as voice or animation, for example guiding correct brushing time and technique through an engaging format such as an interactive game.
  • By detecting face information and acquiring control information, the state of the face is identified and analyzed, push information is obtained according to the face information and the control information, and improvement suggestions can be offered on various aspects of the user's state, achieving diversified and personalized information push.
  • FIG. 2 is a schematic flowchart of a method for pushing information according to another embodiment of the present invention, where the method includes:
  • the embodiment can be performed by a smart device having an associated function, such as a Baidu Mirror.
  • the user can be prompted to log in. There are many ways to log in, and details are not described here. After logging in, the user can be prompted to set personal information.
  • The personal information may be historical information recorded in the smart device, or may be information manually input by the user or imported from other devices. For example, the user's personal information on other devices may be obtained over a network connection.
  • the face information can be automatically detected in advance, and the detected face information is matched with the personal information set by the user to obtain matching personal information.
  • the user can save his or her face information when setting, and it can be understood that one or more user information can be saved in the same smart device.
  • the smart device may compare the detected face information with one or more face information saved in advance to obtain matching face information.
  • The smart device can save the currently detected face information after each match, and obtain historical face information in subsequent use, thereby collecting long-term face information over the preset time period.
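Matching a detected face against the profiles saved on the device, and then appending the detection to that user's history, can be sketched as below. The patent only says detected faces are compared with saved ones; the embedding vectors, distance metric, and threshold here are assumptions:

```python
# Hypothetical sketch: match a detected face embedding against saved user
# profiles, then record it in that user's history for long-term analysis.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_user(embedding, profiles, threshold=0.6):
    """Return the saved user whose face embedding is closest, if close enough."""
    best_name, best_d = None, float("inf")
    for name, saved in profiles.items():
        d = distance(embedding, saved)
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= threshold else None

profiles = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
history = {name: [] for name in profiles}

detected = [0.12, 0.88]
user = match_user(detected, profiles)
if user is not None:
    history[user].append(detected)   # keep long-term data per user

print(user, len(history["alice"]))
```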
  • control information includes gesture information and voice information.
  • The input information of the smart device may include automatically recognized face information, gesture information, and voice information. For an example, refer to FIG. 3; the possibilities are not enumerated here.
  • S204 Acquire push information according to the input information.
  • the push information may include basic information, recommendation information, tutorial information, etc., and may be various, and will not be enumerated here.
  • long-term data of the face information can be acquired by long-term face detection, and the push information can be given based on the long-term data.
  • The relationship between specific face information, control information, and push information can be determined according to preset rules. For example, by analyzing long-term face data and determining that the user's complexion has been poor for a long time, the acquired basic information may include a note of poor sleep during that period.
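A rule of this kind (poor complexion over a long period suggesting poor sleep) can be sketched as a simple trend check. The scoring scale, window, and threshold are invented for illustration:

```python
# Sketch of deriving a push suggestion from long-term face data: if the
# average complexion score over a recent window is low, push a sleep note.

def sleep_suggestion(daily_complexion_scores, window=7, threshold=0.5):
    """Return a poor-sleep note if recent complexion scores are low, else None."""
    recent = daily_complexion_scores[-window:]
    avg = sum(recent) / len(recent)
    if avg < threshold:
        return "Your complexion has been poor lately; sleep quality may be low."
    return None

scores = [0.8, 0.7, 0.4, 0.3, 0.35, 0.3, 0.4, 0.3, 0.35]
print(sleep_suggestion(scores))
```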
  • The push information may be displayed in one or more forms such as text, pictures, and voice.
  • the smart home appliance can also be controlled according to the face information. For example, when the smart device detects that the skin moisture of the face is less than a preset value, the humidifier may be controlled to humidify the air, or when the smart device detects that the skin temperature is greater than a preset value, the air conditioner may be controlled to cool down.
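The appliance-control rule described above can be sketched as two threshold checks. The thresholds and the string-based appliance actions are assumptions; a real device would call its home-automation API:

```python
# Sketch of the appliance-control rule: low skin moisture turns on the
# humidifier, high skin temperature turns on the air conditioner.

MOISTURE_MIN = 0.3      # below this, the air is considered too dry (assumed)
TEMP_MAX = 37.5         # above this, cool the room, degrees C (assumed)

def appliance_actions(skin_moisture, skin_temp):
    actions = []
    if skin_moisture < MOISTURE_MIN:
        actions.append("humidifier:on")
    if skin_temp > TEMP_MAX:
        actions.append("air_conditioner:cool")
    return actions

print(appliance_actions(skin_moisture=0.2, skin_temp=38.0))
```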
  • The state of the face is identified and analyzed, push information is obtained according to the face information and the control information, various suggestions can be made for the user's state, and the quality of personalized information push is improved, improving the user's quality of life.
  • statistical data and rationalization suggestions can be provided according to changes in face information to help users improve their life details.
  • the present invention also proposes an information push device.
  • FIG. 5 is a schematic structural diagram of an information pushing apparatus according to another embodiment of the present invention.
  • the information pushing device includes: a detecting module 100, an obtaining module 200, and a display module 300.
  • the detecting module 100 is configured to detect face information and acquire control information. More specifically, the detection module 100 can detect a human face through a smart device having a face detection function.
  • the smart device can detect the face information through the camera or the like set thereon, and the smart device can also obtain the control information through the camera and/or other modules.
  • The detected face information may be the current face information detected in real time, or long-term face information collected over a preset time period.
  • The control information may include gesture information and/or voice information. The gesture information is obtained, for example, after the smart device captures the user's gesture through the camera and recognizes it; the voice information is obtained after the smart device captures the user's voice and recognizes it.
  • the face information may include one or more of skin information, hair information, eye information, eyebrow information, nose information, tooth information, lip information, expression information, and/or makeup information.
  • The gesture information may be current-state information corresponding to an automatically recognized gesture, for example a recognized gesture indicating that the user is brushing teeth or massaging the face; or it may be command-mode gesture information, where commands corresponding to specific gestures are preset, the device enters command mode when a specific gesture is detected, and the corresponding operation is performed. For example, if stroking the full face with the palm is preset to mean "turn on full-face detection", that instruction is executed when the stroking action is captured; if pointing a finger at a specific part is preset to mean "enlarge that part for detailed recognition", that instruction is executed when the pointing gesture is detected.
  • The voice information may include command-mode voice information, where commands corresponding to specific voice content are preset and command mode is entered when such content is detected; or virtual-dialogue-mode voice information, such as "What if my eyes were bigger?" or "How can I make my skin more delicate?"
  • the obtaining module 200 is configured to obtain the push information according to the face information and the control information.
  • the push information may include at least one of basic class information, recommended class information, tutorial class information, and the like.
  • the basic class information may include two types of non-personal basic information and personal basic information.
  • Non-personal basic information may include weather conditions, today's headlines, today's plan, etc.; personal basic information may include changes in facial status, current makeup, sleep quality, and the like.
  • Recommended information can be tailored to the user's current condition, covering makeup, skin care, clothing, accessories, diet, exercise, environment, and rest, and can guide the user by showing before-and-after changes.
  • Tutorial information can include makeup tutorials, shaving tutorials, face washing, face slimming, massage, skin-care application tutorials, tooth-brushing tutorials, and more.
  • The obtaining module 200 may acquire corresponding push information according to the face information and the control information; for example, it may push a sleep quality score or suggestion according to the face information, or execute a corresponding instruction according to the control information and push the corresponding information.
  • When the control information is gesture information: if it is current-state information corresponding to an automatically recognized gesture, for example a gesture recognized as brushing teeth or massaging the face, information such as brushing or facial-massage tutorials and notes may be pushed; if it is command-mode gesture information, then, for example, full-face detection is turned on when a full-face stroking gesture is detected, and a part is enlarged for detailed recognition when a finger points at it.
  • When the control information is voice information: if it is command-mode voice information, the corresponding operation is performed according to the detected voice command; for example, when the detected voice content is "monitor the eyes" or "monitor the lips", the corresponding eye or lip monitoring mode is turned on. If it is virtual-dialogue-mode voice information, for example when the detected voice content is "What if my eyes were bigger?", the effect of enlarged eyes is rendered virtually on the current face.
  • The presentation module 300 is configured to display the push information. More specifically, the push information may be displayed statically, such as text or images, or dynamically, such as voice or animation, for example guiding correct brushing time and technique through an interactive game. There are many specific display methods, which are not enumerated here.
  • By detecting face information and acquiring control information, the state of the face is identified and analyzed, push information is obtained according to the face information and the control information, and improvement suggestions can be offered on various aspects of the user's state, achieving diversified and personalized information push.
  • FIG. 6 is a schematic structural diagram of an information pushing apparatus according to another embodiment of the present invention.
  • the information pushing device includes: a detecting module 100, an obtaining module 200, a display module 300, and a control module 400.
  • the embodiment can be performed by a smart device having an associated function, such as a Baidu Mirror.
  • the user can be prompted to log in. There are many ways to log in, and details are not described here. After logging in, the user can be prompted to set personal information.
  • The personal information may be historical information recorded in the smart device, or may be information manually input by the user or imported from other devices. For example, the user's personal information on other devices may be obtained over a network connection.
  • the face information can be automatically detected in advance, and the detected face information is matched with the personal information set by the user to obtain matching personal information.
  • the user can save his or her face information when setting, and it can be understood that one or more user information can be saved in the same smart device.
  • the smart device may compare the detected face information with one or more face information saved in advance to obtain matching face information.
  • The smart device can save the currently detected face information after each match, and obtain historical face information in subsequent use, thereby collecting long-term face information over the preset time period.
  • the information pushing device further includes: a control module 400, configured to control the smart home appliance according to the face information.
  • the control module 400 can control the smart home appliance according to the face information. For example, when the detecting module 100 detects that the skin moisture of the human face is less than a preset value, the control module 400 may control to turn on the humidifier to humidify the air, or when the detecting module 100 detects that the skin temperature is greater than a preset value, the control module 400 The air conditioner can be controlled to cool down.
  • The state of the face is identified and analyzed, push information is obtained according to the face information and the control information, various suggestions can be made for the user's state, and the quality of personalized information push is improved, improving the user's quality of life.
  • statistical data and rationalization suggestions can be provided according to changes in face information to help users improve their life details.
  • the present invention also provides a storage medium for storing an application for performing the information pushing method according to any of the embodiments of the present invention.
  • The present invention also provides an information push device comprising: one or more processors; a memory; and one or more modules stored in the memory which, when executed by the one or more processors, perform the following operations:
  • S101' detecting face information and acquiring control information.
  • S102' Acquire push information according to the face information and the control information.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM). Additionally, the computer readable medium may even be capable of printing the same thereon Program paper or other suitable medium, as the program can be obtained electronically, for example by optical scanning of paper or other media, followed by editing, interpretation or, if necessary, processing in other suitable manner, and then storing it In computer memory.
  • portions of the present invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: discrete logic circuits having logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gate circuits, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Public Health (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Computational Linguistics (AREA)
  • Nutrition Science (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides an information push method and apparatus. The information push method comprises: detecting face information and acquiring control information; acquiring push information according to the face information and the control information; and displaying the push information. The method and apparatus can improve the diversity and personalization of information push.

Description

Information Push Method and Apparatus
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201510069649.3, entitled "Information Push Method and Apparatus" and filed by Baidu Online Network Technology (Beijing) Co., Ltd. on February 10, 2015.
TECHNICAL FIELD
The present invention relates to the field of information processing technologies, and in particular to an information push method and apparatus.
BACKGROUND
With the development of informatization, face recognition technology has been applied more and more widely.
Existing applications based on face recognition technology focus on the following three aspects: 1. face recognition for identity verification; 2. analyzing the match and degree of similarity with celebrities, and searching for similar faces; 3. virtual beautification on the basis of an existing face, and entertainment through face swapping.
However, the prior art only recognizes the image or biometric information of a face and takes the face as the sole information input, so the output results are relatively monotonous.
SUMMARY
The object of the present invention is to solve at least one of the above technical problems at least to some extent.
To this end, a first object of the present invention is to provide an information push method which can improve the diversity and personalization of information push.
A second object of the present invention is to provide an information push apparatus.
A third object of the present invention is to provide a storage medium.
A fourth object of the present invention is to provide an information push device.
To achieve the above objects, an information push method according to embodiments of the first aspect of the present invention comprises: detecting face information and acquiring control information; acquiring push information according to the face information and the control information; and displaying the push information.
With the information push method according to embodiments of the present invention, face information is detected and control information is acquired, and the state of the face is recognized and analyzed, so that push information is acquired according to the face information and the control information. Improvement suggestions covering many aspects of the user's state can thus be given, achieving diversified and personalized information push.
To achieve the above objects, an information push apparatus according to embodiments of the second aspect of the present invention comprises: a detecting module configured to detect face information and acquire control information; an acquiring module configured to acquire push information according to the face information and the control information; and a displaying module configured to display the push information.
With the information push apparatus according to embodiments of the present invention, face information is detected and control information is acquired, and the state of the face is recognized and analyzed, so that push information is acquired according to the face information and the control information. Improvement suggestions covering many aspects of the user's state can thus be given, achieving diversified and personalized information push.
To achieve the above objects, a storage medium according to embodiments of the third aspect of the present invention is used for storing an application program, the application program being configured to perform the information push method according to embodiments of the first aspect of the present invention.
To achieve the above objects, an information push device according to embodiments of the fourth aspect of the present invention comprises: one or more processors; a memory; and one or more modules stored in the memory which, when executed by the one or more processors, perform the following operations: detecting face information and acquiring control information; acquiring push information according to the face information and the control information; and displaying the push information.
Additional aspects and advantages of the present invention will be given in part in the following description, become apparent in part from the following description, or be learned through practice of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of an information push method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an information push method according to another embodiment of the present invention;
Fig. 3 is a schematic diagram of input information of a smart device according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of push information output by a smart device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an information push apparatus according to another embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an information push apparatus according to another embodiment of the present invention.
DETAILED DESCRIPTION
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, throughout which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and shall not be construed as limiting the present invention. On the contrary, the embodiments of the present invention cover all changes, modifications and equivalents falling within the spirit and scope of the appended claims.
The information push method and apparatus according to embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an information push method according to an embodiment of the present invention. The method comprises the following steps.
S101: detecting face information and acquiring control information.
Specifically, the face may be detected by a smart device having a face detection function.
The smart device may detect the face information through a camera or the like provided thereon, and may further acquire the control information through the camera and/or other modules.
Optionally, the detected face information may be current face information detected in real time, or may be long-term data of face information within a preset time period.
The control information may include gesture information and/or voice information. The gesture information is obtained, for example, after the smart device captures the user's gesture through the camera and recognizes the gesture; the voice information is obtained, for example, after the smart device captures the user's voice through a microphone and recognizes the voice.
Specifically, the face information may include one or more of: skin information, hair information, eye information, eyebrow information, nose information, teeth information, lip information, expression information and/or makeup information.
The gesture information may include current state information corresponding to an automatically recognized gesture; for example, the current state corresponding to the recognized gesture is a tooth-brushing state or a face-massaging state. Alternatively, the gesture information may be gesture information in a command mode. For example, commands corresponding to specific gestures may be preset; when such a specific gesture is detected, the command mode is entered, and the corresponding operation is performed according to the detected gesture command. For example, if the preset specific gesture is a palm stroking across the whole face, indicating a command to start the full-face detection state, then when the stroking motion across the whole face is captured, the instruction to start the full-face detection state is executed; if the preset specific gesture is a finger pointing at a specific part, indicating an instruction to zoom in on that part for detailed recognition, then when the finger points at a specific part, the instruction to zoom in on that part for detailed recognition is executed.
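The command-mode gesture handling described above, with preset gestures mapped to commands such as starting full-face detection or zooming in on a pointed-at part, can be sketched as a small dispatch table. The gesture labels and command names below are hypothetical stand-ins, not identifiers from the patent:

```python
# Minimal sketch of command-mode gesture dispatch. An upstream recognizer is
# assumed to yield gesture labels such as "palm_over_face"; the mapping is
# illustrative only.

GESTURE_COMMANDS = {
    "palm_over_face": "start_full_face_detection",
    "point_at_region": "zoom_region_for_detail",
}

def dispatch_gesture(gesture, region=None):
    """Map a recognized gesture to a command; unknown gestures return None."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return None  # not a command gesture; may be a state gesture instead
    if command == "zoom_region_for_detail" and region:
        return f"zoom:{region}"  # zoom in on the pointed-at part
    return command
```

An unrecognized gesture returns `None`, leaving it to be treated as a state gesture (such as tooth brushing) rather than a command.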
The voice information may include voice information in a command mode; for example, commands corresponding to specific voice content may be preset, and the command mode is entered when voice information with the specific content is detected. Alternatively, the voice information may be voice information in a virtual dialogue mode, for example, voice content such as "What would happen if my eyes were bigger?" or "How can I make my skin smoother?"
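The two voice modes just described can likewise be sketched as a simple router, assuming an upstream speech recognizer that yields plain text; the preset command phrases and mode labels are illustrative assumptions:

```python
# Route recognized speech into either command mode (preset phrases) or
# virtual dialogue mode (free-form questions). Phrases are placeholders.

VOICE_COMMANDS = {
    "monitor eyes": "eye_monitoring",
    "monitor lips": "lip_monitoring",
}

def route_voice(utterance):
    """Return ('command', action) for preset phrases, else ('dialogue', text)."""
    action = VOICE_COMMANDS.get(utterance.strip().lower())
    if action is not None:
        return ("command", action)
    return ("dialogue", utterance)
```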
S102: acquiring push information according to the face information and the control information.
The push information may include at least one of basic information, recommendation information, tutorial information and the like.
The basic information may include two categories: non-personal basic information and personal basic information. The non-personal basic information may include weather conditions, today's headlines, today's plan and the like; the personal information may include data changes, scores and other information regarding the face state, current makeup, sleep quality and the like.
The recommendation information may, for example, form suggestions on makeup, skin care, clothing, accessories, diet, exercise, environment, daily routine and the like for the current situation, and guide the user to complete them by comparing the changes before and after.
The tutorial information may include: makeup tutorials, shaving tutorials, tutorials on face washing, face slimming, massage and application of skin care products, tooth-brushing tutorials and the like.
Specifically, corresponding push information may be acquired according to the face information and the control information; for example, a sleep quality score or suggestion is pushed according to the face information, and a corresponding instruction is executed and corresponding information is pushed according to the control information. For example, if the control information is gesture information: when the gesture information is current state information corresponding to an automatically recognized gesture, for example, the current gesture is recognized as corresponding to a tooth-brushing state or a face-massaging state, information such as tutorials and precautions for tooth brushing or facial massage can be pushed; when the gesture information is gesture information in the command mode, for example, when a gesture of stroking across the whole face is detected, full-face detection is started, and when a finger points at a specific part, that part is zoomed in for detailed recognition. If the control information is voice information: when the voice information is voice information in the command mode, the corresponding operation is performed according to the detected voice command; for example, when the content of the detected voice information is "monitor eyes" or "monitor lips", the corresponding eye or lip monitoring mode is started. If the voice information is voice information in the virtual dialogue mode, for example, when the voice content input by the user is detected to be "What would happen if my eyes were bigger?", the effect of virtually enlarging the eyes is displayed on the current face state.
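As a rough illustration of this step, the mapping from face information and control information to push information can be expressed as preset rules. The field names, threshold and messages below are assumptions for the sketch, since the concrete rules are left to the implementation:

```python
# Rule-based sketch of S102: combine face information and control information
# into a list of push messages. All keys, thresholds and texts are placeholders.

def get_push_info(face_info, control_info):
    pushes = []
    # face-information rule: a low sleep-quality score triggers a suggestion
    if face_info.get("sleep_quality_score", 100) < 60:
        pushes.append("suggestion: go to bed earlier tonight")
    # control-information rule: a recognized state gesture triggers a tutorial
    state = control_info.get("gesture_state")
    if state == "brushing_teeth":
        pushes.append("tutorial: recommended brushing technique")
    elif state == "massaging_face":
        pushes.append("tutorial: facial massage tips")
    return pushes
```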
S103: displaying the push information.
Specifically, the push information may be displayed in a static manner such as text or images, or in a dynamic manner such as voice or animation; for example, the correct tooth-brushing time and technique may be taught in an entertaining way such as an interactive game. There are many other specific display manners, which are not listed one by one here.
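A minimal sketch of the display step, assuming hypothetical renderer callbacks for the static and dynamic forms mentioned above; which form is used for a given message is an illustrative policy, not something specified here:

```python
# Dispatch push messages to a renderer by display form. The renderers are
# stubs that tag the message; a real device would draw text or play audio.

RENDERERS = {
    "text": lambda msg: f"[text] {msg}",
    "voice": lambda msg: f"[voice] {msg}",
}

def display_push_info(messages, form="text"):
    """Render each push message with the renderer for the chosen form."""
    render = RENDERERS.get(form, RENDERERS["text"])  # fall back to text
    return [render(m) for m in messages]
```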
In this embodiment, face information is detected and control information is acquired, and the state of the face is recognized and analyzed, so that push information is acquired according to the face information and the control information. Improvement suggestions covering many aspects of the user's state can thus be given, achieving diversified and personalized information push.
Fig. 2 is a schematic flowchart of an information push method according to another embodiment of the present invention. The method comprises the following steps.
S201: setting personal information.
Specifically, this embodiment may be performed by a smart device having the relevant functions, such as the Baidu Magic Mirror (百度魔镜).
After the smart device is turned on, the user may be prompted to log in; there are various login manners, which are not described in detail here. After logging in, the user may be prompted to set personal information. The personal information may be history information recorded in the smart device, or may be information manually input by the user or imported from another device; for example, the user's personal information on other devices may be acquired via a network connection.
S202: detecting face information and matching it with the personal information.
Specifically, automatic detection of face information may be preset, and the detected face information is matched with the personal information set by the user to obtain the matched personal information. For example, users may save their own face information during setup; it can be understood that the information of one or more users may be saved in the same smart device. After detecting face information, the smart device may compare the detected face information with one or more pieces of pre-saved face information to obtain the matched face information.
It can be understood that the smart device may save the currently detected face information after each match, and may acquire the historical face information in subsequent use, thereby collecting long-term data of face information within a preset time period.
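The matching of detected face information against one or more saved user profiles might look like the toy sketch below, which compares feature vectors by Euclidean distance against a threshold. Real systems would use learned face embeddings; the vectors and threshold here are placeholders:

```python
# Toy face matching: return the closest stored profile within a distance
# threshold, else None. Feature vectors are illustrative stand-ins for
# real face embeddings.

import math

def match_face(detected, profiles, threshold=0.5):
    """Match a detected feature vector against saved per-user vectors."""
    best_name, best_dist = None, float("inf")
    for name, stored in profiles.items():
        dist = math.dist(detected, stored)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```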
S203: recognizing current control information.
For example, the control information includes gesture information and voice information.
As shown in Fig. 3, the input information of the smart device may include automatically recognized face information, gesture information and voice information. For details, refer to Fig. 3; they are not listed one by one here.
S204: acquiring push information according to the input information.
As shown in Fig. 4, the push information may include basic information, recommendation information, tutorial information and the like; there may be many kinds, which are not listed one by one here. In a specific embodiment, long-term data of face information may be acquired through long-time face detection, and push information may be given according to the long-term data. It can be understood that the specific relationships between the face information and control information on the one hand and the push information on the other may be determined according to preset principles. For example, if analysis of the long-term face data determines that the complexion has been poor for a long time, the basic data that can be acquired includes poor sleep during that time period, and so on.
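The example of deriving a conclusion from long-term data, where a persistently poor complexion suggests poor sleep over the period, can be sketched as a simple trend check; the score scale, threshold and message are assumptions:

```python
# Flag poor sleep when the daily complexion score stayed below a threshold
# for at least min_days within the collected period. Scale is illustrative.

def analyze_trend(daily_complexion_scores, threshold=60, min_days=5):
    """Return a basic-information message if the long-term trend is poor."""
    poor_days = sum(1 for s in daily_complexion_scores if s < threshold)
    if poor_days >= min_days:
        return "basic info: sleep quality appears poor over this period"
    return None
```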
S205: displaying the push information.
For example, the push information is displayed in one or more forms such as text, pictures and voice.
In another embodiment, after the face information is detected, a smart home appliance may also be controlled according to the face information. For example, when the smart device detects that the skin moisture of the face is lower than a preset value, it may turn on a humidifier to humidify the air; or, when the smart device detects that the skin temperature is higher than a preset value, it may control an air conditioner to cool the room.
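The appliance control described above reduces to threshold checks; the preset moisture and temperature values below are placeholders, not values taken from this text:

```python
# Threshold sketch of face-driven appliance control: low skin moisture turns
# on a humidifier, high skin temperature turns on cooling.

def control_appliances(skin_moisture, skin_temp,
                       moisture_min=40.0, temp_max=36.5):
    """Return the list of appliance actions triggered by the face readings."""
    actions = []
    if skin_moisture < moisture_min:
        actions.append("humidifier_on")
    if skin_temp > temp_max:
        actions.append("air_conditioner_cool")
    return actions
```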
In this embodiment, face information is detected and control information is acquired, and the state of the face is recognized and analyzed, so that push information is acquired according to the face information and the control information. Suggestions covering many aspects of the user's state can thus be given, improving the quality of personalized information push and the user's quality of life. In addition, by collecting long-term data of face information within a preset time period, statistical data and rationalized suggestions can be provided according to the changes in the face information, helping users improve the details of their daily life. Furthermore, smart home appliances can be controlled according to the face information.
In order to implement the above embodiments, the present invention further provides an information push apparatus.
Fig. 5 is a schematic structural diagram of an information push apparatus according to another embodiment of the present invention. As shown in Fig. 5, the information push apparatus comprises a detecting module 100, an acquiring module 200 and a displaying module 300.
Specifically, the detecting module 100 is configured to detect face information and acquire control information. More specifically, the detecting module 100 may detect the face through a smart device having a face detection function.
The smart device may detect the face information through a camera or the like provided thereon, and may further acquire the control information through the camera and/or other modules.
Optionally, the detected face information may be current face information detected in real time, or may be long-term data of face information within a preset time period.
The control information may include gesture information and/or voice information. The gesture information is obtained, for example, after the smart device captures the user's gesture through the camera and recognizes the gesture; the voice information is obtained, for example, after the smart device captures the user's voice through a microphone and recognizes the voice.
More specifically, the face information may include one or more of: skin information, hair information, eye information, eyebrow information, nose information, teeth information, lip information, expression information and/or makeup information.
The gesture information may include current state information corresponding to an automatically recognized gesture; for example, the current state corresponding to the recognized gesture is a tooth-brushing state or a face-massaging state. Alternatively, the gesture information may be gesture information in a command mode. For example, commands corresponding to specific gestures may be preset; when such a specific gesture is detected, the command mode is entered, and the corresponding operation is performed according to the detected gesture command. For example, if the preset specific gesture is a palm stroking across the whole face, indicating a command to start the full-face detection state, then when the stroking motion across the whole face is captured, the instruction to start the full-face detection state is executed; if the preset specific gesture is a finger pointing at a specific part, indicating an instruction to zoom in on that part for detailed recognition, then when the finger points at a specific part, the instruction to zoom in on that part for detailed recognition is executed.
The voice information may include voice information in a command mode; for example, commands corresponding to specific voice content may be preset, and the command mode is entered when voice information with the specific content is detected. Alternatively, the voice information may be voice information in a virtual dialogue mode, for example, voice content such as "What would happen if my eyes were bigger?" or "How can I make my skin smoother?"
The acquiring module 200 is configured to acquire push information according to the face information and the control information. The push information may include at least one of basic information, recommendation information, tutorial information and the like.
The basic information may include two categories: non-personal basic information and personal basic information. The non-personal basic information may include weather conditions, today's headlines, today's plan and the like; the personal information may include data changes, scores and other information regarding the face state, current makeup, sleep quality and the like.
The recommendation information may, for example, form suggestions on makeup, skin care, clothing, accessories, diet, exercise, environment, daily routine and the like for the current situation, and guide the user to complete them by comparing the changes before and after.
The tutorial information may include: makeup tutorials, shaving tutorials, tutorials on face washing, face slimming, massage and application of skin care products, tooth-brushing tutorials and the like.
More specifically, the acquiring module 200 may acquire corresponding push information according to the face information and the control information; for example, a sleep quality score or suggestion is pushed according to the face information, and a corresponding instruction is executed and corresponding information is pushed according to the control information. For example, if the control information is gesture information: when the gesture information is current state information corresponding to an automatically recognized gesture, for example, the current gesture is recognized as corresponding to a tooth-brushing state or a face-massaging state, information such as tutorials and precautions for tooth brushing or facial massage can be pushed; when the gesture information is gesture information in the command mode, for example, when a gesture of stroking across the whole face is detected, full-face detection is started, and when a finger points at a specific part, that part is zoomed in for detailed recognition. If the control information is voice information: when the voice information is voice information in the command mode, the corresponding operation is performed according to the detected voice command; for example, when the content of the detected voice information is "monitor eyes" or "monitor lips", the corresponding eye or lip monitoring mode is started. If the voice information is voice information in the virtual dialogue mode, for example, when the voice content input by the user is detected to be "What would happen if my eyes were bigger?", the effect of virtually enlarging the eyes is displayed on the current face state.
The displaying module 300 is configured to display the push information. More specifically, the push information may be displayed in a static manner such as text or images, or in a dynamic manner such as voice or animation; for example, the correct tooth-brushing time and technique may be taught in an entertaining way such as an interactive game. There are many other specific display manners, which are not listed one by one here.
In this embodiment, face information is detected and control information is acquired, and the state of the face is recognized and analyzed, so that push information is acquired according to the face information and the control information. Improvement suggestions covering many aspects of the user's state can thus be given, achieving diversified and personalized information push.
Fig. 6 is a schematic structural diagram of an information push apparatus according to another embodiment of the present invention. As shown in Fig. 6, the information push apparatus comprises a detecting module 100, an acquiring module 200, a displaying module 300 and a control module 400.
Specifically, this embodiment may be performed by a smart device having the relevant functions, such as the Baidu Magic Mirror (百度魔镜).
After the smart device is turned on, the user may be prompted to log in; there are various login manners, which are not described in detail here. After logging in, the user may be prompted to set personal information. The personal information may be history information recorded in the smart device, or may be information manually input by the user or imported from another device; for example, the user's personal information on other devices may be acquired via a network connection.
More specifically, automatic detection of face information may be preset, and the detected face information is matched with the personal information set by the user to obtain the matched personal information. For example, users may save their own face information during setup; it can be understood that the information of one or more users may be saved in the same smart device. After detecting face information, the smart device may compare the detected face information with one or more pieces of pre-saved face information to obtain the matched face information.
It can be understood that the smart device may save the currently detected face information after each match, and may acquire the historical face information in subsequent use, thereby collecting long-term data of face information within a preset time period.
On the basis of the previous embodiment, the information push apparatus further comprises a control module 400 configured to control a smart home appliance according to the face information.
After the face information is detected, the control module 400 may control a smart home appliance according to the face information. For example, when the detecting module 100 detects that the skin moisture of the face is lower than a preset value, the control module 400 may turn on a humidifier to humidify the air; or, when the detecting module 100 detects that the skin temperature is higher than a preset value, the control module 400 may control an air conditioner to cool the room.
In this embodiment, face information is detected and control information is acquired, and the state of the face is recognized and analyzed, so that push information is acquired according to the face information and the control information. Suggestions covering many aspects of the user's state can thus be given, improving the quality of personalized information push and the user's quality of life. In addition, by collecting long-term data of face information within a preset time period, statistical data and rationalized suggestions can be provided according to the changes in the face information, helping users improve the details of their daily life. Furthermore, smart home appliances can be controlled according to the face information.
In order to implement the above embodiments, the present invention further provides a storage medium for storing an application program, the application program being configured to perform the information push method according to any one of the embodiments of the present invention.
In order to implement the above embodiments, the present invention further provides an information push device, comprising: one or more processors; a memory; and one or more modules stored in the memory which, when executed by the one or more processors, perform the following operations:
S101': detecting face information and acquiring control information.
S102': acquiring push information according to the face information and the control information.
S103': displaying the push information.
In the description of this specification, descriptions with reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, without contradicting each other.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, such as two or three, unless specifically defined otherwise.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, may be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus or device). For the purposes of this specification, a "computer readable medium" may be any means that can contain, store, communicate, propagate or transport a program for use by, or in connection with, the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer readable media include: an electrical connection (electronic device) having one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium, and then editing, interpreting or otherwise processing it in a suitable manner if necessary, before storing it in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: discrete logic circuits having logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gate circuits, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried in the methods of the above embodiments may be completed by instructing the relevant hardware through a program, and the program may be stored in a computer readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware, or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may also be stored in a computer readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (16)

  1. An information push method, characterized by comprising:
    detecting face information and acquiring control information;
    acquiring push information according to the face information and the control information;
    displaying the push information.
  2. The method according to claim 1, characterized in that the detecting face information comprises:
    collecting long-term data of face information within a preset time period.
  3. The method according to claim 1 or 2, characterized in that, after the detecting face information, the method further comprises:
    controlling a smart home appliance according to the face information.
  4. The method according to any one of claims 1-3, characterized in that the acquiring control information comprises:
    acquiring gesture information, and/or, acquiring voice information.
  5. The method according to claim 4, characterized in that the gesture information comprises:
    current state information corresponding to an automatically recognized gesture; or,
    gesture information in a command mode.
  6. The method according to claim 4 or 5, characterized in that the voice information comprises:
    voice information in a command mode; or,
    voice information in a virtual dialogue mode.
  7. The method according to any one of claims 1-6, characterized in that the push information comprises at least one of the following:
    basic information, recommendation information, tutorial information.
  8. An information push apparatus, characterized by comprising:
    a detecting module configured to detect face information and acquire control information;
    an acquiring module configured to acquire push information according to the face information and the control information;
    a displaying module configured to display the push information.
  9. The apparatus according to claim 8, characterized in that the detecting module is further configured to collect long-term data of face information within a preset time period.
  10. The apparatus according to claim 8 or 9, characterized in that the apparatus further comprises:
    a control module configured to control a smart home appliance according to the face information.
  11. The apparatus according to any one of claims 8-10, characterized in that the detecting module is further configured to acquire gesture information and/or voice information.
  12. The apparatus according to claim 11, characterized in that the gesture information comprises:
    current state information corresponding to an automatically recognized gesture; or,
    gesture information in a command mode.
  13. The apparatus according to claim 11 or 12, characterized in that the voice information comprises:
    voice information in a command mode; or,
    voice information in a virtual dialogue mode.
  14. The apparatus according to any one of claims 8-13, characterized in that the push information comprises at least one of the following:
    basic information, recommendation information, tutorial information.
  15. A storage medium, characterized in that the storage medium is used for storing an application program, the application program being configured to perform the information push method according to any one of claims 1 to 7.
  16. An information push device, characterized by comprising:
    one or more processors;
    a memory;
    one or more modules stored in the memory which, when executed by the one or more processors, perform the following operations:
    detecting face information and acquiring control information;
    acquiring push information according to the face information and the control information;
    displaying the push information.
PCT/CN2015/081695 2015-02-10 2015-06-17 Information push method and apparatus WO2016127538A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/322,504 US10460152B2 (en) 2015-02-10 2015-06-17 Method and apparatus for pushing information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510069649.3A CN104679839B (zh) 2015-02-10 2015-02-10 Information push method and apparatus
CN201510069649.3 2015-02-10

Publications (1)

Publication Number Publication Date
WO2016127538A1 true WO2016127538A1 (zh) 2016-08-18

Family

ID=53314881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/081695 WO2016127538A1 (zh) 2015-02-10 2015-06-17 Information push method and apparatus

Country Status (3)

Country Link
US (1) US10460152B2 (zh)
CN (1) CN104679839B (zh)
WO (1) WO2016127538A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104679839B (zh) 2015-02-10 2018-09-07 百度在线网络技术(北京)有限公司 Information push method and apparatus
CN105354334B * 2015-11-27 2019-04-26 广州视源电子科技股份有限公司 Smart-mirror-based information publishing method and smart mirror
US11544274B2 * 2016-07-18 2023-01-03 Disney Enterprises, Inc. Context-based digital assistant
CN106250541A * 2016-08-09 2016-12-21 珠海市魅族科技有限公司 Information push method and device
CN106412007A * 2016-08-26 2017-02-15 珠海格力电器股份有限公司 Message push method and device for a smart terminal
CN107818110A * 2016-09-13 2018-03-20 青岛海尔多媒体有限公司 Information recommendation method and device
CN106648079A * 2016-12-05 2017-05-10 华南理工大学 Television entertainment system based on face recognition and gesture interaction
CN107038413A * 2017-03-08 2017-08-11 合肥华凌股份有限公司 Recipe recommendation method and device, and refrigerator
CN107018135A * 2017-04-06 2017-08-04 深圳天珑无线科技有限公司 Information push method, terminal and server
CN107483739B * 2017-08-24 2020-08-07 北京小米移动软件有限公司 Shaving reminder method, device and storage medium
CN108509046A * 2018-03-30 2018-09-07 百度在线网络技术(北京)有限公司 Smart home device control method and apparatus
CN109088924A * 2018-07-31 2018-12-25 西安艾润物联网技术服务有限责任公司 Service information push method, related device and storage medium
CN109067883B * 2018-08-10 2021-06-29 珠海格力电器股份有限公司 Information push method and device
CN109597930B * 2018-10-24 2023-10-13 创新先进技术有限公司 Information recommendation method, device and equipment
CN111198505A * 2018-11-20 2020-05-26 青岛海尔洗衣机有限公司 Control method for a household appliance to output audiovisual information
CN111198506A * 2018-11-20 2020-05-26 青岛海尔洗衣机有限公司 Control method for a household appliance to output audiovisual information
CN109934731A * 2019-01-25 2019-06-25 广州富港万嘉智能科技有限公司 Image-recognition-based food ordering method, electronic device and storage medium
CN110534172B * 2019-08-23 2022-07-19 青岛海尔科技有限公司 Oral cleaning reminder method and device based on a smart home operating system
KR20210035968A * 2019-09-24 2021-04-02 엘지전자 주식회사 Artificial intelligence massage apparatus and method for controlling a massage operation in consideration of a user's facial expression or utterance
CN111158248A * 2019-12-16 2020-05-15 珠海格力电器股份有限公司 Smart home feedback control method, computer-readable storage medium and home appliance

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8305188B2 (en) * 2009-10-07 2012-11-06 Samsung Electronics Co., Ltd. System and method for logging in multiple users to a consumer electronics device by detecting gestures with a sensory device
CN103345232A * 2013-07-15 2013-10-09 孟凡忠 Personalized smart home control method and system
CN103631380A * 2013-12-03 2014-03-12 武汉光谷信息技术股份有限公司 Human-computer interaction data processing method and control system thereof
CN103729585A * 2013-12-06 2014-04-16 南通芯迎设计服务有限公司 Home automation system
CN104679839A * 2015-02-10 2015-06-03 百度在线网络技术(北京)有限公司 Information push method and apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101317185B * 2005-10-05 2014-03-19 高通股份有限公司 Video-sensor-based automatic region-of-interest detection
US20120059850A1 * 2010-09-06 2012-03-08 Jonathan Binnings Bent Computerized face photograph-based dating recommendation system
JP5917841B2 * 2011-06-15 2016-05-18 日産自動車株式会社 Mood determination device and operation method of mood determination device
CN102523502A * 2011-12-15 2012-06-27 四川长虹电器股份有限公司 Smart TV interaction system and interaction method
CN103325089B * 2012-03-21 2016-08-03 腾讯科技(深圳)有限公司 Skin color processing method and device in images
CN103970260B * 2013-01-31 2017-06-06 华为技术有限公司 Non-contact gesture control method and electronic terminal device
CN103823393A * 2014-02-13 2014-05-28 宇龙计算机通信科技(深圳)有限公司 Control method and control device for smart home appliances
CN104038836A * 2014-06-03 2014-09-10 四川长虹电器股份有限公司 Method for intelligent push of television programs
US9747573B2 (en) * 2015-03-23 2017-08-29 Avatar Merger Sub II, LLC Emotion recognition for workforce analytics


Also Published As

Publication number Publication date
US10460152B2 (en) 2019-10-29
CN104679839A (zh) 2015-06-03
CN104679839B (zh) 2018-09-07
US20180218199A1 (en) 2018-08-02

Similar Documents

Publication Publication Date Title
WO2016127538A1 (zh) Information push method and apparatus
JP4481663B2 (ja) Motion recognition device, motion recognition method, apparatus control device, and computer program
JP5837991B2 (ja) Authenticated gesture recognition
TWI674516B (zh) Animation display method and human-computer interaction device
US9241620B1 (en) User aware digital vision correction
US9195815B2 (en) Systems and methods for automated selection of a restricted computing environment based on detected facial age and/or gender
US20160085565A1 (en) Dynamic multi-user computer configuration settings
US10168854B2 (en) User aware digital vision correction
JP2018534649A (ja) Method and apparatus for automatically capturing an object, and storage medium
US20160086020A1 (en) Apparatus and method of user interaction
TWI621999B (zh) Face detection method
BR112014027343B1 (pt) Método para receber entrada em um dispositivo sensível ao toque, dispositivo de armazenamento não transitório legível por computador e sistema de detecção de entrada
TW201937344A (zh) Smart robot and human-computer interaction method
WO2018214115A1 (zh) Method and device for evaluating facial makeup
CN108875785A (zh) Attention detection method and device based on behavioral feature comparison
Pandey et al. Acceptability of speech and silent speech input methods in private and public
WO2020215722A1 (zh) Video processing method and apparatus, electronic device, and computer-readable storage medium
US20190377791A1 (en) Natural language generation pattern enhancement
CN108519819A (zh) Processing method and apparatus for a smart device, smart device, and medium
US20200202131A1 (en) Analysis and feedback system for personal care routines
CN110286771A (zh) Interaction method and apparatus, smart robot, electronic device and storage medium
CN108628454B (zh) Virtual-human-based visual interaction method and system
CN103984415B (zh) Information processing method and electronic device
CN115686199B (zh) Group eye-movement trajectory generation method and apparatus, computing device and storage medium
US20160321356A1 (en) A device and a method for establishing a personal digital profile of a user

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15881699

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15322504

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15881699

Country of ref document: EP

Kind code of ref document: A1