CN112633067A - Intelligent system for collecting household information and user emotion and identification method - Google Patents
- Publication number
- CN112633067A (application CN202011327099.8A)
- Authority
- CN
- China
- Prior art keywords: information, user, module, control module, health
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06V40/174 — Facial expression recognition
- G06V40/168 — Feature extraction; face representation
- G06F40/30 — Handling natural language data; semantic analysis
- G05B19/0423 — Programme control using digital processors; input/output
- G16H50/30 — ICT for calculating health indices; individual health risk assessment
- G05B2219/25257 — Microcontroller (PC structure of the system)
Abstract
The invention provides an intelligent system for collecting household information and user emotion, together with an identification method. The invention solves the technical problems that the prior art offers only a single function, cannot collect family environment information or user health information, and interacts poorly. After identifying a user command, the system can acquire the corresponding content from the cloud platform and present it to the user, so it provides functions beyond collecting environment and health information and is readily extensible. Its human-computer interaction is strong and humanized: the system captures the user's facial image and voice information, analyzes them to obtain the user's expression and voice commands, and gives good feedback to both, which makes it convenient to use. It can also collect the environment information of the user's home and the health information of the user, so that the user gains a more comprehensive understanding of the surrounding environment and his or her physical condition.
Description
Technical Field
The invention relates to the technical field of electronic communication, and in particular to an intelligent system for collecting household information and user emotion and an identification method.
Background
At present, more and more electronic devices are entering the home to serve and simplify people's daily lives, but a single device usually provides only a single function, such as entertainment, alarms, indoor environment monitoring, or collection of human physiological indicators. A user who wants several functions must therefore buy several devices, which on the one hand increases spending on electronic equipment and on the other hand, because of the number of devices and the learning cost each one carries, makes them inconvenient to use.
With the rapid development of computer image recognition, artificial intelligence and other emerging technologies, computers play an indispensable role in human production and life, yet machines such as computers, game consoles and intelligent robots lack expression-based emotional communication with humans. Although the prior art has made some progress in automatic facial expression recognition by computer, it still lacks a quantitative, accurate design for emotional interaction between a machine or virtual character and a natural person. People now depend on computers, mobile phones, tablets and similar devices in work and daily life, and the time spent facing these devices can even exceed the time spent communicating with other people. If such devices are not user-friendly, users easily feel lonely and fatigued, their sense of well-being and working efficiency decline, and in serious cases mental or psychological problems may result. It is therefore both feasible and necessary to use technical means to make these devices "understand" the user's expression, as a natural person would, and respond in a friendly manner. Although human-robot emotional interaction has achieved good results and home environment control technology is developing rapidly, intelligent home environment control still does not take into account the demands that a user's different emotional states and behaviours place on the home environment; its degree of intelligence is low, it cannot satisfy people's present needs for digitization and intelligence, and user satisfaction and trust suffer as a result.
Disclosure of Invention
The invention aims to solve the technical problems that the prior art has only a single function, cannot collect family environment information or user health information, and has poor interaction capability, and it does so by providing an intelligent identification system for collecting household information and user emotion.
the utility model provides an gather intelligent recognition system of domestic information and user's emotion, includes the robot, be equipped with the operation arm on the robot, the robot includes control module, environmental sensor module, health sensor module, power module, camera module, stereo set module and display module:
the control module is used for controlling the activity of the operation arm, acquiring environmental information and/or health information of a user through the environmental sensor module and the health sensor module, performing structured preprocessing on the acquired environmental information and/or the health information of the user, and transmitting the information after the structured preprocessing to the cloud platform;
the power supply module is in signal connection with the control module and is used for providing energy for the control module;
the camera module is in signal connection with the control module and is used for acquiring the facial image information of the user according to the human-eye imaging principle and transmitting the facial image information to the control module;
the sound module is in signal connection with the control module and is used for collecting sound information of a user and transmitting the sound information to the control module;
and the display module is in signal connection with the control module and is used for displaying prompt information corresponding to the keywords when an extracted keyword is an instruction for acquiring health information.
In the above intelligent identification system for collecting household information and user emotion, the environment sensor module comprises a temperature sensor, a humidity sensor, a smoke sensor or a light sensor for collecting environment information.
According to the intelligent identification system for collecting the household information and the user emotion, the health sensor module comprises a heart rate sensor, a blood pressure sensor or a blood sugar sensor for collecting the health information.
According to the intelligent identification system for collecting the household information and the user emotion, the expression information comprises smiling, worry, anger, surprise and blankness.
In the intelligent identification system for collecting the household information and the user emotion, the control module is further used for controlling the display module and the sound module to execute corresponding operations according to the expression information of the user.
According to the intelligent identification system for collecting the household information and the user emotion, the robot body further comprises a wireless communication module for realizing interaction between the household information collection and user emotion identification equipment and the cloud platform.
The invention also aims to update the interactive functions in a timely manner so as to enhance the interactive experience: the system accurately identifies the voice information and expression information of the user and gives the user feedback through its internal logic library, thereby achieving emotional companionship.
The above technical problems are solved by the invention through the following technical scheme: a method for collecting household information and user emotion by an intelligent recognition device, comprising the following steps:
step A, judging whether an interaction request from a user is received, if so, starting a camera module and a sound module according to the interaction request to receive interaction information from the user;
step B, recognizing the sound information collected by the sound module as text, extracting keywords from the text, and judging whether the keywords exist in the logic library; otherwise, ending the process;
step C, if the keyword exists in the logic library, judging whether the extracted keyword is an instruction for acquiring health information;
step D, displaying prompt information corresponding to the keyword to inform a user to use a health sensor module to execute corresponding health information acquisition operation, and displaying an acquired result after the acquisition is finished;
step E, processing the collected facial image information to obtain depth information and a color image of the user, performing redundancy-removal processing on the collected depth information and color image to accurately identify the facial state of the user, and matching the facial state of the user against an expression library to determine the expression information of the user;
step F, controlling the display module and the sound module to execute corresponding operations according to the expression information of the user;
g, controlling an environment sensor module to acquire environment information of a user, transmitting the environment information to a control module, and displaying push information from a cloud platform;
and H, carrying out structural preprocessing on the received environment information, and transmitting the information subjected to the structural preprocessing to the cloud platform.
The step F specifically comprises the following steps: if the expression information is smiling, displaying a smiling expression to the user and playing cheerful music; if the user is worried, displaying a smiling expression, playing relaxing music, and controlling the operating arm to open as if to embrace the user; if the user is angry, displaying a smiling expression and speaking to the user to calm the user down; if the user is surprised, displaying a smiling expression, speaking to the user, and controlling the operating arms to make a soothing motion, specifically moving both hands downwards in front of the chest to soothe the user; and if the user shows no expression, giving no feedback.
In the method for acquiring the household information and the user emotion by the intelligent identification device, step B includes step B1, if an interaction request from the user is not received, an environment information acquisition request is sent to the control module, a request for updating the expression library and the logic library in the control module is sent to the cloud platform, and meanwhile, push information from the cloud platform is received.
In the method for acquiring the household information and the user emotion by the intelligent identification device, the interactive information in the step A comprises the facial image information and the sound information of the user.
In the method for acquiring the household information and the user emotion by the intelligent identification device, the step D includes a step D1, and if the extracted keyword is not the instruction for acquiring the health information, corresponding content is acquired from a remote cloud platform according to the extracted keyword, and the content is displayed to the user.
By adopting the above technical scheme, the intelligent system for collecting household information and user emotion and the identification method offer varied and rich functions. After identifying a user command, the system can acquire the corresponding content from the cloud platform and present it to the user, so it provides functions beyond collecting environment and health information and is readily extensible. Its human-computer interaction is strong and humanized: it can capture the user's facial image and voice information, analyze them to obtain the user's expression and voice commands, and give good feedback to both, which makes it convenient to use. It can also collect the environment information of the user's home and the health information of the user, so that the user gains a more comprehensive understanding of the surrounding environment and his or her physical condition.
Detailed Description
The technical solutions of the embodiments of the present invention are described clearly and completely below; obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments described herein without inventive work fall within the scope of the present invention.
The invention provides an intelligent identification system for collecting household information and user emotion, which comprises a robot body, wherein an operating arm is arranged on the robot body, and the robot body comprises a control module, an environment sensor module, a health sensor module, a power supply module, a camera module, a sound module and a display module:
the control module is used for controlling the activity of the operation arm, acquiring environmental information and/or health information of a user through the environmental sensor module and the health sensor module, performing structured preprocessing on the acquired environmental information and/or the health information of the user, and transmitting the information after the structured preprocessing to the cloud platform, wherein the structured preprocessing is to convert the environmental information and/or the health information of the user into an XML format;
the power supply module is in signal connection with the control module and is used for providing energy for the control module;
the camera module is in signal connection with the control module and is used for acquiring the facial image information of the user according to the human-eye imaging principle and transmitting it to the control module; the control module processes the facial image information acquired by the camera module to obtain depth information and a color image of the user, performs redundancy-removal processing on the acquired depth information and color image to accurately identify the facial state of the user, and matches the facial state against an expression library stored in the control module to determine the expression information of the user;
the sound system comprises a sound module, a control module and a network card, wherein the sound module is in signal connection with the control module and is used for acquiring sound information of a user and transmitting the sound information to the control module, the control module is used for identifying the sound information into characters, extracting keywords from the characters, controlling the display module to display prompt information corresponding to the keywords when the extracted keywords are instructions for acquiring health information so as to inform the user to use a health sensor to execute corresponding operation for acquiring the health information, displaying acquired results after the acquisition is finished, controlling the network card to acquire corresponding content from a remote cloud platform according to the extracted keywords when the extracted keywords are not instructions for acquiring the health information, and controlling the display module to display the content to the user;
and the display module is in signal connection with the control module and is used for displaying prompt information corresponding to the key words when the extracted key words are instructions for acquiring the health information.
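The structured preprocessing performed by the control module converts sensor readings into XML before upload to the cloud platform. A minimal sketch of such a conversion follows; the element names and the `to_xml` helper are illustrative assumptions, since the patent specifies only that the target format is XML.

```python
import xml.etree.ElementTree as ET

def to_xml(readings: dict, kind: str) -> str:
    """Convert raw sensor readings into an XML document for the cloud
    platform. Element names are assumptions; the patent only states
    that environment/health information is converted to XML."""
    root = ET.Element(kind)  # e.g. "environment" or "health"
    for name, value in readings.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

# Example: an environment reading from the temperature/humidity sensors.
xml_doc = to_xml({"temperature": 22.5, "humidity": 41}, "environment")
```

The same helper would serve for health readings (heart rate, blood pressure) by passing `kind="health"`.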
Further, in a preferred embodiment of the intelligent recognition system for collecting home information and user emotion, the environment sensor module includes a temperature sensor, a humidity sensor, a smoke sensor or a light sensor for collecting environment information.
Further, in a preferred embodiment of the intelligent recognition system for collecting home information and user emotion, the health sensor module comprises a heart rate sensor, a blood pressure sensor or a blood sugar sensor for collecting health information.
Further, in a preferred embodiment of the intelligent recognition system for collecting home information and user emotion, the expression information includes smiling, worrying, anger, surprise and blankness.
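Matching the identified facial state against the stored expression library could be sketched as a nearest-neighbour lookup over feature vectors. The two-dimensional feature encoding and the reference vectors below are assumptions for illustration only; the patent does not specify the matching algorithm.

```python
import math

# Hypothetical expression library: expression name -> reference feature
# vector extracted from the depth information and color image.
EXPRESSION_LIBRARY = {
    "smiling":   (0.9, 0.1),
    "worried":   (0.2, 0.8),
    "angry":     (0.1, 0.9),
    "surprised": (0.7, 0.7),
    "blank":     (0.0, 0.0),
}

def match_expression(features):
    """Return the library entry closest to the extracted facial-state
    feature vector (nearest neighbour by Euclidean distance)."""
    return min(EXPRESSION_LIBRARY,
               key=lambda name: math.dist(features, EXPRESSION_LIBRARY[name]))
```

A feature vector near the "smiling" reference would thus be labelled smiling, and one near the origin would be labelled blank.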
Further, in a preferred embodiment of the intelligent recognition system for collecting home information and user emotion, the control module is further configured to control the display module and the sound module to execute corresponding operations according to the expression information of the user.
Further, in a preferred embodiment of the intelligent recognition system for collecting the household information and the user emotion, the robot body further comprises a wireless communication module for realizing interaction between the household information collection and user emotion recognition equipment and the cloud platform.
The intelligent identification system for collecting household information and user emotion is installed in the user's home and is in communication connection with a remote client through a cloud platform, to which it transmits the collected data. The cloud platform is a background server that stores and processes the data collected by the household information and user emotion identification equipment. The remote client is a mobile terminal such as a mobile phone, notebook computer or PDA, and can remotely control the multifunctional household information and user emotion identification equipment and/or view the sensing information it provides.
A method for collecting household information and user emotion by intelligent recognition equipment comprises the following steps:
step A, judging whether an interaction request from a user is received, if so, starting a camera module and a sound module according to the interaction request to receive interaction information from the user;
step B, recognizing the sound information collected by the sound module as text, extracting keywords from the text, and judging whether the keywords exist in the logic library; otherwise, ending the process;
step C, if the keywords exist in the logic library, judging whether the extracted keywords are instructions for acquiring health information, specifically, the instructions for acquiring the health information are instructions for measuring blood pressure, measuring heartbeat and the like;
step D, displaying prompt information corresponding to the keyword to inform a user to use a health sensor module to execute corresponding health information acquisition operation, and displaying an acquired result after the acquisition is finished;
step E, processing the collected facial image information to obtain depth information and a color image of the user, performing redundancy-removal processing on the collected depth information and color image to accurately identify the facial state of the user, and matching the facial state of the user against an expression library to determine the expression information of the user;
step F, controlling the display module and the sound module to execute corresponding operations according to the expression information of the user;
step G, controlling an environment sensor module to collect environment information of a user, transmitting the environment information to a control module, and displaying pushing information from a cloud platform, wherein the environment information comprises temperature, humidity, smoke and the like;
and H, carrying out structural preprocessing on the received environment information, and transmitting the information subjected to the structural preprocessing to the cloud platform.
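Steps B through D above hinge on extracting a keyword from the recognized text and branching on whether it is a health-acquisition instruction (such as measuring blood pressure or heartbeat) or a request for cloud content. A hypothetical sketch follows; the `LOGIC_LIBRARY` contents and action labels are invented for illustration.

```python
# Assumed logic library: keyword -> action category. "health" triggers a
# health-sensor prompt (step D); "cloud" triggers a cloud-platform fetch.
LOGIC_LIBRARY = {
    "blood pressure": "health",
    "heartbeat":      "health",
    "weather":        "cloud",
}

def handle_keyword(text: str) -> str:
    """Return the action for the first known keyword found in the
    recognized speech text, or 'ignore' when no keyword from the logic
    library is present (the process ends, per step B)."""
    for keyword, action in LOGIC_LIBRARY.items():
        if keyword in text:
            return action
    return "ignore"
```

For instance, "please measure my blood pressure" would be routed to the health sensors, while "what is the weather" would be fetched from the cloud platform.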
Wherein the step F specifically comprises the following steps: if the expression information is smiling, displaying a smiling expression to the user and playing cheerful music; if the user is worried, displaying a smiling expression, playing relaxing music, and controlling the operating arm to open as if to embrace the user; if the user is angry, displaying a smiling expression and speaking to the user to calm the user down; if the user is surprised, displaying a smiling expression, speaking to the user, and controlling the operating arms to make a soothing motion, specifically moving both hands downwards in front of the chest to soothe the user; and if the user shows no expression, giving no feedback.
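The expression-to-feedback rules of step F can be tabulated directly. The entry structure below is an assumption; the actions themselves follow the step F description.

```python
# Step F as a lookup table. Every recognized expression elicits a smiling
# display; audio and arm actions vary; a blank face gets no feedback.
FEEDBACK = {
    "smiling":   {"display": "smile", "audio": "cheerful music", "arm": None},
    "worried":   {"display": "smile", "audio": "relaxing music", "arm": "open arms to embrace"},
    "angry":     {"display": "smile", "audio": "calming speech", "arm": None},
    "surprised": {"display": "smile", "audio": "speech",         "arm": "soothing motion"},
    "blank":     None,
}

def feedback_for(expression: str):
    """Return the feedback actions for an expression, or None for a
    blank face or an expression outside the library."""
    return FEEDBACK.get(expression)
```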
Further, in a preferred embodiment of the method for collecting home information and user emotion by using an intelligent recognition device of the present invention, step B includes step B1, and if an interaction request from a user is not received, a request for collecting environmental information is sent to a control module, a request for updating an expression library and a logic library in the control module is sent to a cloud platform, and push information from the cloud platform is received.
Further, in a preferred embodiment of the method for collecting home information and emotion of a user by using an intelligent recognition device of the present invention, the interactive information in step A includes the facial image information and voice information of the user.
Further, in a preferred embodiment of the method for collecting home information and user emotion by using an intelligent recognition device of the present invention, step D includes step D1, if the extracted keyword is not an instruction for collecting health information, acquiring corresponding content from a remote cloud platform according to the extracted keyword, and displaying the content to the user.
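Taken together, steps A through H form one interaction cycle. The sketch below wires the steps into a loop body against a stub `HomeRobot` class whose method names and canned return values are purely illustrative stand-ins for the modules described above.

```python
class HomeRobot:
    """Minimal stand-in for the robot body; all methods and their
    behaviours are illustrative assumptions, not the patented design."""

    def __init__(self):
        self.log = []

    def has_interaction_request(self):          # step A
        return True

    def capture(self):                          # camera + sound modules
        return "face-image", "measure my blood pressure"

    def match_logic_library(self, text):        # steps B and C
        return "health" if "blood pressure" in text else "cloud"

    def prompt_and_collect_health(self):        # step D
        self.log.append("health prompt shown")

    def recognize_expression(self, image):      # step E
        return "smiling"

    def feedback(self, expression):             # step F
        self.log.append(f"feedback for {expression}")

    def collect_environment(self):              # step G
        return {"temperature": 22.5}

    def upload(self, env):                      # step H (after XML preprocessing)
        self.log.append(f"uploaded {env}")


def interaction_cycle(robot):
    """One pass through steps A-H of the claimed method."""
    if robot.has_interaction_request():
        image, sound = robot.capture()
        if robot.match_logic_library(sound) == "health":
            robot.prompt_and_collect_health()
        robot.feedback(robot.recognize_expression(image))
    robot.upload(robot.collect_environment())

robot = HomeRobot()
interaction_cycle(robot)
```

Environment collection and upload (steps G and H) run on every cycle, whether or not an interaction request arrived.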
The intelligent system for acquiring the household information and the user emotion and the identification method have various and rich functions, and can acquire corresponding content from the cloud platform after identifying the command of the user and present the content to the user, so that the intelligent system has more functions except for acquiring the environmental information and the health information and has stronger expandability; the human-computer interaction capability is strong, the human-computer interaction is more humanized, the head portrait and the voice information of the user can be captured, the expression of the user and the voice command of the user can be obtained by further analyzing the head portrait and the voice information, the expression and the voice command of the user can be well fed back, and the use of the user is facilitated; the environment information of the user's home and the health information of the user can be collected, so that the user can more comprehensively know the information of the environment where the user is and the physical condition of the user.
The foregoing is only a preferred embodiment of the present invention. It should be noted that a person skilled in the art can make various modifications and embellishments without departing from the technical principle of the invention, and such modifications and embellishments also fall within the protection scope of the present invention.
Claims (10)
1. An intelligent recognition system for collecting household information and user emotion, comprising a robot body on which an operating arm is arranged, characterized in that the robot body comprises a control module, an environment sensor module, a health sensor module, a power supply module, a camera module, a sound module and a display module:
the control module is used for controlling the activity of the operation arm, acquiring environmental information and/or health information of a user through the environmental sensor module and the health sensor module, performing structured preprocessing on the acquired environmental information and/or the health information of the user, and transmitting the information after the structured preprocessing to the cloud platform;
the power supply module is in signal connection with the control module and is used for providing energy for the control module;
the camera module is in signal connection with the control module and is used for acquiring the facial image information of the user according to the human-eye imaging principle and transmitting the facial image information to the control module;
the sound module is in signal connection with the control module and is used for collecting sound information of a user and transmitting the sound information to the control module;
and the display module is in signal connection with the control module and is used for displaying prompt information corresponding to the keywords when an extracted keyword is an instruction for acquiring health information.
2. The system of claim 1, wherein the environmental sensor module comprises a temperature sensor, a humidity sensor, a smoke sensor or a light sensor for collecting environmental information.
3. The system of claim 1, wherein the health sensor module comprises a heart rate sensor, a blood pressure sensor or a blood sugar sensor for collecting health information.
4. The system of claim 1, wherein the facial expression information comprises smile, worry, anger, surprise and blankness.
5. The system of claim 1, wherein the control module is further configured to control the display module and the sound module to perform corresponding operations according to the expression information of the user.
6. The system of claim 1, wherein the robot further comprises a wireless communication module for realizing interaction between the system and the cloud platform.
7. A method for collecting household information and user emotion with an intelligent recognition device, comprising the following steps:
step A, judging whether an interaction request from a user is received, if so, starting a camera module and a sound module according to the interaction request to receive interaction information from the user;
step B, recognizing the sound information collected by the sound module as text, extracting keywords from the text, and judging whether the keywords exist in a logic library; if not, ending the process;
step C, if the keyword exists in the logic library, judging whether the extracted keyword is an instruction for acquiring health information;
step D, displaying prompt information corresponding to the keyword to inform a user to use a health sensor module to execute corresponding health information acquisition operation, and displaying an acquired result after the acquisition is finished;
step E, processing the collected head portrait information to obtain depth information and a color image of the user, performing redundancy removal processing on the collected depth information and the collected color image to accurately identify the facial state of the user, and matching the facial state of the user with an expression library to determine expression information of the user;
step F, controlling the display module and the sound module to execute corresponding operations according to the expression information of the user;
step G, controlling the environmental sensor module to acquire environmental information of the user, transmitting the environmental information to the control module, and displaying push information from the cloud platform;
step H, performing structured preprocessing on the received environmental information, and transmitting the preprocessed information to the cloud platform.
8. The method as claimed in claim 7, wherein the step B comprises a step B1: if no interaction request is received from the user, sending a request for acquiring environmental information to the control module, sending a request to the cloud platform for updating the expression library and the logic library in the control module, and receiving push information from the cloud platform.
9. The method as claimed in claim 7, wherein the interactive information in the step A comprises head portrait information and sound information of the user.
10. The method as claimed in claim 7, wherein the step D comprises a step D1: if the extracted keyword is not an instruction for acquiring health information, acquiring corresponding content from the remote cloud platform according to the extracted keyword, and displaying the content to the user.
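For illustration only, the interaction flow recited in steps A through H of claim 7 can be sketched as a simple dispatch routine. Everything below is hypothetical: the logic-library entries, the expression labels, and all function names are illustrative assumptions, not part of the patent's actual implementation.

```python
"""Minimal sketch of the claimed interaction flow (steps A-H).

All names here are hypothetical illustrations of the claim language,
not the patent's implementation.
"""

# Hypothetical logic library: keyword -> prompt shown on the display module.
LOGIC_LIBRARY = {
    "blood pressure": "Please place your arm in the blood pressure cuff.",
    "heart rate": "Please rest your finger on the heart-rate sensor.",
}
# Keywords that count as instructions for acquiring health information (step C).
HEALTH_KEYWORDS = {"blood pressure", "heart rate"}

# Expression library used in step E (labels taken from claim 4).
EXPRESSION_LIBRARY = ["smile", "worry", "anger", "surprise", "blankness"]


def extract_keyword(text):
    """Step B: naive keyword extraction - return the first library entry found."""
    for keyword in LOGIC_LIBRARY:
        if keyword in text.lower():
            return keyword
    return None


def handle_interaction(request_text, face_state):
    """Steps A-F: dispatch a user request and classify the facial state."""
    actions = []
    keyword = extract_keyword(request_text)
    if keyword is None:
        return actions  # step B: keyword not in the logic library, end the process
    if keyword in HEALTH_KEYWORDS:  # step C
        actions.append(("display", LOGIC_LIBRARY[keyword]))  # step D: prompt the user
    else:
        actions.append(("fetch_cloud", keyword))  # claim 10: query the cloud platform
    # Step E: match the recognised facial state against the expression library.
    expression = face_state if face_state in EXPRESSION_LIBRARY else "blankness"
    actions.append(("expression", expression))  # step F: react to the expression
    return actions


def structured_preprocess(env_readings):
    """Step H: pack raw sensor readings into a structured record for upload."""
    return {"type": "environment", "readings": dict(env_readings)}
```

A health-related request would thus produce a display prompt plus an expression action, while a request whose keyword is absent from the logic library ends the process with no actions, mirroring the branch structure of claims 7 and 10.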
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011327099.8A CN112633067A (en) | 2020-11-24 | 2020-11-24 | Intelligent system for collecting household information and user emotion and identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112633067A true CN112633067A (en) | 2021-04-09 |
Family
ID=75303873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011327099.8A Withdrawn CN112633067A (en) | 2020-11-24 | 2020-11-24 | Intelligent system for collecting household information and user emotion and identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112633067A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104102346A (en) * | 2014-07-01 | 2014-10-15 | 华中科技大学 | Household information acquisition and user emotion recognition equipment and working method thereof |
WO2018023517A1 (en) * | 2016-08-04 | 2018-02-08 | 易晓阳 | Voice interactive recognition control system |
CN109101663A (en) * | 2018-09-18 | 2018-12-28 | 宁波众鑫网络科技股份有限公司 | A kind of robot conversational system Internet-based |
CN109308466A (en) * | 2018-09-18 | 2019-02-05 | 宁波众鑫网络科技股份有限公司 | The method that a kind of pair of interactive language carries out Emotion identification |
KR20190048593A (en) * | 2017-10-31 | 2019-05-09 | 부산대학교 산학협력단 | Method and System for Smart Mirror-based Privacy Healthcare Information Protection using cameras and microphones |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20210409 ||