CN107943272A - A kind of intelligent interactive system - Google Patents
A kind of intelligent interactive system
- Publication number
- CN107943272A CN107943272A CN201610892062.7A CN201610892062A CN107943272A CN 107943272 A CN107943272 A CN 107943272A CN 201610892062 A CN201610892062 A CN 201610892062A CN 107943272 A CN107943272 A CN 107943272A
- Authority
- CN
- China
- Prior art keywords
- user
- submodule
- module
- sent
- voice
- Prior art date
- 2016-10-12
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention, which falls within the field of electronic technology, provides an intelligent interactive system comprising a client and a background server. The client analyzes the user's emotional state and health status from user input data, conducts human-computer interaction with the user through a multimedia combination corresponding to the emotional state, and feeds back corresponding medical advice to the user through that interaction according to the health status. The background server analyzes and processes the user demands that the client obtains through human-computer interaction. By adapting the multimedia combination to the user's emotional state, the technical scheme avoids mechanized human-computer interaction, diversifies the functions of the interactive process and improves the user's intelligent experience; it further analyzes the user's health status in combination with the user's emotional changes and feeds back corresponding medical advice through multimedia human-computer interaction, thereby raising the intelligence level of medical detection and analysis.
Description
Technical field
The present invention relates to the field of electronic technology, and in particular to an intelligent interactive system.
Background technology
With the progress of science and the development of electronic information technology, the intelligent industry represented by robotics is booming and has become an important symbol of modern technological innovation. Intelligent robots have entered the home-service industry, providing functions such as remote monitoring, intelligent security, smart home control, companion chat and audiovisual entertainment.
However, existing intelligent robots still have many deficiencies. On the one hand, their human-computer interaction is stiff, often offering only mechanized push-button selection of fixed service functions; on the other hand, they cannot provide medical index detection, nor can they conduct intelligent analysis, evaluation and human-computer interaction based on detection results and the user's real-time emotional state.
Summary of the invention
The object of the present invention is to provide an intelligent interactive system, intended to solve the prior-art problems of stiff human-computer interaction and insufficiently intelligent medical detection and analysis in intelligent robots.
The present invention provides an intelligent interactive system including a client and a background server, the client being connected to the background server by wireless communication, wherein:
The client is configured to receive user input data, analyze the user's emotional state and health status from the user input data, conduct human-computer interaction with the user through a multimedia combination corresponding to the emotional state, and feed back corresponding medical advice to the user through the human-computer interaction according to the health status.
The background server is configured to analyze and process the user demands that the client obtains through human-computer interaction, and to feed the processing results back to the user through the client.
Compared with the prior art, the present invention has the following beneficial effects. By analyzing user input data to determine the user's emotional state and health status, and interacting with the user through a multimedia combination matched to that emotional state, the mechanization of human-computer interaction is avoided, the functions of the interactive process are diversified, and the user's intelligent experience is improved. At the same time, a medical detection function is integrated: the user's health status is analyzed in combination with the user's emotional changes, and corresponding medical advice is fed back to the user through multimedia human-computer interaction according to the health status, thereby raising the intelligence level of medical detection and analysis.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of an intelligent interactive system provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic structural diagram of an intelligent interactive system provided by Embodiment 2 of the present invention.
Detailed description of the embodiments
To make the objects, technical schemes and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The implementation of the present invention is described in detail below with reference to the specific drawings.
Embodiment one
Fig. 1 is a schematic structural diagram of an intelligent interactive system provided by Embodiment 1 of the present invention; for convenience of description, only the parts relevant to this embodiment are shown. The exemplary intelligent interactive system 100 of Fig. 1 includes a client 110 and a background server 120. Each functional module is described in detail as follows:
(1) Client 110
Configured to receive user input data, analyze the user's emotional state and health status from the user input data, conduct human-computer interaction with the user through a multimedia combination corresponding to the user's emotional state, and feed back corresponding medical advice to the user through human-computer interaction according to the user's health status.
Specifically, the user input data received by the client 110 includes data such as the user's voice, text, images, video and actions. The client 110 analyzes the received data and judges the user's emotional state and health status. According to the user's emotional state, the client 110 interacts with the user through a suitable multimedia combination; for example, the intonation and tone of the voice output to the user can be transformed in real time as the user's mood changes.
The user input data received by the client 110 also includes the user's medical detection samples. The client 110 can perform detection and analysis of medical indexes on the received samples, such as routine blood and urine tests, and judges the user's health status from the detection results in combination with the user's emotional state. According to the user's health status, the client 110 feeds back corresponding medical advice through a suitable multimedia combination.
(2) Background server 120
Configured to analyze and process the user demands that the client 110 obtains through human-computer interaction, and to feed the processing results back to the user through the client 110.
Specifically, the client 110 sends the user demand information obtained through human-computer interaction to the background server 120; the background server 120 processes the demand information and returns the processing result to the client 110, which feeds it back to the user through multimedia interaction.
As can be seen from the exemplary intelligent interactive system of Fig. 1, on the one hand, the user's emotional state and health status are determined by analyzing user input data, and human-computer interaction is conducted through a multimedia combination corresponding to the user's emotional state, which avoids mechanized interaction, diversifies the functions of the interactive process and improves the user's intelligent experience; on the other hand, a medical detection function is integrated, the user's health status is analyzed in combination with the user's emotional changes, and corresponding medical advice is fed back to the user through multimedia human-computer interaction according to the health status, thereby raising the intelligence level of medical detection and analysis.
Embodiment two
Fig. 2 is a schematic structural diagram of an intelligent interactive system provided by Embodiment 2 of the present invention; for convenience of description, only the parts relevant to this embodiment are shown. The exemplary intelligent interactive system 200 of Fig. 2 includes a client 210 and a background server 220. Each functional module is described in detail as follows:
(1) Client 210
Configured to receive user input data, analyze the user's emotional state and health status from the user input data, conduct human-computer interaction with the user through a multimedia combination corresponding to the user's emotional state, and feed back corresponding medical advice to the user through human-computer interaction according to the user's health status.
Specifically, the user input data received by the client 210 includes data such as the user's voice, text, images, video and actions. The client 210 analyzes the received data and judges the user's emotional state and health status. According to the user's emotional state, the client 210 interacts with the user through a suitable multimedia combination; for example, the intonation and tone of the voice output to the user can be transformed in real time as the user's mood changes.
The user input data received by the client 210 also includes the user's medical detection samples. The client 210 can perform detection and analysis of medical indexes on the received samples, such as routine blood and urine tests, and judges the user's health status from the detection results in combination with the user's emotional state. According to the user's health status, the client 210 feeds back corresponding medical advice through a suitable multimedia combination.
Further, the client 210 includes a control module 211, a multimedia interactive module 212, a fingerprint module 213, a medical detection module 214 and a wireless module 215. Each functional module is described in detail as follows:
A1) Multimedia interactive module 212
Configured to complete multimedia information interaction with the user.
Specifically, the multimedia interactive module 212 completes the multimedia information interaction with the user through multimedia means such as voice, images, video, text and actions.
Further, the multimedia interactive module includes a vision submodule 2121, a voice submodule 2122 and a touch display submodule 2123. The function of each submodule is described in detail as follows:
A1-1) Vision submodule 2121
Configured to obtain the user's video data and send the video data to the control module 211 for processing.
Specifically, the vision submodule 2121 can obtain the user's video data through a camera; this video data may include video images of the user as well as video of the user's surroundings.
Further, the vision submodule 2121 may include a holder (pan-tilt) unit 501 and a video acquisition unit 502, where the holder unit 501 adjusts the shooting angle of the video according to instructions from the control module 211 and the video acquisition unit 502 obtains the video data.
A1-2) Voice submodule 2122
Configured to obtain the user's voice data, send the voice data to the control module 211 for processing, receive the response data sent by the control module 211 and complete voice interaction with the user.
Specifically, the voice submodule 2122 can obtain the user's voice data through a microphone and output voice to the user through a loudspeaker, completing the voice interaction with the user.
Further, the voice submodule 2122 may include a voice output unit 601 and a voice acquisition unit 602, where the voice output unit 601 outputs sound feedback information according to instructions from the control module 211 and the voice acquisition unit 602 obtains the voice data.
A1-3) Touch display submodule 2123
Configured to obtain the information the user inputs through the touch screen, send this information to the control module 211 for processing, and provide multimedia interactive information to the user according to the processing results sent by the control module 211.
Specifically, the user can input information through the function menus of the touch screen. The touch display submodule 2123 sends the obtained user input to the control module 211 for processing; after the control module 211 returns the processing result, the touch display submodule 2123 presents multimedia interactive information to the user on the touch screen by means of text, images, video and the like.
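As a structural sketch only (the patent prescribes functions, not an implementation), the three submodules and their hand-off to the control module could be modeled as below; every class and method name here is a hypothetical illustration.

```python
class ControlModule:
    """Minimal stand-in for control module 211: receives submodule data."""
    def process(self, source: str, payload) -> dict:
        print(f"[control 211] from {source}: {payload!r}")
        return {"status": "ok", "echo": payload}

class VisionSubmodule:                      # submodule 2121
    def __init__(self, control: ControlModule):
        self.control = control
    def capture_frame(self, frame: bytes) -> dict:
        return self.control.process("vision", frame)

class VoiceSubmodule:                       # submodule 2122
    def __init__(self, control: ControlModule):
        self.control = control
    def capture_audio(self, pcm: bytes) -> dict:
        return self.control.process("voice", pcm)
    def speak(self, text: str) -> None:
        print(f"[voice out] {text}")

class TouchDisplaySubmodule:                # submodule 2123
    def __init__(self, control: ControlModule):
        self.control = control
    def on_user_input(self, menu_item: str) -> None:
        result = self.control.process("touch", menu_item)
        print(f"[display] rendering {result}")

control = ControlModule()
TouchDisplaySubmodule(control).on_user_input("blood_routine_menu")
```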
A2) Fingerprint module 213
Configured to obtain the user's fingerprint data and send the fingerprint data to the control module 211 for processing.
A3) Medical detection module 214
Configured to complete medical detection projects and send the detection results to the control module 211 for processing.
Specifically, the medical detection module 214 provides the user with a medical detection interface: the user selects the desired detection project according to the operation prompts and places the detection sample at the test port; the medical detection module 214 obtains the sample from the test port, completes the medical detection project and sends the detection result to the control module 211 for processing.
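A minimal sketch of this workflow, assuming hypothetical project names and a stubbed test-port driver (none of which appear in the patent):

```python
from typing import Callable

class MedicalDetectionModule:
    """Sketch of module 214: offer projects, read the test port, report."""
    PROJECTS = {"blood_routine", "urine_routine"}

    def __init__(self, send_to_control: Callable[[dict], None]):
        self.send_to_control = send_to_control

    def read_test_port(self) -> dict:
        # Stub: a real device driver would acquire the sample here.
        return {"hemoglobin_g_per_L": 98.0}

    def run(self, selected_project: str) -> None:
        if selected_project not in self.PROJECTS:
            raise ValueError(f"unknown project: {selected_project}")
        values = self.read_test_port()
        # Hand the detection result off to control module 211.
        self.send_to_control({"project": selected_project, "values": values})

MedicalDetectionModule(print).run("blood_routine")
```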
A4) Control module 211
Configured to control the multimedia interactive module 212 to obtain the user's multimedia information; to analyze and process the multimedia information sent by the multimedia interactive module 212 and feed the processing results back to the multimedia interactive module 212; to determine the user's authority according to the multimedia information sent by the multimedia interactive module 212 and the fingerprint data sent by the fingerprint module 213; and to analyze the user's health condition and provide corresponding medical advice according to the multimedia information sent by the multimedia interactive module 212 and the detection results sent by the medical detection module 214.
Specifically, the control module 211 controls the vision submodule 2121 and the voice submodule 2122 to capture the user's video and sound, and controls the holder unit 501 in the vision submodule 2121 to adjust the shooting angle of the video and track the user's movement.
From the user video data obtained by the vision submodule 2121, the control module 211 can perform face recognition, expression recognition and action recognition on the user, control the touch display submodule 2123 to adjust the presentation of the multimedia human-computer interaction according to the recognition results, and control the voice output unit 601 of the voice submodule 2122 to adjust the tone and intonation of the voice information. For example, if the vision submodule 2121 recognizes the user as a child who is currently in a happy emotional state, the touch display submodule 2123 can switch the imagery of the human-computer interaction to a cartoon style appealing to children, and the voice submodule 2122 can output voice that simulates a child's tone and intonation.
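The child/cartoon example above amounts to a policy lookup from (user category, emotional state) to presentation parameters; a toy version, with invented policy entries, might look like this:

```python
# Invented policy table: the patent's own example maps a happy child to
# cartoon imagery and a child-like voice; all other entries are made up.
PRESENTATION_POLICY = {
    ("child", "happy"): {"imagery": "cartoon", "voice": "childlike", "pitch": 1.3},
    ("child", "sad"):   {"imagery": "soothing_cartoon", "voice": "gentle", "pitch": 1.1},
    ("adult", "angry"): {"imagery": "calm", "voice": "soft", "pitch": 0.9},
}
DEFAULT_STYLE = {"imagery": "standard", "voice": "neutral", "pitch": 1.0}

def select_presentation(user_category: str, emotion: str) -> dict:
    """Choose display style and voice parameters from recognition results."""
    return PRESENTATION_POLICY.get((user_category, emotion), DEFAULT_STYLE)

print(select_presentation("child", "happy"))
```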
The control module 211 analyzes the detection results of the medical detection projects sent by the medical detection module 214 to obtain the user's physical condition information, records and tracks the user's physical condition, and, in combination with the user's expression information obtained by the vision submodule 2121, gives suitable medical advice and timely reminders. For example, when the control module 211 finds that a detection result is outside the normal reference range and the vision submodule 2121 detects that the user is currently in a pained expression state, the control module 211 can control the touch display submodule 2123 to present text or image prompts advising the user to seek medical treatment as soon as possible, or output a corresponding voice prompt through the voice submodule 2122.
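Sketched as code, with an illustrative reference range that should not be read as medical fact, the decision combines out-of-range results with the observed expression:

```python
from typing import Optional

NORMAL_RANGES = {"hemoglobin_g_per_L": (115.0, 150.0)}  # illustrative only

def medical_advice(results: dict, expression: str) -> Optional[str]:
    """Combine detection results with the observed expression, as the
    control module 211 does, to decide what prompt to give the user."""
    abnormal = [
        name for name, value in results.items()
        if name in NORMAL_RANGES
        and not NORMAL_RANGES[name][0] <= value <= NORMAL_RANGES[name][1]
    ]
    if abnormal and expression == "pain":
        return "Please seek medical treatment as soon as possible."
    if abnormal:
        return "Some indexes are out of range; consider a follow-up check."
    return None  # nothing to advise

print(medical_advice({"hemoglobin_g_per_L": 98.0}, "pain"))
```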
According to the fingerprint data sent by the fingerprint module 213, combined with the user facial feature information obtained by the vision submodule 2121, the control module 211 confirms the current user's authority. Users can register and set corresponding authorities by pre-entering fingerprints and face images. The control module 211 provides the corresponding grade of service according to the current user's authority and refuses to provide service to users without authority, thereby preventing stored data and information from being illegally obtained or leaked without permission and improving data security.
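A minimal sketch of the combined check, assuming a hypothetical enrollment registry; the point is that service is granted only when fingerprint and face resolve to the same registered user:

```python
from typing import Optional

# Hypothetical enrollment records created when a user pre-enters a
# fingerprint and face image and is assigned an authority level.
REGISTRY = {
    "alice": {"fingerprint_id": "fp-01", "face_id": "face-01", "level": 2},
}

def authorize(fingerprint_id: str, face_id: str) -> Optional[int]:
    """Return a service level only if fingerprint AND face both match the
    same registered user; otherwise refuse service (None)."""
    for record in REGISTRY.values():
        if (record["fingerprint_id"] == fingerprint_id
                and record["face_id"] == face_id):
            return record["level"]
    return None

print(authorize("fp-01", "face-01"))  # -> 2 (grant level-2 service)
print(authorize("fp-01", "face-99"))  # -> None (refuse service)
```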
Further, the control module 211 includes a visual analysis submodule 2111, a speech analysis submodule 2112, a fingerprint recognition submodule 2113 and a control response submodule 2114. The function of each submodule is described in detail as follows:
A4-1) Visual analysis submodule 2111
Configured to analyze the user's emotional state from the video data obtained by the vision submodule 2121 and send the analysis results to the control response submodule 2114.
Specifically, the visual analysis submodule 2111 controls the holder unit 501 of the vision submodule 2121 to adjust the shooting angle, obtains video data through the video acquisition unit 502 of the vision submodule 2121, performs face recognition and tracking on the user from the video data, analyzes and recognizes the user's expressions and actions, determines the user's emotional state, and sends the analysis results to the control response submodule 2114.
Further, the visual analysis submodule 2111 may include a face recognition unit 701, an expression recognition unit 702, an action recognition unit 703 and a video control unit 704. The function of each unit is described in detail as follows:
B1) Face recognition unit 701
Configured to control the vision submodule 2121 to perform face recognition on the user and send the recognized face data to the control response submodule 2114 for processing.
B2) Expression recognition unit 702
Configured to analyze the recognized face data of the user, obtain the user's expression information and send the expression information to the control response submodule 2114 for processing.
Specifically, the expression recognition unit 702 judges the category of the user's expression by analyzing the user's face data. The default expression categories may cover the main expressions of happy, surprised, angry, disgusted, fearful, sad and neutral, but are not limited to these; the specific expression categories can be configured according to the practical application scenario and are not limited herein.
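As a toy stand-in for such a classifier, with fabricated two-dimensional "features" in place of a real facial-feature extractor, a nearest-centroid rule over the seven default categories could look like:

```python
import math

# Fabricated 2-D centroids, one per default expression category.
EXPRESSION_CENTROIDS = {
    "happy": (0.9, 0.1), "surprised": (0.7, 0.8), "angry": (0.1, 0.9),
    "disgusted": (0.2, 0.6), "fearful": (0.3, 0.8), "sad": (0.1, 0.2),
    "neutral": (0.5, 0.5),
}

def classify_expression(face_features) -> str:
    """Nearest-centroid stand-in for a real expression classifier."""
    return min(EXPRESSION_CENTROIDS,
               key=lambda c: math.dist(face_features, EXPRESSION_CENTROIDS[c]))

print(classify_expression((0.85, 0.15)))  # -> "happy"
```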
B3) Action recognition unit 703
Configured to analyze the user's actions from the video data obtained by the vision submodule 2121 and send the action information to the control response submodule 2114 for processing.
Specifically, the action recognition unit 703 judges the category of the user's action by analyzing the video data. The default action categories may cover standing, falling and waving, but are not limited to these; the specific action categories can be configured according to the practical application scenario and are not limited herein.
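One crude way to separate these categories, offered purely as an assumption-laden heuristic rather than the patent's method, is to track the aspect ratio of the detected person's bounding box across frames:

```python
def classify_action(bbox_wh_history) -> str:
    """Toy heuristic on (width, height) of the tracked bounding box:
    wider-than-tall suggests a fall, much-taller-than-wide suggests
    standing. All thresholds are invented for illustration."""
    if not bbox_wh_history:
        return "unknown"
    width, height = bbox_wh_history[-1]
    aspect = width / height
    if aspect > 1.2:
        return "falling"
    if aspect < 0.6:
        return "standing"
    return "waving"  # placeholder for other in-between motion patterns

print(classify_action([(0.4, 1.7), (0.9, 1.2), (1.6, 0.8)]))  # -> "falling"
```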
B4) Video control unit 704
Configured to control the vision submodule 2121 to adjust visual parameters and position information according to the processing results returned by the control response submodule 2114.
A4-2) Speech analysis submodule 2112
Configured to identify the user's demand information from the voice data obtained by the voice submodule 2122 and send the recognition results to the control response submodule 2114.
Further, the speech analysis submodule 2112 may include a sound source localization unit 801 and a voice recognition unit 802. The function of each unit is described in detail as follows:
C1) Sound source localization unit 801
Configured to judge, from the voice data obtained by the voice submodule 2122, the direction and position from which the voice data originates, and send the judgment results to the control response submodule 2114 for processing;
C2) Voice recognition unit 802
Configured to perform speech recognition on the voice data obtained by the voice submodule 2122 and send the recognition results to the control response submodule 2114 for processing.
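Sound-source localization with two microphones is classically done from the time-difference of arrival (TDOA); the sketch below uses the generic far-field formula sin θ = c·Δt/d, which is standard signal-processing practice rather than text from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def bearing_from_delay(delay_s: float, mic_spacing_m: float) -> float:
    """Far-field bearing estimate for a two-microphone array:
    sin(theta) = c * delay / d, clamped to a valid range."""
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * delay_s / mic_spacing_m))
    return math.degrees(math.asin(s))

# A wavefront arriving 0.1 ms earlier at one mic of a 15 cm array:
print(f"{bearing_from_delay(1e-4, 0.15):.1f} degrees off the array axis")
```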
A4-3) Fingerprint recognition submodule 2113
Configured to recognize the fingerprint data obtained by the fingerprint module 213 and send the recognition results to the control response submodule 2114.
Specifically, the fingerprint recognition submodule 2113 compares the fingerprint data collected by the fingerprint module 213 against the stored legal fingerprint data of registered users, judges whether the fingerprint input by the current user is legal, and sends the judgment result to the control response submodule 2114.
A4-4) Control response submodule 2114
Configured to receive the data sent by the visual analysis submodule 2111, the speech analysis submodule 2112, the fingerprint recognition submodule 2113, the touch display submodule 2123 and the medical detection module 214; to send the demand information identified by the speech analysis submodule 2112 to the background server 220 for processing; to record, classify, identify and analyze the received data; and to control, according to the processing results, the touch display submodule 2123, the vision submodule 2121 and the voice submodule 2122 to complete the multimedia information interaction with the user.
Further, the control response submodule 2114 may include an interactive input analysis unit 901, a behavior feedback processing unit 902 and an interactive response output unit 903. The function of each unit is described in detail as follows:
D1) Interactive input analysis unit 901
Configured to analyze and process the data sent by the visual analysis submodule 2111, the speech analysis submodule 2112, the fingerprint recognition submodule 2113, the touch display submodule 2123 and the medical detection module 214, and send the processing results to the behavior feedback processing unit 902.
Specifically, the data received by the interactive input analysis unit 901 includes expression data, face recognition data, fingerprint data, medical detection data, voice data and the user demand data input through the touch screen. The interactive input analysis unit 901 analyzes the received data, judges the user's category and permission level from the analysis results, and sends the judgment results to the behavior feedback processing unit 902 for processing.
D2) Behavior feedback processing unit 902
Configured to organize, according to the processing results sent by the interactive input analysis unit 901, the corresponding interactive information to be fed back to the user; the interactive information includes feedback behavior, feedback content and feedback presentation, and is sent to the interactive response output unit 903.
Specifically, the behavior feedback processing unit 902 organizes the corresponding interactive information according to the processing results sent by the interactive input analysis unit 901 and the processing results for the user demand returned by the background server 220, including determining the feedback behavior and feedback presentation according to the user's category and determining the feedback content according to the user's permission level, and sends the interactive information to the interactive response output unit 903 for output.
D3) Interactive response output unit 903
Configured to feed back, according to the interactive information sent by the behavior feedback processing unit 902, the feedback behavior and feedback content in the interactive information to the user through the touch display submodule 2123 and the voice submodule 2122, following the feedback presentation.
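Taken together, units 901 to 903 form a small pipeline: fuse and classify the inputs, organize feedback by category and permission level, then route it to the output submodules. A compressed sketch, with every rule invented for illustration:

```python
def interactive_input_analysis(data: dict) -> dict:
    """Unit 901 (sketch): derive user category and permission level."""
    age = data.get("face", {}).get("age", 99)
    return {
        "category": "child" if age < 12 else "adult",
        "level": 2 if data.get("fingerprint_ok") else 0,
        "demand": data.get("speech", ""),
    }

def behavior_feedback(analysis: dict, server_result: str) -> dict:
    """Unit 902 (sketch): pick behavior and presentation by category,
    gate the content by permission level."""
    allowed = analysis["level"] > 0
    return {
        "behavior": "answer" if allowed else "refuse",
        "content": server_result if allowed else "Access denied.",
        "presentation": ("cartoon+childlike_voice"
                         if analysis["category"] == "child"
                         else "standard+neutral_voice"),
    }

def interactive_response_output(feedback: dict) -> None:
    """Unit 903 (sketch): route to the display and voice submodules."""
    print(f"[display/{feedback['presentation']}] {feedback['content']}")

analysis = interactive_input_analysis(
    {"face": {"age": 8}, "fingerprint_ok": True, "speech": "play a song"})
interactive_response_output(behavior_feedback(analysis, "Playing a song."))
```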
A5) Wireless module 215
Configured to provide a wireless network connection interface.
Specifically, the wireless module 215 can integrate wireless connection functions such as 4G networks, Wireless Fidelity (WiFi) and Bluetooth.
(2) Background server 220
Configured to analyze and process the user demands that the client 210 obtains through human-computer interaction, and to feed the processing results back to the user through the client 210.
Specifically, the client 210 sends the user demand information identified by the speech analysis submodule 2112 to the background server 220; the background server 220 processes the demand information and returns the processing results to the client 210, which feeds them back to the user through multimedia interaction.
Further, after obtaining the user's voice data through the voice submodule 2122, the client 210 can also send the voice request directly to the background server 220 through the wireless module 215; the background server 220 then completes the identification and processing of the user demand and returns the processing results to the client 210.
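A runnable toy of this direct request path, with the transport replaced by an in-process call and a JSON wire format that is purely an assumption:

```python
import json

class BackgroundServer:
    """Stand-in for server 220: 'recognizes' and processes a voice request."""
    def handle(self, request_json: str) -> str:
        request = json.loads(request_json)
        text = request["audio_transcript"]   # recognition itself stubbed out
        return json.dumps({"result": f"processed demand: {text!r}"})

class Client:
    """Stand-in for client 210 sending a voice request to the server."""
    def __init__(self, server: BackgroundServer):
        self.server = server
    def send_voice_request(self, transcript: str) -> dict:
        # In the patent this travels over the wireless module 215
        # (4G/WiFi/Bluetooth); here the call is in-process so it runs.
        reply = self.server.handle(json.dumps({"audio_transcript": transcript}))
        return json.loads(reply)

print(Client(BackgroundServer()).send_voice_request("what is the weather"))
```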
Further, the system also includes a monitoring server 230 and a backup server 240, with the client 210 connected to the monitoring server 230 and to the backup server 240 by wireless communication. Each functional module is described in detail as follows:
(3) Monitoring server 230
Configured to monitor the operating status of the intelligent interactive system.
Specifically, by monitoring the operating status of the whole intelligent interactive system, the monitoring server 230 ensures the normal operation of the system and gives timely early warning of any abnormal conditions that arise.
(4) Backup server 240
Configured to back up the interaction data generated by the human-computer interaction of the client 210.
Specifically, by backing up the interaction data, the backup server 240 avoids losses to the user caused by accidental loss of the data stored in the client 210.
As can be seen from the exemplary intelligent interactive system of Fig. 2, on the one hand, face recognition, expression recognition and action recognition of the user are completed by analyzing multimedia data such as the user's video, audio, text, images and actions; the user's emotional state is determined from the recognition results, and the affective mode of the human-computer interaction is transformed in real time according to that state, so that personalized multimedia combinations are used for different users and for the different emotional states of a user. This avoids mechanized interaction, diversifies the functions of the interactive process and improves the user's intelligent experience. On the other hand, a medical detection function is integrated, the user's health condition is tracked, the user's health status is analyzed in combination with the user's emotional changes, and corresponding medical advice is fed back to the user through multimedia human-computer interaction according to the health status, thereby raising the intelligence level of medical detection and analysis. Meanwhile, the user's authority is determined by combining fingerprint recognition with face recognition, improving the security of user data.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another.
It is worth noting that, in the above device embodiments, the included modules are divided merely according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional modules are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present invention.
Those skilled in the art will appreciate that all or part of the steps of the methods in the above embodiments can be completed by instructing the relevant hardware through a program; the corresponding program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The foregoing merely describes preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement and improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. An intelligent interactive system, characterized in that the intelligent interactive system includes a client and a background server, the client being connected to the background server by wireless communication, wherein:
the client is configured to receive user input data, analyze the user's emotional state and health status from the user input data, conduct human-computer interaction with the user through a multimedia combination corresponding to the emotional state, and feed back corresponding medical advice to the user through the human-computer interaction according to the health status;
the background server is configured to analyze and process the user demands that the client obtains through human-computer interaction, and to feed the processing results back to the user through the client.
2. The intelligent interactive system according to claim 1, characterized in that the client includes a control module, a multimedia interactive module, a fingerprint module, a medical detection module and a wireless module, wherein:
the multimedia interactive module is configured to complete multimedia information interaction with the user;
the fingerprint module is configured to obtain the user's fingerprint data and send the fingerprint data to the control module for processing;
the medical detection module is configured to complete medical detection projects and send the detection results to the control module for processing;
the control module is configured to control the multimedia interactive module to obtain the user's multimedia information, analyze and process the multimedia information sent by the multimedia interactive module, feed the processing results back to the multimedia interactive module, determine the user's authority according to the multimedia information and the fingerprint data sent by the fingerprint module, and analyze the user's health condition and provide corresponding medical advice according to the multimedia information and the detection results sent by the medical detection module;
the wireless module is configured to provide a wireless network connection interface.
3. The intelligent interactive system according to claim 2, characterized in that the multimedia interactive module includes a vision submodule, a voice submodule and a touch display submodule, wherein:
the vision submodule is configured to obtain the user's video data and send the video data to the control module for processing;
the voice submodule is configured to obtain the user's voice data, send the voice data to the control module for processing, receive the response data sent by the control module, and complete voice interaction with the user;
the touch display submodule is configured to obtain the information the user inputs through the touch screen, send the information to the control module for processing, and provide multimedia interactive information to the user according to the processing results sent by the control module.
4. The intelligent interactive system according to claim 3, characterized in that the control module includes a visual analysis submodule, a speech analysis submodule, a fingerprint recognition submodule and a control response submodule, wherein:
the visual analysis submodule is configured to analyze the user's emotional state from the video data obtained by the vision submodule and send the analysis results to the control response submodule;
the speech analysis submodule is configured to identify the user's demand information from the voice data obtained by the voice submodule and send the recognition results to the control response submodule;
the fingerprint recognition submodule is configured to recognize the fingerprint data obtained by the fingerprint module and send the recognition results to the control response submodule;
the control response submodule is configured to receive the data sent by the visual analysis submodule, the speech analysis submodule, the fingerprint recognition submodule, the touch display submodule and the medical detection module, send the demand information to the background server for processing, record, classify, identify and analyze the data, and control, according to the processing results, the touch display submodule, the vision submodule and the voice submodule to complete the multimedia information interaction with the user.
5. The intelligent interactive system according to claim 4, characterized in that the visual analysis submodule includes a face recognition unit, an expression recognition unit, an action recognition unit and a video control unit, wherein:
the face recognition unit is configured to control the vision submodule to perform face recognition on the user and send the recognized face data to the control response submodule for processing;
the expression recognition unit is configured to analyze the face data, obtain the user's expression information and send the expression information to the control response submodule for processing;
the action recognition unit is configured to analyze the user's actions from the video data obtained by the vision submodule and send the action information to the control response submodule for processing;
the video control unit is configured to control the vision submodule to adjust visual parameters and position information according to the processing results returned by the control response submodule.
6. The intelligent interactive system according to claim 4, characterized in that the speech analysis submodule includes a sound source localization unit and a voice recognition unit, wherein:
the sound source localization unit is configured to judge, from the voice data obtained by the voice submodule, the direction and position from which the voice data originates, and send the judgment results to the control response submodule for processing;
the voice recognition unit is configured to perform speech recognition on the voice data obtained by the voice submodule and send the recognition results to the control response submodule for processing.
7. The intelligent interactive system according to claim 4, characterized in that the control response submodule includes an interactive input analysis unit, a behavior feedback processing unit and an interactive response output unit, wherein:
the interactive input analysis unit is configured to analyze and process the data sent by the visual analysis submodule, the speech analysis submodule, the fingerprint recognition submodule, the touch display submodule and the medical detection module, and send the processing results to the behavior feedback processing unit;
the behavior feedback processing unit is configured to organize, according to the processing results sent by the interactive input analysis unit, the corresponding interactive information to be fed back to the user, the interactive information including feedback behavior, feedback content and feedback presentation, and to send the interactive information to the interactive response output unit;
the interactive response output unit is configured to feed back, according to the interactive information sent by the behavior feedback processing unit, the feedback behavior and the feedback content to the user through the touch display submodule and the voice submodule, following the feedback presentation.
8. The intelligent interactive system according to claim 3, characterized in that the vision submodule includes a holder unit and a video acquisition unit, wherein:
the holder unit is configured to adjust the shooting angle of the video according to instructions from the control module;
the video acquisition unit is configured to obtain the video data.
9. The intelligent interactive system according to claim 3, characterized in that the voice submodule includes a voice output unit and a voice acquisition unit, wherein:
the voice output unit is configured to output sound feedback information according to instructions from the control module;
the voice acquisition unit is configured to obtain the voice data.
10. The intelligent interactive system according to any one of claims 1 to 9, characterized in that the intelligent interactive system further includes a monitoring server and a backup server, the client being connected respectively to the monitoring server and the backup server by wireless communication, wherein:
the monitoring server is configured to monitor the operating status of the intelligent interactive system;
the backup server is configured to back up the interaction data generated by the human-computer interaction of the client.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610892062.7A CN107943272A (en) | 2016-10-12 | 2016-10-12 | A kind of intelligent interactive system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610892062.7A CN107943272A (en) | 2016-10-12 | 2016-10-12 | A kind of intelligent interactive system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107943272A (en) | 2018-04-20 |
Family
ID=61928874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610892062.7A Pending CN107943272A (en) | 2016-10-12 | 2016-10-12 | A kind of intelligent interactive system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107943272A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110347778A (en) * | 2019-05-28 | 2019-10-18 | 成都美美臣科技有限公司 | One e-commerce website intelligent robot customer service |
CN110471534A (en) * | 2019-08-23 | 2019-11-19 | 靖江市人民医院 | Information processing method and tele-medicine management system based on Emotion identification |
CN110853765A (en) * | 2019-11-05 | 2020-02-28 | 江苏安防科技有限公司 | Intelligent human-computer interaction system based on environment visibility |
CN111178922A (en) * | 2018-11-09 | 2020-05-19 | 阿里巴巴集团控股有限公司 | Service providing method, virtual customer service generating method, device and electronic equipment |
CN111475847A (en) * | 2020-04-30 | 2020-07-31 | 马少才 | Medical big data processing method |
CN111914288A (en) * | 2020-07-09 | 2020-11-10 | 上海红阵信息科技有限公司 | Multi-service analysis processing management system based on biological characteristics |
CN112037901A (en) * | 2020-06-14 | 2020-12-04 | 深圳市前海澳威智控科技有限责任公司 | Intelligent pain management system and management method |
CN113555012A (en) * | 2020-04-22 | 2021-10-26 | 深圳市前海高新国际医疗管理有限公司 | Artificial intelligent voice interaction recognition system and use method thereof |
CN113954086A (en) * | 2021-09-09 | 2022-01-21 | 南方医科大学南方医院 | Medical patrol robot and management system thereof |
2016-10-12: Application CN201610892062.7A filed in China; published as CN107943272A (en); status: Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102780651A (en) * | 2012-07-21 | 2012-11-14 | 上海量明科技发展有限公司 | Method for inserting emotion data in instant messaging messages, client and system |
CN104320710A (en) * | 2014-07-18 | 2015-01-28 | 冠捷显示科技(厦门)有限公司 | Exclusive interactive system customized aiming at different user groups and interactive method |
CN104182619A (en) * | 2014-08-05 | 2014-12-03 | 上海市精神卫生中心 | Intelligent terminal based system and method for realizing acquiring and processing of emotional characteristic parameters |
CN104545951A (en) * | 2015-01-09 | 2015-04-29 | 天津大学 | Body state monitoring platform based on functional near-infrared spectroscopy and motion detection |
WO2016141349A1 (en) * | 2015-03-04 | 2016-09-09 | PogoTec, Inc. | Wireless power base unit and a system and method for body-worn repeater charging of wearable electronic devices |
CN105082150A (en) * | 2015-08-25 | 2015-11-25 | 国家康复辅具研究中心 | Robot man-machine interaction method based on user mood and intension recognition |
CN105578241A (en) * | 2016-02-03 | 2016-05-11 | 深圳市彩易生活科技有限公司 | Information interaction method and system as well as associated apparatuses |
CN105824970A (en) * | 2016-04-12 | 2016-08-03 | 华南师范大学 | Robot interaction method and system based on big data knowledge base and user feedback |
Non-Patent Citations (2)
Title |
---|
James Martin (US): "2012来临我们如何自救" (literally, "2012 Is Coming: How We Save Ourselves"), 31 December 2010 *
Gu Xuejing, Shi Lin, Guo Yucheng: "交互设计中的人工情感" (literally, "Artificial Emotion in Interaction Design"), 31 December 2015 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178922A (en) * | 2018-11-09 | 2020-05-19 | 阿里巴巴集团控股有限公司 | Service providing method, virtual customer service generating method, device and electronic equipment |
CN110347778A (en) * | 2019-05-28 | 2019-10-18 | 成都美美臣科技有限公司 | One e-commerce website intelligent robot customer service |
CN110471534A (en) * | 2019-08-23 | 2019-11-19 | 靖江市人民医院 | Information processing method and tele-medicine management system based on Emotion identification |
CN110471534B (en) * | 2019-08-23 | 2022-11-04 | 靖江市人民医院 | Information processing method based on emotion recognition and remote medical management system |
CN110853765A (en) * | 2019-11-05 | 2020-02-28 | 江苏安防科技有限公司 | Intelligent human-computer interaction system based on environment visibility |
CN113555012A (en) * | 2020-04-22 | 2021-10-26 | 深圳市前海高新国际医疗管理有限公司 | Artificial intelligent voice interaction recognition system and use method thereof |
CN111475847A (en) * | 2020-04-30 | 2020-07-31 | 马少才 | Medical big data processing method |
CN112037901A (en) * | 2020-06-14 | 2020-12-04 | 深圳市前海澳威智控科技有限责任公司 | Intelligent pain management system and management method |
CN111914288A (en) * | 2020-07-09 | 2020-11-10 | 上海红阵信息科技有限公司 | Multi-service analysis processing management system based on biological characteristics |
CN113954086A (en) * | 2021-09-09 | 2022-01-21 | 南方医科大学南方医院 | Medical patrol robot and management system thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107943272A (en) | A kind of intelligent interactive system | |
US11561616B2 (en) | Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio | |
Jaimes et al. | Multimodal human–computer interaction: A survey | |
Pantic et al. | Toward an affect-sensitive multimodal human-computer interaction | |
JP6125670B2 (en) | Brain-computer interface (BCI) system based on temporal and spatial patterns of collected biophysical signals | |
JP4481663B2 (en) | Motion recognition device, motion recognition method, device control device, and computer program | |
CN108537702A (en) | Foreign language teaching evaluation information generation method and device | |
US20140234815A1 (en) | Apparatus and method for emotion interaction based on biological signals | |
KR102092931B1 (en) | Method for eye-tracking and user terminal for executing the same | |
US20070074114A1 (en) | Automated dialogue interface | |
CN109508687A (en) | Man-machine interaction control method, device, storage medium and smart machine | |
CN106528859A (en) | Data pushing system and method | |
WO2019051082A1 (en) | Systems, methods and devices for gesture recognition | |
CN105075278A (en) | Providing recommendations based upon environmental sensing | |
US11093044B2 (en) | Method for detecting input using audio signal, and electronic device therefor | |
Meudt et al. | Going further in affective computing: how emotion recognition can improve adaptive user interaction | |
CN110446996A (en) | A kind of control method, terminal and system | |
Medjden et al. | Adaptive user interface design and analysis using emotion recognition through facial expressions and body posture from an RGB-D sensor | |
CN105536264A (en) | User-interaction toy and interaction method of the toy | |
CN104615231A (en) | Determination method for input information, and equipment | |
CN117668763A (en) | Digital human all-in-one machine based on multiple modes and multiple mode perception and identification method thereof | |
Wang | Research on the Construction of Human‐Computer Interaction System Based on a Machine Learning Algorithm | |
KR20210063698A (en) | Electronic device and method for controlling the same, and storage medium | |
Henriques et al. | Emotionally-aware multimodal interfaces: Preliminary work on a generic affective modality | |
CN115309882A (en) | Interactive information generation method, system and storage medium based on multi-modal characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180420 |