CN211466402U - Service robot - Google Patents
- Publication number
- CN211466402U CN211466402U CN201920911739.6U CN201920911739U CN211466402U CN 211466402 U CN211466402 U CN 211466402U CN 201920911739 U CN201920911739 U CN 201920911739U CN 211466402 U CN211466402 U CN 211466402U
- Authority
- CN
- China
- Prior art keywords
- service
- service terminal
- user
- robot
- information processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a service robot comprising a first service terminal for voice interaction with a user, and a second service terminal for storing an application-scenario knowledge database and interacting with the user through an image-text interface. The first service terminal is connected to the second service terminal over a network and outputs voice upon receiving control instructions from the second service terminal. The first service terminal provides the robot's information-processing function and voice interaction with the user; the second service terminal provides the service-processing function of the application scenario. By separating the scenario-specific service-processing function from the robot's information-processing function, the service robot reduces the workload of research and development personnel and lowers development cost.
Description
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a service robot adaptable to different application scenarios and to a human-machine interaction method for the service robot.
Background
With the development of artificial intelligence, intelligent service robots are widely used in service venues such as shopping malls, hotels, tourist attractions, libraries, job fairs, and hospitals. Users obtain the services they need by interacting with the robot, which brings convenience and enjoyment to everyday life.
An existing service robot is usually built for a single application scenario: the service-processing function of that scenario and the information-processing function of the robot itself are integrated in the same module. Under this development model the robot cannot be applied to other scenarios, and research and development personnel must redo the program design and system architecture. This makes the development process complex, lengthens the development cycle, increases the enterprise's development cost, and weakens its market competitiveness.
In the research and development work of robot manufacturers, the system architecture of the service robot therefore needs to be improved so that the robot can be adapted to different application scenarios.
Disclosure of Invention
The invention aims to provide a service robot that is convenient to use and suitable for a variety of application scenarios.
To solve the above technical problem, the invention provides the following technical scheme. A service robot comprises: a first service terminal for voice interaction with a user; and a second service terminal for storing an application-scenario knowledge database and interacting with the user through an image-text interface. The first service terminal is connected to the second service terminal over a network and outputs voice upon receiving control instructions from the second service terminal.
Compared with the prior art, the invention has the following beneficial effects. The first service terminal provides the robot's information-processing function and voice interaction with the user, while the second service terminal provides the service-processing function of the application scenario; to use the robot in another scenario, only the application-scenario knowledge database on the second service terminal needs to be replaced. The second service terminal can also interact with the user in image-text form, improving the user experience. The two terminals are arranged independently and communicate over a network to exchange control instructions and information; the first service terminal is controlled by the second service terminal so that the robot's voice output meets the needs of different application scenarios.
Preferably, the first service terminal is provided with an infrared sensor for sensing the user, a microphone for collecting the user's voice, a speaker for playing voice, and a first information-processing module connected to the infrared sensor, the microphone, and the speaker; the second service terminal is provided with a touch display screen, a data storage unit for storing the knowledge database, and a second information-processing module connected to the touch display screen and the data storage unit; both information-processing modules carry communication modules and connect to the same wifi module.
Preferably, the first service terminal is located at an upper end of the service robot as a head of the service robot.
Preferably, the first service terminal is provided with a display for displaying facial expressions, and the display is in signal connection with the first information processing module.
Preferably, the second service terminal is a touch screen all-in-one machine.
Preferably, the knowledge database contains recruitment data, tourist-attraction data, medical diagnosis-guidance data, mall shopping-guide data, or hotel introduction data.
Drawings
FIG. 1 is a perspective view of a service robot according to the present invention;
FIG. 2 is a schematic structural diagram of a service robot according to the present invention;
FIG. 3 is a flowchart of a human-computer interaction method of the service robot according to the present invention.
Detailed Description
The following embodiments of the invention are described in detail with reference to the accompanying drawings; they are not intended to limit the scope of the invention. Terms such as "front", "back", "left", "right", "upper", "lower", "top", and "bottom" indicate orientations or positional relationships as shown in the drawings; they are used only for convenience and simplicity of description, do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and should not be construed as limiting the invention. The terms "first" and "second" are used only to distinguish similar objects and do not imply any sequential relationship.
Referring to fig. 1, this embodiment provides a service robot comprising a first service terminal 1 and a second service terminal 2. Through the first service terminal 1 the robot provides its own information-processing function for voice interaction with the user; through the second service terminal 2 it provides the service-processing function of the application scenario. The second service terminal 2 stores the knowledge database for the current application scenario, which the robot draws on when serving the user. To use the robot in another scenario, only this knowledge database needs to be replaced; research and development personnel do not have to redo the program design or system architecture. The two terminals are arranged independently and communicate over a network to exchange control instructions and information. The first service terminal 1 is controlled by the second service terminal 2: by receiving its control instructions, the robot's voice output can be adapted to different application scenarios. Separating the scenario-specific service-processing function from the robot's information-processing function reduces the workload of research and development personnel and lowers development cost.
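The division of labor between the two terminals can be illustrated with a minimal message-passing sketch. This is a hypothetical illustration only; the message names, fields, and knowledge entries below are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical messages carried over the terminals' shared wifi link.
@dataclass
class WakeEvent:         # terminal 1 -> terminal 2: a user was detected
    source: str          # e.g. "infrared", "touch", "camera"

@dataclass
class UserUtterance:     # terminal 1 -> terminal 2: recognized speech as text
    text: str

@dataclass
class SpeakCommand:      # terminal 2 -> terminal 1: text for speech synthesis
    text: str

class SecondServiceTerminal:
    """Scenario logic lives only here; swapping `knowledge` retargets the robot."""
    def __init__(self, knowledge: dict):
        self.knowledge = knowledge

    def handle(self, msg) -> SpeakCommand:
        if isinstance(msg, WakeEvent):
            return SpeakCommand("Welcome! How can I help you?")
        if isinstance(msg, UserUtterance):
            return SpeakCommand(self.knowledge.get(msg.text, "Sorry, I don't know."))
        raise ValueError("unknown message")

# The first service terminal only senses, recognizes, and speaks;
# it asks the second terminal what to say.
terminal2 = SecondServiceTerminal({"visiting hours": "The clinic is open 8am-6pm."})
reply = terminal2.handle(UserUtterance("visiting hours"))
```

Because the first terminal never touches the knowledge database, redeploying the robot in a new venue only requires constructing `SecondServiceTerminal` with different data.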
Referring to figs. 1 and 2, in this embodiment the first service terminal 1 is located at the upper end of the service robot and is provided with a display for showing facial expressions. Serving as the robot's head, it displays facial expressions on this display, which strengthens the interaction between user and robot and improves the user experience.
The first service terminal 1 is provided with an infrared sensor for sensing the user, a microphone for collecting the user's voice, a speaker for playing voice, a camera for capturing the user's facial features, and a first information-processing module connected to the infrared sensor, microphone, speaker, camera, and display. The first information-processing module carries a speech-recognition module, a speech-synthesis module, a face-recognition module, an expression-playback module, and a communication module.
While the service robot waits to be woken, a user approaching it (for example, to a relative distance of 0-60 cm) is sensed by the infrared sensor, which sends a wake-up instruction through the first service terminal 1 to the second service terminal 2, putting the robot into its working state. The microphone collects the user's speech, and the speech-recognition module converts it into text so that the first information-processing module obtains the user's consultation information and forwards it to the second service terminal 2. The speech-synthesis module converts communication instructions sent by the second service terminal 2 into speech played through the speaker; microphone and speaker together realize the human-machine voice interaction between user and robot. The microphones are arranged on the first service terminal 1 in an array to improve voice pickup. The face-recognition module processes the facial features captured by the camera and can perceive the user's facial emotion, enriching the communication between user and robot. The expression-playback module is in signal connection with the display and supplies expressions appropriate to the state of the conversation, improving the user experience.
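The infrared wake-up condition reduces to a simple range check. The threshold value is taken from the 0-60 cm example in the description; the function name is illustrative:

```python
WAKE_DISTANCE_CM = 60  # upper bound of the 0-60 cm range in the description

def should_wake(distance_cm: float) -> bool:
    """True when the infrared sensor reads a user inside wake-up range."""
    return 0 <= distance_cm <= WAKE_DISTANCE_CM
```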
In this embodiment, the second service terminal 2 is provided with a touch display screen, a data storage unit for storing the knowledge database, and a second information-processing module connected to both. The touch display screen is positioned at the trunk of the service robot to provide a comfortable working height for the user. The second service terminal 2 interacts with the user through an image-text interface on the touch display screen: the user can browse the displayed image-text information and make selections by tapping the screen, which further improves the user experience. The knowledge database contains recruitment data, tourist-attraction data, medical diagnosis-guidance data, mall shopping-guide data, or hotel introduction data; when the robot is deployed in a different scene, the data storage unit is loaded with the corresponding knowledge database. The first and second information-processing modules each carry a communication module, and both connect to the same wifi module to exchange information. When the knowledge database on the second service terminal 2 needs to be replaced, the robot connects to the background management platform through the wifi module so that staff can update the database in time. Of course, in other embodiments the first service terminal 1 may open a wifi hotspot to establish the network connection with the second service terminal 2.
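Adapting the robot to a new scene then amounts to pointing the data storage unit at a different knowledge database. A hedged sketch; the scene names and entries below are invented for illustration:

```python
# Hypothetical per-scene databases held by the data storage unit; only this
# data changes between deployments -- the robot program itself is untouched.
SCENE_DATABASES = {
    "hospital": {"registration": "The registration desk is on floor 1."},
    "hotel":    {"checkout": "Checkout time is 12:00 noon."},
    "mall":     {"parking": "Parking is in basement levels B1-B2."},
}

def load_knowledge(scene: str) -> dict:
    """Swap in the knowledge database for the target application scene."""
    return SCENE_DATABASES[scene]

hotel_kb = load_knowledge("hotel")
```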
In this embodiment, the second service terminal 2 is a touch-screen all-in-one machine; a tablet such as an iPad may also be used.
It should be noted that the second service terminal 2 may also be configured without a touch display screen; in that case it interacts with the user through the image-text interface on the display of the first service terminal 1.
Referring to fig. 3, the present invention further provides a human-computer interaction method applied to the service robot, including the following steps:
the user wakes up the service robot;
the second service terminal sends a communication instruction and informs the first service terminal to enter a user voice dictation state;
the first service terminal collects the speaking content of a user and transmits the speaking content to the second service terminal;
the second service terminal screens, from the knowledge database, text retrieval information matching the user's consultation information and feeds it back through voice and/or image-text, so that the user obtains the required result information;
the human-computer interaction session ends, and the service robot returns to the state of waiting for the user to wake it.
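The steps above can be sketched as a small state machine; the state and event names are illustrative assumptions, not terminology from the patent:

```python
from enum import Enum, auto

class State(Enum):
    WAITING = auto()     # waiting for the user to wake the robot
    DICTATION = auto()   # first terminal listening to the user's speech
    RESPONDING = auto()  # second terminal feeding back retrieved information

# (state, event) -> next state; events not listed leave the state unchanged.
TRANSITIONS = {
    (State.WAITING, "wake"): State.DICTATION,
    (State.DICTATION, "utterance"): State.RESPONDING,
    (State.RESPONDING, "follow_up"): State.DICTATION,   # repeated Q&A rounds
    (State.RESPONDING, "session_end"): State.WAITING,   # done, or page timeout
}

def step(state: State, event: str) -> State:
    """Advance the interaction session by one event."""
    return TRANSITIONS.get((state, event), state)
```

The `follow_up` loop models the repeated human-computer question-and-answer rounds described later in the embodiment.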
In this embodiment, while the service robot waits to be woken and a user approaches within 0-60 cm, the first service terminal senses the user through the infrared sensor and sends a wake-up instruction to the second service terminal. Alternatively, the user can send the wake-up instruction by tapping the display on the first service terminal or the touch display screen on the second service terminal.
Of course, in other embodiments the service robot may also be woken through facial features captured by the camera: while the robot waits to be woken and a user approaches, the first service terminal captures the user's facial features through the camera and sends a wake-up instruction to the second service terminal. The user can also wake the robot by speaking its preset wake-up word.
In this embodiment, after the wake-up step the second service terminal sends a communication instruction informing the first service terminal to enter the user voice-dictation state. The communication instruction includes a welcome phrase and/or question-and-answer guidance phrases to guide the interaction between user and robot. The first service terminal synthesizes these phrases through the speech-synthesis module and plays them through the speaker; once the guidance has been played, it enters the voice-dictation state via the microphone.
In this embodiment, when the first service terminal is in the voice-dictation state, the user speaks to state the consultation information; the first service terminal recognizes the voice collected by the microphone through the speech-recognition module and transmits the text to the second service terminal. The second service terminal screens, from the knowledge database, text retrieval information matching the user's consultation information, presents it on the touch display screen in image-text form, and at the same time sends a retrieval-guidance instruction to the first service terminal to prompt the user by voice to browse and/or operate the touch display screen. During the interactive session, the user can obtain further result information by repeating the question-and-answer rounds.
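The screening of text retrieval information can be approximated with simple keyword matching. This is a deliberately naive sketch; the patent does not specify the matching algorithm, and the knowledge entries are invented:

```python
def retrieve(query: str, knowledge: dict) -> list:
    """Return knowledge entries whose keyword appears in the user's query."""
    q = query.lower()
    return [text for keyword, text in knowledge.items() if keyword.lower() in q]

kb = {
    "registration": "The registration desk is on floor 1.",
    "pharmacy": "The pharmacy is next to the main lobby.",
}
hits = retrieve("Where is the registration desk?", kb)  # matches one entry
```

A production system would presumably use full-text search or intent classification instead, but the contract is the same: consultation text in, matching image-text entries out.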
In this embodiment, when the user has obtained the required result information and the session ends, or the touch display screen exceeds its page waiting time, the second service terminal sends a session-end instruction to the first service terminal, and the robot returns to waiting to be woken, ready to serve the next user.
The above embodiments merely illustrate the invention and do not limit the technical solutions described herein. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that various changes and equivalent substitutions may still be made; all such modifications fall within the scope of the present invention and are protected by the following claims.
Claims (4)
1. A service robot, comprising:
the first service terminal is used for voice interaction with a user;
the second service terminal is used for storing the application scene knowledge database and interacting with the user image-text interface;
the first service terminal is in network communication connection with the second service terminal, and receives a control instruction of the second service terminal to output voice;
the second service terminal is a touch screen all-in-one machine.
2. The service robot of claim 1, wherein: the first service terminal is provided with an infrared sensor for sensing a user, a microphone for acquiring voice information of the user, a loudspeaker for playing voice, and a first information processing module respectively connected with the infrared sensor, the microphone and the loudspeaker;
the second service terminal is provided with a touch display screen, a data storage unit for storing a knowledge database, and a second information processing module respectively connected with the touch display screen and the data storage unit;
the first information processing module and the second information processing module are both provided with communication modules and connected to the same wifi module.
3. The service robot of claim 2, wherein: the first service terminal is located at the upper end of the service robot and serves as the head of the service robot.
4. The service robot of claim 3, wherein: the first service terminal is provided with a display used for showing the facial expression, and the display is in signal connection with the first information processing module.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201920911739.6U | 2019-06-17 | 2019-06-17 | Service robot |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN211466402U | 2020-09-11 |
Family ID: 72376487
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201920911739.6U (active, granted as CN211466402U) | Service robot | 2019-06-17 | 2019-06-17 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN211466402U (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110154056A | 2019-06-17 | 2019-08-23 | 常州摩本智能科技有限公司 | Service robot and its man-machine interaction method |
Legal Events
| Date | Code | Title |
|---|---|---|
| | GR01 | Patent grant |