CN211164044U - Robot expression display system based on voice interaction - Google Patents

Robot expression display system based on voice interaction Download PDF

Info

Publication number
CN211164044U
CN211164044U CN201922215766.2U CN201922215766U CN 211164044 U
Authority
CN
China
Prior art keywords
robot
dot matrix
module
expression
array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201922215766.2U
Other languages
Chinese (zh)
Inventor
Ji Gang (纪刚)
Wu Tingyong (吴庭永)
Li Yan (李彦)
Zang Qiang (臧强)
An Shuai (安帅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Lianhe Chuangzhi Technology Co ltd
Original Assignee
Qingdao Lianhe Chuangzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Lianhe Chuangzhi Technology Co ltd filed Critical Qingdao Lianhe Chuangzhi Technology Co ltd
Priority to CN201922215766.2U priority Critical patent/CN211164044U/en
Application granted granted Critical
Publication of CN211164044U publication Critical patent/CN211164044U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Manipulator (AREA)
  • Toys (AREA)

Abstract

The utility model belongs to the technical field of robot equipment and relates to a robot expression display system based on voice interaction, comprising a MIC array, a voice processing module, an upper computer, an MCU control board, dot matrix driving chips and an LED lamp array. The MIC array is mounted on the neck of the robot and connected with the voice processing module; the voice processing module is installed in the chest housing of the robot and connected with the upper computer; the upper computer is installed in the leg of the robot and connected with the MCU control board; the MCU control board is installed inside the upper housing of the robot and connected with the dot matrix driving chips; the driving chips are installed in the head of the robot and electrically connected with the LED lamp array, which is mounted on the front end face of the robot's head. The system architecture is reasonable, the robot's expressions are rich, and its language recognition ability is strong; it can display expressions quickly and accurately according to the recognized speech, which helps communication and friendly interaction with the robot.

Description

Robot expression display system based on voice interaction
Technical field:
The utility model belongs to the technical field of robot equipment and relates to a system for controlling robot expressions by recognizing speech, specifically a robot expression display system based on voice interaction.
Background art:
With the continuous development of the intelligent robot field, robot technology is rapidly expanding from industrial manufacturing into medical services, entertainment and education, home-based elderly care and other fields. In the process of accompanying human beings, vivid expression and transmission of emotion have become more and more important: robot facial expressions make the coexistence of intelligent robots and humans novel and interesting, and deepen the communication and interaction between them. Robot expressions have been widely used on service robots and have become an indispensable part of communication with humans.
In the prior art, the Chinese patent with publication number CN205721625U discloses an expression robot interaction system comprising a data acquisition module, an upper computer, a data processing module, a control module, an execution module, a motion state display module and an expression robot body. The data acquisition module is connected with the upper computer, and the upper computer is connected with the control module through the data processing module; the control module is connected with the execution module and the motion state display module. The data acquisition module acquires external control information and sends it to the upper computer, which forwards the external control signal to the data processing module; the data processing module identifies and processes the signal and sends the recognition result to the control module, which generates a corresponding control command and sends it to the execution module. According to the control command, the execution module drives a voice output device to produce the corresponding voice output, or drives a motor to make the expression robot body perform the corresponding facial organ motion, so that the robot body can quickly and accurately reflect the user's facial expression and make the human-computer interaction process more interesting. The Chinese patent with publication number CN107632562A discloses a robot that realizes expressions through dot matrix lamps, comprising a touch image acquisition module, an infrared signal acquisition module and associated LED dot matrix display modules.
In summary, current robots have single, insufficiently vivid expressions and inaccurate language recognition, and cannot accurately display vivid expressions according to speech. Yet in the process of accompanying human beings, vivid expression and transmission of emotion are more and more important, so a flexible and vivid robot expression display system based on voice interaction is designed.
Contents of the utility model:
The purpose of the utility model is to overcome the shortcomings of existing equipment: current robot expressions are single and not vivid or flexible enough, language recognition is not accurate enough, and vivid expressions cannot be displayed accurately according to speech. To remedy these deficiencies, a robot expression display system based on voice interaction is designed.
In order to realize the above purpose, the utility model relates to a robot expression display system based on voice interaction whose main structure comprises a MIC array, a voice processing module, an upper computer, an MCU control board, dot matrix driving chips and an LED lamp array. The MIC array adopts an array structure composed of more than four microphones, mounted at equal intervals on the neck of the robot, and is connected with the voice processing module through audio lines. The voice processing module is installed in the chest housing below the robot's neck and is connected with the upper computer through a WiFi module. The upper computer is installed in the leg housing of the robot and is connected with the MCU control board through a USB interface. The MCU control board is installed inside the upper housing of the robot and is connected with the dot matrix driving chips through an IIC bus. The driving chips are installed in the robot's head housing and are electrically connected with the LED lamp array, which is mounted at the front end of the robot's head.
The LED lamp array consists of 16 LED dot matrix lamp panels of 8 rows × 8 columns. The left eye and the right eye of the robot head are each composed of 4 such panels, and the mouth of the robot head is composed of 8 such panels. The LED dot matrix lamp panels of the left eye and the right eye are each controlled by one dot matrix driving chip, the panels of the mouth are controlled by two dot matrix driving chips, and the four dot matrix driving chips are cascaded.
The upper computer of the utility model is provided with a data transmission module and a voice analysis module, and the MCU control board is provided with a data analysis module, a flash storage module and a calling module; the data transmission module is connected and communicates with the data analysis module, and the data analysis module is connected and communicates with the flash storage module; the voice analysis module is connected and communicates with the voice processing module and the calling module respectively, and the calling module is also connected and communicates with the dot matrix driving chip and the flash storage module respectively.
Compared with the prior art, the robot expression display system based on voice interaction designed by the utility model is reasonable and scientific, with a complete structure and functions; the robot's expressions are rich and its language recognition ability is strong, displaying expressions quickly and accurately according to the recognized speech. It can help communication and interaction between humans and robots, and is friendly to its environment of use.
Description of the drawings:
fig. 1 is a schematic block diagram of a structural principle of a robot expression display system based on voice interaction according to the present invention.
Fig. 2 is a schematic view of the structural principle of the robot according to the present invention.
Fig. 3 is a schematic block diagram of the structure principle of the functional module of the upper computer and the MCU control panel.
Fig. 4 is a block diagram of a process flow of expression analysis storage according to the present invention.
Fig. 5 is a block diagram of a process flow of expression interaction according to the present invention.
Fig. 6 is an initial interface schematic diagram of the robot expression aided design tool according to the present invention.
Fig. 7 is a schematic diagram of the expression lamp array lit by marking.
Fig. 8 is a schematic diagram of the robot expression aided design tool exporting the lamp array model.
Fig. 9 is a schematic diagram of the LED lamp array according to the present invention.
The specific embodiments are as follows:
the present invention will be further described with reference to the following examples and accompanying drawings.
Example 1:
the robot expression display system based on voice interaction comprises a main structure as shown in fig. 1-2, an MIC array 1, a voice processing module 2, an upper computer 3, an MCU control board 4, a dot matrix driving chip 5 and a L ED lamp array 6, wherein the MIC array 1 adopts an array structure formed by six microphones, the microphones of the MIC array 1 are arranged on the neck of a robot at equal intervals, the MIC array 1 is used for collecting voice information data of a user in an interaction process, the MIC array 1 is connected with the voice processing module 2 through an audio line, the voice processing module 2 is arranged in a chest shell below the neck of the robot, the voice processing module 2 needs to be arranged close to the MIC array 1 as far as possible, because a signal collected by the MIC array 1 is a millivolt-level sinusoidal analog voice signal, so that collected voice information is not interfered too much in a transmission process, the voice processing module 2 is used for processing voice data obtained from the MIC array 1, converting the voice data into digital information which can be recognized by the upper computer 3, the MCU control board 2 is connected with the upper computer 3 through a WiFi module, the USB chip 3, the LED chip 3 is used for controlling the operation of the MCU array driving chip 3, the MCU chip 3, the LED chip is used for controlling the expression data, the expression data and transmitting the whole expression data, the expression data is used for controlling the expression data of the expression data, the expression data is used for controlling the expression data, the expression data is used for controlling the expression data, the expression data is used for controlling and displaying of the expression data of.
As shown in fig. 9, the LED lamp array 6 of this embodiment is composed of 16 LED dot matrix lamp panels of 8 rows × 8 columns. The left eye and the right eye of the robot head are each composed of 4 such panels, so the two eyes are equivalent to two 16-row × 16-column LED dot matrices (No. 1 dot matrix group and No. 2 dot matrix group). The mouth of the robot head is composed of 8 such panels, equivalent to two 16-row × 16-column LED dot matrices (No. 3 dot matrix group and No. 4 dot matrix group). The LED dot matrices of the left eye and the right eye are each controlled by one dot matrix driving chip 5, the LED dot matrix panels of the mouth are controlled by two driving chips 5, and the four driving chips 5 are cascaded so that the oscillation signals of all driving chips 5 stay highly uniform, avoiding erroneous or delayed dot matrix display and realizing efficient control of the whole LED lamp array 6 and of the robot's expressions.
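As a minimal illustration of how the four cascaded driving chips could be refreshed over the IIC bus, the sketch below pushes one 16 × 16 dot matrix group to its chip. The chip addresses, register map and i2c_write() helper are assumptions for illustration only, since the patent does not name a specific driving chip model.

```c
#include <stdint.h>

/* Hypothetical I2C register-write helper; the bus interface and the
 * addresses below are assumptions, not taken from the patent. */
extern void i2c_write(uint8_t chip_addr, uint8_t reg, const uint8_t *buf, uint8_t len);

#define CHIP_LEFT_EYE  0x70  /* No. 1 dot matrix group (assumed address) */
#define CHIP_RIGHT_EYE 0x71  /* No. 2 dot matrix group */
#define CHIP_MOUTH_A   0x72  /* No. 3 dot matrix group */
#define CHIP_MOUTH_B   0x73  /* No. 4 dot matrix group */

/* One 16 x 16 dot matrix group stored as 16 rows of 16 bits. */
typedef struct {
    uint16_t rows[16];
} matrix_group_t;

/* Push one group's frame buffer to its driving chip over the IIC bus. */
static void group_refresh(uint8_t chip_addr, const matrix_group_t *g)
{
    uint8_t buf[32];
    for (int r = 0; r < 16; r++) {
        buf[2 * r]     = (uint8_t)(g->rows[r] & 0xFFu);  /* low byte  */
        buf[2 * r + 1] = (uint8_t)(g->rows[r] >> 8);     /* high byte */
    }
    i2c_write(chip_addr, 0x00, buf, (uint8_t)sizeof buf);
}
```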
As shown in fig. 3, the upper computer 3 of this embodiment is loaded with and runs a data transmission module 7 and a voice analysis module 8, and the MCU control board 4 is loaded with and runs a data analysis module 9, a flash storage module 10 and a calling module 11. The data transmission module 7 is connected and communicates with the data analysis module 9, and the data analysis module 9 is connected and communicates with the flash storage module 10; the voice analysis module 8 is connected and communicates with the voice processing module 2 and the calling module 11 respectively, and the calling module 11 is also connected and communicates with the dot matrix driving chip 5 and the flash storage module 10 respectively.
The data transmission module 7 is configured to: receive a data file containing expression position data, exported by the user through the robot expression aided design tool or Visio drawing software; open the data file to number the expressions and establish the robot expression database; and transmit an expression storage instruction and the robot expression database to the MCU control board 4.
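A plausible in-memory layout for one numbered database entry is sketched below; the patent specifies only that each expression has a number and position data, so the field names and sizes are assumptions.

```c
#include <stdint.h>

#define NUM_GROUPS 4  /* eye groups 1-2 and mouth groups 3-4 */

/* One numbered entry of the robot expression database: the expression
 * number plus an on/off bitmap for each 16 x 16 dot matrix group.
 * Field names and sizes are illustrative, not taken from the patent. */
typedef struct {
    uint8_t  id;                      /* expression number, e.g. 1..26 */
    uint16_t groups[NUM_GROUPS][16];  /* 16 rows of 16 bits per group  */
} expression_entry_t;
```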
The voice analysis module 8 is configured to: receive the voice data transmitted by the voice processing module 2; extract expression keywords from the voice data using an existing, specially designed voice processing algorithm, comprehensively comparing and analyzing the environmental semantics of the interaction process; compare the expression keywords with the robot expression database to determine the expression the robot should display and its number; and transmit the expression call instruction and the ID of the expression to be displayed to the MCU control board 4.
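The keyword-to-expression comparison could look like the following sketch; the keyword strings and ID values are hypothetical, as the patent does not enumerate the mapping.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical keyword table; the patent only states that keywords are
 * compared against the expression database, so these entries are
 * illustrative. */
typedef struct {
    const char *keyword;
    uint8_t     expression_id;  /* number assigned in the database */
} keyword_map_t;

static const keyword_map_t k_map[] = {
    { "happy",     1 },
    { "sad",       3 },
    { "surprised", 5 },
};

/* Return the expression ID for a recognized keyword, or 0 if none. */
static uint8_t lookup_expression(const char *keyword)
{
    for (size_t i = 0; i < sizeof k_map / sizeof k_map[0]; i++)
        if (strcmp(k_map[i].keyword, keyword) == 0)
            return k_map[i].expression_id;
    return 0;
}
```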
The data analysis module 9 is configured to: receive the expression storage instruction and the robot expression database sent by the data transmission module 7; analyze the robot expression database and check whether the expression position data is abnormal; feed an abnormality signal back to the upper computer 3 if so; and store normal expression position data and its number into the flash storage module 10.
The flash storage module 10 is configured to: store the expression position data and its number, receive a call signal, and transmit the stored expression position data to the dot matrix driving chip 5.
The calling module 11 is configured to: after receiving the expression call instruction and the corresponding expression ID number transmitted by the voice analysis module 8, search whether the specified expression is recorded in the flash storage module 10; if it is, call out its expression position data and send it to the dot matrix driving chip 5; if it is not, feed back an abnormality signal to prompt the upper computer 3 to confirm the new expression ID.
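The calling module's lookup-and-dispatch behaviour can be summarized in a short sketch; flash_find(), led_display() and report_unknown_id() are assumed helper names standing in for APIs the patent does not specify.

```c
#include <stdint.h>

/* Opaque handle to a stored expression record (see the entry layout
 * sketched earlier); the helpers below are assumptions, since the
 * patent describes the behaviour but not the API. */
typedef struct expression_entry expression_entry_t;

extern const expression_entry_t *flash_find(uint8_t id);
extern void led_display(const expression_entry_t *e);
extern void report_unknown_id(uint8_t id);

/* Calling module: look the expression up in flash and dispatch it. */
void call_expression(uint8_t id)
{
    const expression_entry_t *e = flash_find(id);
    if (e != NULL)
        led_display(e);        /* forward position data to the driving chips */
    else
        report_unknown_id(id); /* prompt the host to confirm the expression ID */
}
```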
The specific process flow of the robot expression display system based on voice interaction according to this embodiment is as follows:
the method comprises the following steps of (A) simulating and analyzing the real life of people, and correspondingly investigating the expression commonly used by the people to summarize 26 commonly used expressions (the 26 commonly used expressions are smiling, crying, sadness, conscientiousness, surprise, shyness, flower, wealth, anger, skin regulation, silence, eyesight giving, keeping, dismissing, faint, doubts, jiong, trustiness, joy, fear, anger, kiss, sweat, serious, acquaintance, crying and praise), drawing the 26 expressions by using the existing robot expression design method and using Visio software or a robot expression auxiliary design tool, converting the expressions into a data file with a specified format which can be called by an MCU control board 4, modeling the data file by using L lattice ED, establishing an expression database of a robot, and numbering the expression of each robot by setting up a lattice ED, wherein the specific modeling process comprises the following steps:
Firstly, as shown in fig. 6, a peripheral computer is used to open the robot expression aided design tool or Visio drawing software, which displays the basic layout model of the LED dot matrix;
Then, as shown in fig. 7, the LED lights to be lit are marked in the design tool according to the animated expression to be achieved;
Finally, as shown in fig. 8, the peripheral computer is connected to the upper computer 3, the designed dot matrix model is exported as a data file in the specified format using the robot expression aided design tool, and the file is sent to the upper computer 3. After receiving it, the data transmission module 7 of the upper computer 3 opens the data file, numbers the expressions and establishes the robot expression database, then sends the database and an expression storage instruction to the MCU control board 4 for storage;
(II) After receiving the expression storage instruction and the robot expression database, the MCU control board 4 analyzes the database and checks whether the expression position data is abnormal or exceeds the display range of the LED lamp array 6. If the data is normal, the expression position data and its number are stored into the flash storage module 10 of the MCU control board 4 for management; if abnormal, a data abnormality signal is fed back to the upper computer 3 to prompt it to modify the data format.
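The range check performed by the MCU control board 4 might look like the sketch below, assuming the position data is a list of lit-LED coordinates over the four 16 × 16 dot matrix groups; the concrete data format is not given in the patent.

```c
#include <stdint.h>

/* One lit-LED coordinate of "expression position data"; this layout is
 * an assumption based on the four 16 x 16 dot matrix groups of the
 * embodiment, not a format defined by the patent. */
typedef struct {
    uint8_t group;  /* dot matrix group number, 1..4 */
    uint8_t row;    /* 0..15 */
    uint8_t col;    /* 0..15 */
} led_pos_t;

/* Check that a coordinate falls inside the LED lamp array's display range. */
static int pos_is_valid(const led_pos_t *p)
{
    return p->group >= 1 && p->group <= 4 && p->row < 16 && p->col < 16;
}
```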
(III) through voice interaction, the robot displays the expressions:
(1) the MIC array 1 collects voice information of a user in an interaction process and transmits the voice information to the voice processing module 2 of the robot through an audio transmission line;
(2) the voice processing module 2 converts the voice data (analog signal) into data (digital signal) that the upper computer 3 can identify, and the upper computer 3 receives and extracts this data in the specified protocol format;
(3) the upper computer 3 extracts expression keywords from the protocol data of the voice processing module 2 and comprehensively compares and analyzes the environmental semantics of the interaction process, completing this processing with the existing, specially designed voice processing algorithm; the algorithm avoids the incoherence between expression and semantics that can arise from keyword-only analysis;
(4) the upper computer 3 processes and analyzes the voice information of the user, compares the expression keywords with the robot expression database, and determines the expression and the number thereof which should be displayed by the robot (the expression needs to be in the robot expression database);
(5) the upper computer 3 transmits the expression call instruction and the ID of the expression to be displayed to the MCU control board 4 through a USB data line according to the specified protocol. After the MCU control board 4 receives the call instruction and the corresponding expression ID number, it searches whether the specified expression is recorded in the flash storage module 10; if it exists, the expression position data is called out and sent to the dot matrix driving chip 5, which drives the LED lamp array 6 to display the expression; if it does not exist, an abnormality signal is fed back to prompt the upper computer 3 to confirm the new expression ID.
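A minimal sketch of the host-side call instruction over the USB link is shown below; the opcode, frame layout and usb_serial_write() function are assumptions, since the patent leaves the protocol unspecified.

```c
#include <stdint.h>

#define CMD_CALL_EXPRESSION 0x02  /* assumed opcode for the call instruction */

/* usb_serial_write() stands in for whatever USB/serial API the upper
 * computer actually uses; it is an assumption for illustration. */
extern int usb_serial_write(const uint8_t *buf, int len);

/* Pack the expression-call instruction and its expression ID into a tiny
 * frame with a one-byte additive checksum, then send it to the MCU. */
int send_call_instruction(uint8_t expression_id)
{
    uint8_t pkt[3];
    pkt[0] = CMD_CALL_EXPRESSION;
    pkt[1] = expression_id;
    pkt[2] = (uint8_t)(pkt[0] + pkt[1]);  /* checksum */
    return usb_serial_write(pkt, (int)sizeof pkt);
}
```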

Claims (4)

1. A robot expression display system based on voice interaction, characterized in that its main structure comprises a MIC array, a voice processing module, an upper computer, an MCU control board, a dot matrix driving chip and an LED lamp array; the MIC array is of an array structure composed of more than four microphones, and the microphones of the MIC array are mounted at equal intervals on the neck of a robot; the MIC array is connected with the voice processing module through audio lines; the voice processing module is installed in a chest housing below the neck of the robot and is connected with the upper computer through a WiFi module; the upper computer is installed in a leg housing of the robot and is connected with the MCU control board through a USB interface; the MCU control board is installed inside an upper housing of the robot and is connected with the dot matrix driving chip through an IIC bus; the dot matrix driving chip is installed in a head housing of the robot and is electrically connected with the LED lamp array; and the LED lamp array is mounted on the front end face of the robot head.
2. The speech interaction-based robotic expression display system of claim 1, wherein the LED lamp array is composed of 16 LED dot matrix lamp panels of 8 rows × 8 columns; the left eye and the right eye of the robot head are each composed of 4 LED dot matrix lamp panels of 8 rows × 8 columns, and the mouth of the robot head is composed of 8 LED dot matrix lamp panels of 8 rows × 8 columns; the LED dot matrix lamp panels of the left eye and the right eye are each controlled by one dot matrix driving chip, the LED dot matrix lamp panels of the mouth are controlled by two dot matrix driving chips, and the four dot matrix driving chips are cascaded.
3. The speech interaction-based robotic expression display system of claim 2, wherein: the upper computer is provided with a data transmission module and a voice analysis module, and the MCU control board is provided with a data analysis module, a flash storage module and a calling module; the data transmission module is connected and communicates with the data analysis module, and the data analysis module is connected and communicates with the flash storage module; the voice analysis module is connected and communicates with the voice processing module and the calling module respectively, and the calling module is also connected and communicates with the dot matrix driving chip and the flash storage module respectively.
4. The speech interaction-based robotic expression display system of claim 1, wherein: the MIC array adopts an array structure consisting of six microphones.
CN201922215766.2U 2019-12-12 2019-12-12 Robot expression display system based on voice interaction Active CN211164044U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201922215766.2U CN211164044U (en) 2019-12-12 2019-12-12 Robot expression display system based on voice interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201922215766.2U CN211164044U (en) 2019-12-12 2019-12-12 Robot expression display system based on voice interaction

Publications (1)

Publication Number Publication Date
CN211164044U true CN211164044U (en) 2020-08-04

Family

ID=71817928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201922215766.2U Active CN211164044U (en) 2019-12-12 2019-12-12 Robot expression display system based on voice interaction

Country Status (1)

Country Link
CN (1) CN211164044U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112687268A (en) * 2020-12-03 2021-04-20 佛山科学技术学院 Voice emotion interaction control method and system


Similar Documents

Publication Publication Date Title
TWI430189B (en) System, apparatus and method for message simulation
US9431027B2 (en) Synchronized gesture and speech production for humanoid robots using random numbers
CN106985137A (en) Multi-modal exchange method and system for intelligent robot
KR101344727B1 (en) Apparatus and method for controlling intelligent robot
Perzanowski et al. Integrating natural language and gesture in a robotics domain
KR20030007713A (en) Action teaching apparatus and action teaching method for robot system, and storage medium
CN108009490A (en) A kind of determination methods of chat robots system based on identification mood and the system
CN105807925A (en) Flexible electronic skin based lip language identification system and method
JP6319772B2 (en) Method and system for generating contextual behavior of a mobile robot performed in real time
CN211164044U (en) Robot expression display system based on voice interaction
CN107003823A (en) Wear-type display system and head-mounted display apparatus
CN112232127A (en) Intelligent speech training system and method
Chang et al. A kinect-based gesture command control method for human action imitations of humanoid robots
CN112329593A (en) Gesture generation method and gesture generation system based on stylization
CN108922274A (en) A kind of English assistant learning system of multifunctional application
CN106096716A (en) A kind of facial expression robot multi-channel information emotional expression mapping method
Aiswarya et al. Hidden Markov model-based Sign Language to speech conversion system in TAMIL
CN109272983A (en) Bilingual switching device for child-parent education
CN111984161A (en) Control method and device of intelligent robot
CN205376116U (en) Automatic dolly remote control unit that guides of wireless directional speech control
CN115167674A (en) Intelligent interaction method based on digital human multi-modal interaction information standard
CN109003481A (en) Split type intelligent language learner
Dusan et al. Adaptive dialog based upon multimodal language acquisition
CN201025562Y (en) A portable intelligent language learning machine
CN211466402U (en) Service robot

Legal Events

Date Code Title Description
GR01 Patent grant