CN112536808A - Emotion communication device of pet robot
- Publication number: CN112536808A
- Application number: CN202011466094.3A
- Authority: CN (China)
- Prior art keywords: pet robot, module, output module, image, emotion
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- B25J11/001—Manipulators having means for high-level communication with users, with emotions simulating means
- B25J11/008—Manipulators for service tasks
Abstract
The invention discloses an emotion communication device for a pet robot, comprising a processor, a data acquisition module, a motion module, an image output module and a voice output module. The data acquisition module collects data and sends it to the processor. On receiving the distance, image, sound and touch information from the data acquisition module, the processor controls the limbs and head of the pet robot through the motion module, displays eye pictures through the image output module to simulate the emotional changes of a pet's eyes, and emits sound through the voice output module. By moving the head and limbs in different ways, and by placing displays at the eye positions to simulate eye movements and show different icons for different emotions, the device goes beyond simple spoken conversation and achieves a mode of communication between the pet robot and a person with richer emotional expression.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to an emotion communication device of a pet robot.
Background
A pet robot is a machine that simulates pets kept by humans, such as cats and dogs. It serves as a pet substitute for elderly people, children, residents of nursing facilities, family members under care, and others, using robotics to provide the soothing, comforting effect of contact with a pet.
Fig. 1 is a schematic diagram of information communication between a human and a robot in the prior art. Information communication differs from emotion communication: for the purpose of sending and receiving information, the robot is fitted with a display screen like that of a personal computer, and the user communicates with the robot by operating the screen or by voice.
Communication between existing pet robots and people is generally limited to simple conversation; such robots cannot express varied emotions through physical actions. For example, when people live with pets, an owner may stroke the pet to express affection and the pet responds in kind, but a pet robot can hardly imitate the actions of animals such as dogs and cats, and the owner can hardly tell the robot's current emotion.
Disclosure of Invention
The technical problem the invention aims to solve is to provide an emotion communication device for a pet robot that expresses different emotions through distinct head and limb movements and through displays mounted at the eye positions, which simulate eye movements and show different icons. Rather than merely holding spoken conversations, the device realizes a mode of communication between the pet robot and a person with richer emotional expression.
The technical scheme adopted by the invention for solving the technical problems is as follows:
the emotion communication device of the pet robot comprises a power supply module arranged in the pet robot, and a processor, a data acquisition module, a motion module, an image output module and a voice output module that are each connected with the power supply module; the data acquisition module, the motion module, the image output module and the voice output module are each electrically connected with the processor;
the data acquisition module can acquire the distance between the pet robot and surrounding objects, image information in front of the pet robot, human voice and environmental sound near the pet robot, and touch signals on the surface of the pet robot, and sends the acquired distance, image, sound and touch information to the processor;
the pet robot comprises a body, and four limbs and a head movably mounted on the body; the processor receives the distance, image, sound and touch information sent by the data acquisition module, then controls the movement of the four limbs and head of the pet robot through the motion module, displays an eye picture through the image output module to simulate the emotional changes of a pet's eyes, and simultaneously emits sound through the voice output module;
when the motion of the motion module of the pet robot is restricted, the processor can output a corresponding help signal to the image output module and/or the voice output module, displaying an eye picture through the image output module to simulate the help-seeking state of a pet's eyes, and/or emitting a sound through the voice output module.
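The control flow above can be pictured as a single dispatch pass from the data acquisition module through the processor to the three output modules. The following is a minimal sketch; all class and method names are assumptions made for illustration, not taken from the patent:

```python
from dataclasses import dataclass, field

class MotionModule:
    def stop(self): print("motion: stop")
    def is_stuck(self) -> bool: return False   # True when motion is restricted

class ImageOutputModule:
    def show(self, icon: str): print(f"eye displays: {icon}")

class VoiceOutputModule:
    def play(self, sound: str): print(f"speaker: {sound}")

@dataclass
class SensorFrame:
    distance_cm: float                               # from the distance sensor
    touch_zones: list = field(default_factory=list)  # touch sensors that fired

def dispatch(frame, motion, eyes, voice):
    if frame.distance_cm < 20:         # obstacle directly ahead
        motion.stop()
    if "head" in frame.touch_zones:    # owner strokes the head
        eyes.show("heart")             # affectionate eye icon
        voice.play("bark")
    if motion.is_stuck():              # restricted motion -> help signal
        eyes.show("sad")
        voice.play("whine")

dispatch(SensorFrame(15.0, ["head"]), MotionModule(), ImageOutputModule(), VoiceOutputModule())
```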
Preferably, the data acquisition module comprises a distance sensor, an image sensor, a microphone and a touch sensor;
the distance sensor is arranged at the head of the pet robot, the probe is arranged forwards, and the distance sensor is used for measuring the distance of an object in front of the pet robot and sending the detected distance value to the processor;
the image sensor is arranged at the head of the pet robot, and the probe is arranged forwards and is used for acquiring image information in front of the pet robot in real time and sending the image information to the processor;
the microphone picks up human voice and the voice of the surrounding environment, and then sends a voice signal to the processor;
a plurality of touch sensors are provided, arranged respectively on the body, the head and the four limbs; when a person is detected touching the pet robot, the corresponding touch sensor sends a corresponding touch signal to the processor.
Preferably, the processor includes an image processing unit, a sound processing unit, a distance determination unit, and a touch determination unit,
the image processing unit analyzes the received image signals, analyzes whether a person exists and the action and expression of the person, converts the image signals into corresponding control instructions and transmits the control instructions to the motion module, the image output module and the voice output module respectively;
the sound processing unit analyzes the received sound signal, analyzes a sound instruction of a person, converts the sound instruction into a corresponding control instruction and transmits the control instruction to the motion module, the image output module and the voice output module;
the distance judgment unit receives the distance signal and, working with the image processing unit, analyzes the positions of obstacles in front of the pet robot and the position of the owner, converts them into corresponding instructions, and outputs the instructions to the motion module so that the robot avoids obstacles and can follow the owner's movement;
the touch judgment unit receives the touch signal, analyzes and judges the touch position, works with the image processing unit and the sound processing unit to analyze the emotion expressed by the owner, converts it into corresponding control instructions, and transmits them respectively to the motion module, the image output module and the voice output module.
Preferably, the touch sensor comprises a sensor body and an electrostatic detection circuit; the electrostatic detection circuit works with the sensor body to detect changes in electrostatic current on the surface of the pet robot and to judge whether a person is stroking the pet robot and where, then outputs a corresponding touch signal to the processor.
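As a rough illustration of this electrostatic detection idea (the patent does not give the algorithm), the sketch below flags a stroke when a zone's static-current reading jumps away from its running baseline; the threshold, window size and units are assumptions:

```python
from collections import deque

class StaticTouchDetector:
    def __init__(self, zone: str, threshold_ua: float = 0.5, window: int = 20):
        self.zone = zone
        self.threshold_ua = threshold_ua
        self.history = deque(maxlen=window)   # recent static-current samples (uA)

    def update(self, current_ua: float):
        """Return the zone name when a stroke is detected, else None."""
        baseline = sum(self.history) / len(self.history) if self.history else current_ua
        self.history.append(current_ua)
        if abs(current_ua - baseline) > self.threshold_ua:
            return self.zone                  # becomes the touch signal
        return None

detector = StaticTouchDetector("back")
for sample in (0.10, 0.10, 0.12, 1.40):       # sudden rise when stroked
    zone = detector.update(sample)
    if zone:
        print(f"touch detected on: {zone}")
```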
Preferably, the motion module includes a plurality of motors disposed respectively between the head and the body, between the limbs and the body, and at the joints of the limbs, capable of driving the head to swing up and down and/or left and right relative to the body, driving the limbs to move relative to the body, and driving the lower limb segments to move relative to the upper ones.
Preferably, the image output module includes two displays respectively located at left and right eye positions of the head, and the emotion of the pet robot is expressed by displaying the movement of the eyes and/or different images.
Preferably, the processor comprises a WiFi/Bluetooth communication unit that can connect wirelessly to a smart watch or smart band and acquire real-time vital-sign data from its wearer; the processor judges from these data whether the wearer is healthy. When the processor judges that the wearer's health is abnormal, it converts this into corresponding control instructions delivered respectively to the motion module, the image output module and the voice output module so as to communicate with the wearer, and sends a short message or e-mail to notify a preset contact.
Preferably, the processor further comprises an AI knowledge unit, an AI learning unit and a sensor signal recording unit; the sensor signal recording unit records the information from the data acquisition module and the actions taken by the pet robot in time sequence and provides them, together with the AI knowledge unit storing basic judgment information, to the AI learning unit as learning material, so that the pet robot can judge a person's instructions, obstacle positions and the wearer's health condition more accurately.
Preferably, the distance sensor, the image sensor, the microphone, the touch sensor and the motors are all connected with the processor through an I2C signal distribution circuit.
Preferably, the pet robot is further provided with a second I2C signal distribution circuit; the second I2C signal distribution circuit cooperates with the first I2C signal distribution circuit to output I2C control signals to a motor driving circuit that controls the operation of the motors.
Compared with the prior art, the emotion communication device of the pet robot has the following advantages:
1. The pet robot can pick up the owner's voice and also capture the owner's expressions and actions in real time through a wide-angle camera serving as the image sensor; from these it analyzes the owner's emotional expression and the corresponding instruction, then produces a matching emotional expression of its own. The owner can issue instructions by voice, gesture and facial expression rather than through simple conversation alone, realizing a mode of communication between pet robot and person with richer emotional expression.
2. Displays serve as the eyes. Besides simulating a pet's eye movements, the displays can express the pet robot's emotion through icons, enabling smooth eye-to-eye emotional communication between robot and person; the pet robot can also express emotion through whole-body actions involving the limbs and head.
3. The pet robot can use whimpering sounds to simulate a pet asking to be let out of its cage or taken outdoors, and when it cannot keep walking it can seek the owner's help through its eyes, sounds and actions, reflecting the emotional communication between pet robot and person more genuinely and strengthening the robot's soothing, therapeutic effect.
4. The pet robot can also monitor the owner's physical health, serving as a guardian-type pet robot.
Drawings
FIG. 1 is a diagram illustrating a person communicating with a robot in the prior art;
FIG. 2 is a schematic diagram illustrating emotional communication between the pet robot and the owner in the embodiment;
FIG. 3 is a schematic structural diagram of the pet robot in the present embodiment;
FIG. 4 is a block diagram of the emotion communication process of the pet robot in the present embodiment;
FIG. 5 is a flowchart illustrating the emotional communication process of the pet robot according to the embodiment;
fig. 6 is an image sample of the display of the eye position of the pet robot in this embodiment.
In the figures: 1. processor; 101. image processing unit; 102. sound processing unit; 103. distance judgment unit; 104. touch judgment unit; 105. WiFi/Bluetooth communication unit; 106. AI knowledge unit; 107. AI learning unit; 108. sensor signal recording unit; 2. I2C signal distribution circuit; 3. distance sensor; 4. wide-angle camera; 5. display; 6. microphone; 7. motor; 8. second I2C signal distribution circuit; 9. motor driving circuit; 10. electrostatic sensor circuit; 11. touch sensor; 12. speaker; 13. power supply; 14. data acquisition module; 15. wireless transceiver module.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The pet robot's appearance can imitate quadrupeds such as dogs, cats and pandas, making it easy to recognize.
This embodiment takes a dog as an example to illustrate the emotional communication between owner and pet robot, as shown in figs. 2-6.
The emotion communication device of the pet robot comprises a power supply 13 arranged in the pet robot, and a processor 1, a data acquisition module 14, a motion module, an image output module and a voice output module each connected with the power supply 13; the data acquisition module 14, the motion module, the image output module and the voice output module are each electrically connected with the processor 1. The processor 1 is arranged in the head.
The data acquisition module 14 can acquire the distance between the pet robot and the surrounding object, the image information in front of the pet robot, the voice and the environmental sound near the pet robot, and the touch signal on the surface of the pet robot, and transmit the acquired distance information, image information, voice information, and touch information to the processor 1.
The pet robot comprises a body, and four limbs and a head movably mounted on the body. The processor 1 receives the distance, image, sound and touch information sent by the data acquisition module 14, then controls the movement of the four limbs and head through the motion module, displays eye pictures through the image output module to simulate the emotional changes of a pet's eyes, and emits sound through the voice output module. When a voice command from the owner is recognized and confirmed as an instruction such as lying prone, standing, sitting, walking or turning around, the robot performs the commanded action while balancing its whole body by moving the limbs and head, and the display 5 shows the corresponding icon.
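A command table of this kind can be sketched as below; the routine and icon names are assumed for illustration:

```python
COMMAND_TABLE = {
    # recognized voice command: (motion routine, eye icon) -- all names assumed
    "lie down": ("pose_lie_prone", "smile"),
    "stand":    ("pose_stand",     "calm"),
    "sit":      ("pose_sit",       "smile"),
    "walk":     ("gait_walk",      "calm"),
    "turn":     ("gait_turn",      "calm"),
}

def execute_command(command: str):
    action, icon = COMMAND_TABLE.get(command, (None, "confused"))
    if action is None:                       # unknown command -> hesitant icon
        print("display shows 'confused' icon")
        return
    print(f"motion module runs {action}; display 5 shows '{icon}' icon")

execute_command("sit")
execute_command("roll over")
```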
When the motion of the motion module is restricted, the processor 1 can output a corresponding help signal to the image output module and/or the voice output module, displaying an eye picture through the image output module to simulate the help-seeking state of a pet's eyes, and/or emitting a sound through the voice output module. For example, when the pet robot cannot climb stairs, has fallen into a hollow and cannot get out, or has tipped onto its back and cannot walk normally, it can seek the owner's help with its eyes, voice and four feet.
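One simple way to detect such a restricted-motion state, sketched here with assumed thresholds since the patent leaves the detection logic unspecified, is to compare commanded and measured movement over several control cycles:

```python
class StallMonitor:
    def __init__(self, patience: int = 5):
        self.patience = patience
        self.stalled_cycles = 0

    def update(self, commanded_speed: float, measured_speed: float) -> bool:
        """Return True once motion has been restricted for `patience` cycles."""
        if commanded_speed > 0 and measured_speed < 0.1 * commanded_speed:
            self.stalled_cycles += 1
        else:
            self.stalled_cycles = 0
        return self.stalled_cycles >= self.patience

monitor = StallMonitor()
for _ in range(6):
    if monitor.update(commanded_speed=1.0, measured_speed=0.0):
        print("help state: show pleading eyes, whine through the speaker")
        break
```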
The data acquisition module 14 comprises a distance sensor 3, an image sensor, a microphone 6 and a touch sensor 11. The distance sensor 3 is an infrared distance sensor mounted on the head of the pet robot, preferably at the mouth and forehead, with its probe facing forward; it measures the distance to objects in front of the pet robot and sends the detected distance value to the processor 1.
The image sensor is a wide-angle camera 4 mounted on the head of the pet robot and connected with the processor 1 through a bus cable; the wide-angle camera 4 faces forward, acquires image information in front of the pet robot in real time, and sends it to the processor 1 for analysis.
The microphone 6 is provided at an ear position and connected to the processor 1 through a USB terminal; it picks up human voices and sounds of the surrounding environment, then transmits the sound signals to the processor 1.
The motion module comprises a plurality of motors 7 disposed respectively between the head and the body, between the limbs and the body, and at the joints of the limbs; they can drive the head to swing up and down and/or left and right relative to the body, drive the limbs to move relative to the body, and drive the lower limb segments to move relative to the upper ones. The motors 7 are servo motors.
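As an illustration only, with assumed joint names, angle limits and a print standing in for the real servo write, a head-nod routine over such servo motors might look like this:

```python
import time

JOINT_LIMITS = {"head_pitch": (-30, 30), "head_yaw": (-45, 45)}  # degrees, assumed

def set_servo(joint: str, angle: float):
    lo, hi = JOINT_LIMITS[joint]
    angle = max(lo, min(hi, angle))             # clamp to the mechanical range
    print(f"servo {joint} -> {angle:.0f} deg")  # real code would write to a motor 7

def head_nod(times: int = 2):
    for _ in range(times):
        set_servo("head_pitch", 20)             # swing the head down
        time.sleep(0.2)
        set_servo("head_pitch", -10)            # and back up
        time.sleep(0.2)

head_nod()
```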
The image output module comprises two displays 5, located respectively at the left and right eye positions of the head; the emotion of the pet robot is expressed by displaying eye movements and/or different images.
As shown in fig. 6, several images expressing the emotion of the pet robot and shown on the displays 5 are enumerated below (a lookup sketch follows the list).
F101 denotes the eye display image "sad".
F102 denotes the eye display image "anger".
F103 denotes the eye display image "calm", an ordinary, neutral state.
F104 denotes the eye display image "heart", an expression of the pet robot's affection for a person.
F105 denotes the eye display image "surprised".
F106 denotes the eye display image "blink", indicating that the pet robot is at a loss for a judgment.
F107 denotes the eye display image "sleep", indicating that the pet robot cannot operate normally.
F108 denotes the eye display image "smile", indicating that the pet robot feels happy.
F109 denotes the eye display image "confused", indicating a hesitant state in which no judgment can be made.
F110 denotes the eye display image "please touch me", an affectionate appeal such as asking to be touched or stroked.
F111 denotes the eye display image "please speak to me", indicating that the robot is waiting for the owner's voice instruction.
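The lookup sketch referenced above maps emotion-state names to these icon labels F101-F111; the state names and the rendering call are assumptions:

```python
EYE_ICONS = {
    "sad": "F101", "anger": "F102", "calm": "F103", "heart": "F104",
    "surprised": "F105", "blink": "F106", "sleep": "F107", "smile": "F108",
    "confused": "F109", "please_touch_me": "F110", "please_speak_to_me": "F111",
}

def show_emotion(state: str):
    icon = EYE_ICONS.get(state, "F103")   # fall back to the calm icon
    for eye in ("left", "right"):         # both displays 5 show the same icon
        print(f"{eye} display renders {icon}")

show_emotion("heart")
```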
The voice output module is a loudspeaker 12 arranged in the body, the loudspeaker 12 is connected with the processor 1 through a cable, and the processor 1 can control the loudspeaker 12 to make sound.
Correspondingly, the processor 1 includes an image processing unit 101, a sound processing unit 102, a distance judgment unit 103, a touch judgment unit 104, a WiFi/Bluetooth communication unit 105, an AI knowledge unit 106, an AI learning unit 107 and a sensor signal recording unit 108.
The image processing unit 101 analyzes the received image signal to determine whether a person is present and what the person's movements and expressions are, then converts the result into corresponding control commands for the motion module, the image output module and the voice output module. From the camera signals, the image processing unit 101 can grasp the owner's surroundings and extract the owner's position and facial information, which allows the robot not only to avoid obstacles and follow the owner's movement but also to judge the owner's physical condition.
The sound processing unit 102 analyzes the received sound signal, analyzes a sound command of a person, converts the sound command into a corresponding control command, and transmits the control command to the motion module, the image output module and the voice output module.
The distance judgment unit 103 receives the distance signal and, working with the image processing unit 101, analyzes the positions of obstacles in front of the pet robot and the position of the owner, converts them into corresponding instructions, and outputs the instructions to the motion module so that the robot avoids obstacles and follows the owner's movement.
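A toy sketch of this combination follows; the thresholds and the owner bearing (assumed to come from the image processing unit) are illustrative:

```python
def follow_step(distance_cm: float, owner_bearing_deg):
    """owner_bearing_deg: owner's direction from the image unit, or None if unseen."""
    if distance_cm < 25:                # obstacle ahead takes priority
        return "turn_away"
    if owner_bearing_deg is None:       # owner not in view
        return "stop"
    if abs(owner_bearing_deg) > 10:     # steer until the owner is centred
        return "turn_left" if owner_bearing_deg < 0 else "turn_right"
    return "walk_forward"               # owner centred and path clear

print(follow_step(80.0, -15.0))  # -> turn_left
print(follow_step(20.0, 0.0))    # -> turn_away
```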
The touch judgment unit 104 receives the touch signal, analyzes and judges the touch position, such as a touch on a front foot, a hind foot or the back, works with the image processing unit 101 and the sound processing unit 102 to analyze the emotion expressed by the owner, converts it into corresponding control commands, and transmits them to the motion module, the image output module and the voice output module.
The WiFi/Bluetooth communication unit 105 can connect wirelessly to a smart watch or smart band and acquire real-time vital-sign data from the wearer, for example pulse, blood pressure, blood oxygen concentration and step count; the processor 1 judges from these data whether the wearer is healthy. When the processor 1 judges that the wearer's health is abnormal, it converts this into corresponding control instructions delivered respectively to the motion module, the image output module and the voice output module, so as to communicate with and alert the wearer, and sends a short message or e-mail to notify a preset contact.
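A minimal sketch of such a health check, with assumed vital-sign thresholds and a stubbed notification call in place of the real SMS/e-mail path, is shown below:

```python
def vitals_abnormal(pulse_bpm: float, spo2_pct: float) -> bool:
    # assumed normal ranges; the patent names pulse, blood pressure,
    # oxygen concentration and step count as the monitored signs
    return not (50 <= pulse_bpm <= 120) or spo2_pct < 90

def notify_contact(message: str):
    print(f"notify preset contact: {message}")  # stand-in for SMS/e-mail

pulse, spo2 = 135, 88                    # readings pulled over WiFi/Bluetooth
if vitals_abnormal(pulse, spo2):
    notify_contact(f"abnormal vitals: pulse={pulse} bpm, SpO2={spo2}%")
```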
The processor 1 further comprises an AI knowledge unit 106, an AI learning unit 107 and a sensor signal recording unit 108. The sensor signal recording unit 108 records the information from the data acquisition module 14 and the actions taken by the pet robot in time sequence and provides them, together with the AI knowledge unit 106 storing basic judgment information, to the AI learning unit 107 as learning material, so that the pet robot can judge a person's instructions, obstacle positions and the wearer's health condition more accurately.
Specifically, the sensor signal recording unit 108 stores information such as judgments of the owner's voice commands, of obstacles in the surroundings, of the owner's emotion and of the owner's physical condition; the AI learning unit 107 uses this record together with the AI knowledge unit 106 for learning, so that the pet robot reflects the state of a pet more intelligently and realistically.
The AI learning unit 107 can continuously learn the behaviours the owner prefers, recognizing the name the owner has given the robot and the owner's daily routine. A wireless transceiver module 15 keeps the program in the processor 1 up to date and loads the database into the processor 1; the watch or band worn by the owner likewise connects through the wireless transceiver module 15 to the WiFi/Bluetooth communication unit 105 in the processor 1.
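The recording idea can be sketched as a time-ordered log that the AI learning unit later replays; the structures below are assumptions, not the patent's data format:

```python
import json
import time

class SensorSignalRecorder:
    def __init__(self):
        self.log = []                    # time-ordered (inputs, action) records

    def record(self, inputs: dict, action: str):
        self.log.append({"t": time.time(), "inputs": inputs, "action": action})

    def dump(self) -> str:
        return json.dumps(self.log)      # handed to the AI learning unit

recorder = SensorSignalRecorder()
recorder.record({"command": "sit", "touch": "head"}, action="pose_sit")
print(recorder.dump())
```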
The distance sensor 3, the image sensor, the microphone 6, the touch sensor 11 and the motors 7 are connected to the processor 1 through the I2C signal distribution circuit 2.
The pet robot is also provided internally with a second I2C signal distribution circuit 8; the second I2C signal distribution circuit 8 cooperates with the I2C signal distribution circuit 2 to output I2C control signals to the motor driving circuit 9 that controls the operation of the motors 7.
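For illustration, a position command could be sent to a motor driver over I2C with the Python smbus2 library as sketched below; the bus number, device address and register layout are assumptions, since the actual signal distribution circuits are hardware-specific:

```python
from smbus2 import SMBus

MOTOR_DRIVER_ADDR = 0x40   # assumed 7-bit I2C address of the driver board
REG_SERVO_BASE = 0x06      # assumed base register for servo channel 0

def set_servo_pulse(channel: int, pulse_us: int):
    """Write a pulse width (microseconds) for one servo channel."""
    reg = REG_SERVO_BASE + 4 * channel
    data = [pulse_us & 0xFF, (pulse_us >> 8) & 0xFF]   # little-endian 16 bits
    with SMBus(1) as bus:                              # I2C bus 1, typical on SBCs
        bus.write_i2c_block_data(MOTOR_DRIVER_ADDR, reg, data)

set_servo_pulse(channel=0, pulse_us=1500)  # centre position on channel 0
```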
The emotional communication between pet robot and owner is expressed through the joy and sorrow conveyed by conversation, caresses, facial expressions and actions; it is realized by the pet robot's eyes and equally by body actions, sounds and the like, so that the pet robot can bring people a calm mood and even serve as a substitute for human companionship.
The emotion communication method of the pet robot comprises the following steps:
1) detecting the action, expression and sound instruction information of the person through a data acquisition module 14;
2) analyzing the information detected in step 1) through the processor 1 and issuing corresponding control instructions to the motion module, the image output module and the voice output module;
3) the motion module controls the movements of the head and four limbs according to the control instructions of the processor 1; the image output module displays eye movements and/or different images to express the pet robot's emotion; and the voice output module outputs simulated animal sounds to express the pet robot's emotional state.
As shown in fig. 5, S100 represents an emotion expression flow of the pet robot, S101 corresponds to the input and analysis of the detection information of the data acquisition module 14 in step 1) and step 2), and S102-S107 correspond to the emotion expression processing of the pet robot in step 3).
In step 1), the data acquisition module 14 collects image information in front of the pet robot, human voices and environmental sounds near the pet robot, and touch signals on its surface, and sends the collected image, sound and touch information to the processor 1; the processor 1 analyzes and judges the person's emotion and then controls the pet robot to make the corresponding emotional expression.
In step 1) the data acquisition module 14 can also acquire the distance between the pet robot and surrounding objects; in step 2) the processor 1 analyzes the positions of obstacles in front of the pet robot and the owner's position from the image and distance information, converts them into corresponding instructions, and outputs them to the motion module so that the robot avoids obstacles and can follow the owner.
The image output module comprises two displays 5, located respectively at the left and right eye positions of the head; the emotion of the pet robot is expressed by displaying eye movements and/or different images.
When the motion of the motion module is restricted in step 3), the processor 1 can output a corresponding help signal to the image output module and/or the voice output module, displaying an eye picture through the image output module to simulate the help-seeking state of a pet's eyes, and/or emitting a help sound through the voice output module.
The processor 1 records the information from the data acquisition module 14 and the actions taken by the pet robot in time sequence through the sensor signal recording unit 108, and provides them, together with the AI knowledge unit 106 storing basic judgment information, as learning material to the AI learning unit 107 for the pet robot to learn.
The processor 1 computes and selects the emotional expressions the owner prefers from the owner-reaction cases recorded in the sensor signal recording unit 108, then conveys them through the motion module, the image output module and the voice output module.
The motion module can simulate the pet robot lying, sitting, standing, walking, running, sleeping and getting up; the image output module simulates feelings such as delight, joy, anger, sadness, dislike, hunger, satiety, heat and cold by displaying different images; and the voice output module can simulate barking, singing and groaning sounds.
The head and the body, the four limbs and the body and joints of the four limbs of the pet robot are connected through the motor 7, the head is driven to swing up and down and/or swing left and right relative to the body through the motor 7, and the four limbs are driven to move relative to the body and the lower limbs of the four limbs are driven to move relative to the upper limbs.
The cycle of detection, analysis of the detected information, and control of the pet robot's emotional expression is repeated continuously at regular intervals, for example every 100 ms.
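Such a fixed-interval cycle can be sketched as follows; the 100 ms period is from the text, everything else is illustrative:

```python
import time

CYCLE_S = 0.100   # 100 ms period, as in the text

def run(cycles: int = 3):
    for _ in range(cycles):
        start = time.monotonic()
        # 1) detect: read the data acquisition module
        # 2) analyze: image/sound/distance/touch processing units
        # 3) express: motion module, eye displays, speaker
        print("detect -> analyze -> express")
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, CYCLE_S - elapsed))  # hold a steady 100 ms rhythm

run()
```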
Although preferred embodiments of the present invention have been described in detail above, it should be understood that modifications and variations will occur to those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention falls within its scope of protection.
Claims (10)
1. An emotion communication device of a pet robot, characterized in that: it comprises a power supply module arranged in the pet robot, and a processor, a data acquisition module, a motion module, an image output module and a voice output module each connected with the power supply module; the data acquisition module, the motion module, the image output module and the voice output module are each electrically connected with the processor;
the data acquisition module can acquire the distance between the pet robot and surrounding objects, image information in front of the pet robot, human voice and environmental sound near the pet robot, and touch signals on the surface of the pet robot, and sends the acquired distance, image, sound and touch information to the processor;
the pet robot comprises a body, and four limbs and a head movably mounted on the body; the processor receives the distance, image, sound and touch information sent by the data acquisition module, then controls the movement of the four limbs and head of the pet robot through the motion module, displays an eye picture through the image output module to simulate the emotional changes of a pet's eyes, and simultaneously emits sound through the voice output module;
when the motion of the motion module of the pet robot is restricted, the processor can output a corresponding help signal to the image output module and/or the voice output module, displaying an eye picture through the image output module to simulate the help-seeking state of a pet's eyes, and/or emitting a sound through the voice output module.
2. The emotion communicator of claim 1, wherein: the data acquisition module comprises a distance sensor, an image sensor, a microphone and a touch sensor;
the distance sensor is arranged at the head of the pet robot, the probe is arranged forwards, and the distance sensor is used for measuring the distance of an object in front of the pet robot and sending the detected distance value to the processor;
the image sensor is arranged at the head of the pet robot, and the probe is arranged forwards and is used for acquiring image information in front of the pet robot in real time and sending the image information to the processor;
the microphone picks up human voice and the voice of the surrounding environment, and then sends a voice signal to the processor;
a plurality of touch sensors are provided, arranged respectively on the body, the head and the four limbs; when a person is detected touching the pet robot, the corresponding touch sensor sends a corresponding touch signal to the processor.
3. An emotion communicator of a pet robot as claimed in claim 1 or 2, wherein: the processor comprises an image processing unit, a sound processing unit, a distance judging unit and a touch judging unit,
the image processing unit analyzes the received image signals, analyzes whether a person exists and the action and expression of the person, converts the image signals into corresponding control instructions and transmits the control instructions to the motion module, the image output module and the voice output module respectively;
the sound processing unit analyzes the received sound signal, analyzes a sound instruction of a person, converts the sound instruction into a corresponding control instruction and transmits the control instruction to the motion module, the image output module and the voice output module;
the distance judgment unit receives the distance signal and, working with the image processing unit, analyzes the positions of obstacles in front of the pet robot and the position of the owner, converts them into corresponding instructions, and outputs the instructions to the motion module so that the robot avoids obstacles and can follow the owner's movement;
the touch judgment unit receives the touch signal, analyzes and judges the touch position, works with the image processing unit and the sound processing unit to analyze the emotion expressed by the owner, converts it into corresponding control instructions, and transmits them respectively to the motion module, the image output module and the voice output module.
4. The emotion communicator of claim 3, wherein: the touch sensor comprises a sensor body and an electrostatic detection circuit; the electrostatic detection circuit cooperates with the sensor body to detect changes in electrostatic current on the surface of the pet robot and to judge whether someone touches the pet robot and where, then outputs a corresponding touch signal to the processor.
5. The emotion communicator of claim 2, wherein: the motion module comprises a plurality of motors which are respectively arranged between the head and the body, between the four limbs and the body and at joints of the four limbs, can drive the head to swing up and down and/or swing left and right relative to the body, and drive the four limbs to move relative to the body and drive the lower limbs of the four limbs to move relative to the upper limbs.
6. The emotion communicator of claim 1, wherein: the image output module comprises two displays which are respectively positioned at the left eye position and the right eye position of the head, and the emotion of the pet robot is expressed by displaying the actions of the eyes and/or different images.
7. The emotion communicator of claim 3, wherein: the processor includes a WiFi/Bluetooth communication unit that can connect wirelessly to a smart watch or smart band and acquire real-time vital-sign data from its wearer; the processor judges from these data whether the wearer is healthy, and when the processor judges that the wearer's health is abnormal, it converts this into corresponding control instructions delivered respectively to the motion module, the image output module and the voice output module so as to communicate with the wearer, and sends a short message or e-mail to notify a preset contact.
8. The emotion communicator of claim 7, wherein: the processor further comprises an AI knowledge unit, an AI learning unit and a sensor signal recording unit, wherein the sensor signal recording unit records information from the data acquisition module and actions taken by the pet robot according to a time sequence, and the information and the actions taken by the pet robot are provided to the AI learning unit as learning materials together with the AI knowledge unit storing basic judgment information for the pet robot to learn, so that the pet robot can judge the instruction of the person, the position of the obstacle and the health condition of the wearer more accurately.
9. The emotion communicator of claim 5, wherein: the distance sensor, image sensor, microphone, touch sensor and motor are all connected to the processor through an I2C signal distribution circuit.
10. The emotion communicator of claim 9, wherein: the pet robot is also provided internally with a second I2C signal distribution circuit; the second I2C signal distribution circuit cooperates with the I2C signal distribution circuit to output I2C control signals to a motor driving circuit that controls the operation of the motors.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011466094.3A | 2020-12-14 | 2020-12-14 | Emotion communication device of pet robot
Publications (1)
Publication Number | Publication Date
---|---
CN112536808A | 2021-03-23

Family ID: 75018536
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130104076A (en) * | 2012-03-12 | 2013-09-25 | 최광호 | Method for breeding a pet robot using a mobile terminal |
CN202715243U (en) * | 2012-07-09 | 2013-02-06 | 桂林电子科技大学 | Walking mechanism of mechanical quadruped animal |
CN107962573A (en) * | 2016-10-20 | 2018-04-27 | 富泰华工业(深圳)有限公司 | Accompany humanoid robot and robot control method |
CN106671105A (en) * | 2017-01-17 | 2017-05-17 | 五邑大学 | Intelligent accompanying robot for old people |
CN107116563A (en) * | 2017-06-22 | 2017-09-01 | 国家康复辅具研究中心 | Pet type robot and robot control system |
CN109719738A (en) * | 2017-10-30 | 2019-05-07 | 索尼公司 | Information processing unit, information processing method and program |
US20200094398A1 (en) * | 2018-09-20 | 2020-03-26 | Sony Corporation | Situation-aware robot |
CN109434827A (en) * | 2018-09-30 | 2019-03-08 | 深圳市旭展通达科技有限公司 | Accompany robot control method, system, mobile terminal and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210323