CN113878595B - Raspberry Pi-based humanoid physical robot system - Google Patents

Raspberry Pi-based humanoid physical robot system

Info

Publication number
CN113878595B
CN113878595B (application CN202111257216.2A; earlier publication CN113878595A)
Authority
CN
China
Prior art keywords
robot
data
cloud
unit
voice
Prior art date
Legal status
Active
Application number
CN202111257216.2A
Other languages
Chinese (zh)
Other versions
CN113878595A (en)
Inventor
李龙 (Li Long)
王磊 (Wang Lei)
Current Assignee
Shanghai Qingbao Engine Robot Co., Ltd.
Original Assignee
Shanghai Qingbao Engine Robot Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Qingbao Engine Robot Co., Ltd.
Priority to CN202111257216.2A
Publication of CN113878595A
Application granted
Publication of CN113878595B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J 9/0006: Programme-controlled manipulators; exoskeletons, i.e. resembling a human figure
    • B25J 9/161: Programme controls characterised by the control system; hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators; motion, path, trajectory planning

Abstract

The invention relates to a Raspberry Pi-based humanoid physical robot system, comprising: a voice recognition unit for acquiring voice data and recognizing it to obtain a corresponding voice recognition result; an image recognition unit for acquiring image data and recognizing it to obtain a corresponding image recognition result; a deep learning unit for obtaining a deep learning feedback result from the voice recognition result and/or the image recognition result; an AI twin unit for acquiring video data of human motion and processing it to obtain a corresponding motion parameter result; and a processing unit for forming a corresponding feedback instruction from the deep learning feedback result, forming a corresponding motion instruction from the motion parameter result, and controlling the physical robot to execute both. The system fills the system-design gap in the field of Raspberry Pi-based humanoid physical robots.

Description

Raspberry Pi-based humanoid physical robot system
Technical Field
The invention relates to the technical field of robots, and in particular to a Raspberry Pi-based humanoid physical robot system.
Background
A humanoid robot is a robot designed and manufactured to simulate the shape and behavior of a human; it generally has human-like limbs and/or a head. Robots are designed in different shapes according to the application, such as the mechanical arms, wheelchair robots and walking robots used in industry. Humanoid robot research draws on many disciplines, including mechanics, electronics, computing, materials, sensing and control, and reflects a country's level of high-tech development. Judging from the current state of robotics and artificial-intelligence research, highly intelligent and highly flexible humanoid robots still have a long way to go, and humanity's incomplete understanding of them limits their development.
Because a humanoid robot has a human appearance, it can adapt to human living and working environments, complete various operations in place of humans, and extend human capabilities in many respects; it is widely applicable to fields such as service, medical care, education and entertainment.
The Raspberry Pi is a credit-card-sized microcomputer originally designed for computer programming education. It is small but provides the basic functions of a full PC, so a robot system built on the Raspberry Pi can be correspondingly small and light; however, no Raspberry Pi-based humanoid physical robot system currently exists on the market. A Raspberry Pi-based humanoid robot system is therefore needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a Raspberry Pi-based humanoid physical robot system, addressing the current absence of such a system and filling the technical gap.
The technical scheme for achieving this purpose is as follows:
The invention provides a Raspberry Pi-based humanoid physical robot system, comprising:
a voice recognition unit for acquiring voice data and recognizing the acquired voice data to obtain a corresponding voice recognition result;
an image recognition unit for acquiring image data and recognizing the acquired image data to obtain a corresponding image recognition result;
a deep learning unit, connected with the voice recognition unit and the image recognition unit, for obtaining a deep learning feedback result from the voice recognition result obtained by the voice recognition unit and/or the image recognition result obtained by the image recognition unit;
an AI twin unit for acquiring video data of human motion and processing it to obtain a corresponding motion parameter result; and
a processing unit, connected with the voice recognition unit, the image recognition unit, the deep learning unit and the AI twin unit, and also in control connection with the physical robot, for forming a corresponding feedback instruction from the deep learning feedback result of the deep learning unit and controlling the physical robot to execute it, and for forming a corresponding motion instruction from the motion parameter result of the AI twin unit and controlling the physical robot to execute it.
The system of the invention is designed on the Raspberry Pi and can control a humanoid physical robot so that it performs actions like a real person; it is suitable for fields such as service, medical care, education and entertainment. The system is further provided with an AI twin unit, so the physical robot can imitate human behavior and keep its actions synchronized with those of a real person. Being built on the Raspberry Pi, the system fills the system-design gap in the field of Raspberry Pi humanoid robots.
In a further refinement of the Raspberry Pi-based humanoid physical robot system, the AI twin unit comprises a voice twin module for collecting remote real-person speech in real time to form remote speech data and processing that data into a voice twin result;
the processing unit is further configured to form a corresponding speaking instruction from the voice twin result and control the physical robot to execute it.
In a further refinement, the system further comprises an emotion interaction unit connected with the voice recognition unit, the image recognition unit and the processing unit;
the emotion interaction unit receives the voice data acquired by the voice recognition unit and the image data acquired by the image recognition unit and performs emotion analysis on them to obtain an emotion analysis result;
the processing unit forms a corresponding emotion instruction from the emotion analysis result and controls the physical robot to execute it.
In a further refinement, the system further comprises a cloud brain system deployed in the cloud;
the cloud brain system comprises a voice recognition cloud model, an image recognition cloud model, a deep learning cloud model and a virtual twin cloud model;
the processing unit is in communication connection with the cloud brain system; it sends the voice data acquired by the voice recognition unit, the image data acquired by the image recognition unit and the human motion video data acquired by the AI twin unit to the cloud brain system, and receives the corresponding recognition results returned by the cloud brain system.
In a further refinement, the cloud brain system further comprises an emotion recognition cloud model;
the cloud brain system inputs the received voice data and image data into the emotion recognition cloud model, obtains the emotion analysis result it outputs, and feeds the result back to the processing unit.
In a further refinement, the cloud brain system is also configured to retrain the voice recognition cloud model on the received voice data.
In a further refinement, the cloud brain system is also configured to retrain the image recognition cloud model on the received image data.
In a further refinement, the cloud brain system is also configured to retrain the deep learning cloud model on the received voice data and image data.
In a further refinement, the cloud brain system is also configured to retrain the virtual twin cloud model on the received human motion video data.
In a further refinement, the system further comprises a robot action library and a robot expression library;
the robot action library stores the robot's action names and action instructions;
the robot expression library stores the robot's expression names and expression instructions;
the processing unit is connected with both libraries, looks up the corresponding action and expression instructions according to the deep learning feedback result, and controls the physical robot to execute them.
Drawings
FIG. 1 is a system diagram of the Raspberry Pi-based humanoid physical robot system of the present invention.
FIG. 2 is an architecture diagram of the Raspberry Pi-based humanoid physical robot system of the present invention.
FIG. 3 is a flowchart of physical twinning performed by the physical robot in the system of the present invention.
FIG. 4 is a flowchart of 3D virtual-human twinning in the system of the present invention.
FIG. 5 is a flowchart of voice twinning in the system of the present invention.
FIG. 6 is a flowchart of emotion interaction in the system of the present invention.
FIG. 7 is a flowchart of speech recognition in the cloud brain system of the present invention.
FIG. 8 is a flowchart of image recognition in the cloud brain system of the present invention.
FIG. 9 is a flowchart of deep learning in the cloud brain system of the present invention.
FIG. 10 is a flowchart of virtual twinning in the cloud brain system of the present invention.
FIG. 11 is a flowchart of emotion recognition in the cloud brain system of the present invention.
FIG. 12 is a schematic diagram of the hardware components of a physical robot used with the system of the present invention.
Detailed Description
The invention is further described below with reference to the figures and specific embodiments.
Referring to fig. 1, the invention provides a Raspberry Pi-based humanoid physical robot system to address the absence of such a system in the prior art and to fill the system-design gap in the field of Raspberry Pi humanoid robots. The system is described below with reference to the accompanying drawings, beginning with the system diagram of fig. 1.
As shown in fig. 1, the Raspberry Pi-based humanoid physical robot system of the present invention includes a voice recognition unit 21, an image recognition unit 22, a deep learning unit 23, an AI twin unit 24 and a processing unit 25. The voice recognition unit 21 and the image recognition unit 22 are both connected to the deep learning unit 23; the processing unit 25 is connected to the voice recognition unit 21, the image recognition unit 22, the deep learning unit 23 and the AI twin unit 24, and is also in control connection with the physical robot to control its actions and behaviors.
The voice recognition unit 21 acquires voice data and recognizes it to obtain a corresponding voice recognition result. Preferably, the voice recognition unit 21 is connected to a microphone mounted on the physical robot; the microphone records the sound around the robot in real time and sends the acquired voice data to the voice recognition unit 21. The voice recognition unit 21 enables the physical robot to interact by voice.
The image recognition unit 22 acquires image data and recognizes it to obtain a corresponding image recognition result. Preferably, the image recognition unit 22 is connected to a camera arranged at the eyes of the physical robot; the camera captures in real time what the robot sees, forms image data, and sends it to the image recognition unit 22. The image recognition unit 22 lets the physical robot see surrounding objects and react to them accordingly.
The deep learning unit 23 obtains a deep learning feedback result from the speech recognition result and/or the image recognition result. It maintains a deep-learning model: feeding it a voice recognition result yields a voice feedback result, and feeding it an image recognition result yields an image feedback result, so the robot can respond to what it hears and sees.
The AI twin unit 24 obtains video data of human motion and processes it to obtain a corresponding motion parameter result, enabling real-time action imitation based on the human motion video.
The processing unit 25 forms a corresponding feedback instruction from the deep learning feedback result of the deep learning unit 23 and controls the physical robot to execute it, and likewise forms a corresponding motion instruction from the motion parameter result of the AI twin unit 24 and controls the physical robot to execute it.
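For illustration only, the dispatching role of the processing unit can be sketched in Python as follows; the class and method names are assumptions made for this sketch, not part of the invention:

    from dataclasses import dataclass

    @dataclass
    class Instruction:
        kind: str      # "feedback" or "motion"
        payload: dict  # e.g. reply text and action name, or joint angles

    class ProcessingUnit:
        """Translates unit results into instructions and hands them to the robot."""
        def __init__(self, robot):
            self.robot = robot

        def on_deep_learning_result(self, feedback: dict):
            # Deep learning feedback result -> feedback instruction.
            self.robot.execute(Instruction("feedback", feedback))

        def on_motion_parameters(self, params: dict):
            # Motion parameter result from the AI twin unit -> motion instruction.
            self.robot.execute(Instruction("motion", params))

    class PhysicalRobotStub:
        def execute(self, ins: Instruction):
            print(f"executing {ins.kind} instruction: {ins.payload}")

    pu = ProcessingUnit(PhysicalRobotStub())
    pu.on_deep_learning_result({"say": "hello", "action": "wave"})
    pu.on_motion_parameters({"left_elbow": 42.0, "right_elbow": 38.5})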
In an embodiment of the present invention, as shown in fig. 2, the system architecture is divided into five layers: a hardware entity layer, an OS layer, a middleware layer, an application layer and the cloud brain system. The hardware entity layer comprises the hardware of the physical robot, which with reference to fig. 12 includes the brain, head, waist, legs, feet, hands and arms; these are driven by a hardware board that provides precise motion control of each part. The OS layer runs the Raspberry Pi operating system. The middleware layer comprises the cloud brain middleware, the action library, the expression library and the voice recognition library. The application layer comprises the voice interaction system, image recognition system, deep learning system, AI twin system and emotion interaction system. The cloud brain system comprises the voice recognition model system, emotion model system, virtual twin model system and deep learning model system.
The voice recognition unit 21, image recognition unit 22, deep learning unit 23, AI twin unit 24 and processing unit 25 of the system reside in the application layer of this architecture, much like application programs installed on the operating system.
In a specific embodiment of the present invention, the AI twin unit 24 includes a voice twin module that collects remote real-person speech in real time to form remote speech data and processes that data into a voice twin result;
the processing unit 25 is further configured to form a corresponding speaking instruction from the voice twin result and control the physical robot to execute it.
Specifically, referring to fig. 5, the AI twin unit implements the robot's voice twin function: the physical robot outputs speech in real time, with matching lip movements, from the live voice of a remote real person. First, when the remote person speaks, external equipment captures the speech and applies simple audio processing such as filtering and noise removal. The processed data is transmitted to the robot's cloud brain system, which can apply audio beautification, voice changing, 3D surround and background-sound processing. The cloud brain then transmits the result to the corresponding physical robot; the robot's application layer receives the data, performs voice-data and lip-correspondence processing on it according to the current robot hardware, and finally plays the audio through a loudspeaker while the mouth performs the corresponding lip movements.
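For illustration, the stages of this voice twin pipeline can be mocked in plain Python; the moving-average filter and energy-based lip mapping below are simplified stand-ins for the processing described above, and every name is hypothetical:

    def simple_filter(samples, window=3):
        # Crude moving-average filter standing in for noise removal.
        out = []
        for i in range(len(samples)):
            lo = max(0, i - window + 1)
            out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
        return out

    def lip_openings(samples, frame_len=160):
        # Map per-frame audio energy to a mouth-opening degree in 0..1.
        frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
        energies = [sum(abs(s) for s in f) / len(f) for f in frames]
        peak = max(energies) or 1.0
        return [e / peak for e in energies]

    def voice_twin_step(raw_samples, cloud, speaker, mouth):
        filtered = simple_filter(raw_samples)   # local pre-processing
        processed = cloud.beautify(filtered)    # cloud-side audio effects
        for opening in lip_openings(processed):
            mouth.set_opening(opening)          # drive the lip servos
        speaker.play(processed)                 # play the processed audio

    class Stub:  # placeholder for cloud, speaker and mouth hardware
        def beautify(self, s): return s
        def play(self, s): print(f"playing {len(s)} samples")
        def set_opening(self, o): pass

    voice_twin_step([0.0, 0.4, -0.2, 0.8, 0.1], Stub(), Stub(), Stub())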
Further, as shown in figs. 3 and 4, the AI twin unit can realize both real-person physical twinning and 3D-avatar virtual twinning. Real-person physical twinning means the physical robot imitates the real-time motion of a real person: when the person moves, an external image-acquisition device captures the motion and skeletonizes the images; the processed data is transmitted to the robot's cloud brain system, which filters the data so that the robot's motion is smoother; the cloud brain transmits the result to the corresponding physical robot; and the robot's application layer receives the data, processes it further according to the current robot hardware, and finally distributes it to each part of the robot so that the robot's motion is synchronized with the person's. 3D-avatar virtual twinning means the physical robot imitates the real-time motion of a 3D virtual human in the same way: when the virtual human moves, the processed data is transmitted to the cloud brain system, filtered for smoothness, sent to the corresponding physical robot, processed by the application layer according to the robot hardware, and distributed to each part of the robot so that its motion is synchronized with the virtual human's.
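The patent text does not fix a particular filtering algorithm; one plausible reading, shown in this sketch, is a smoothing filter over the per-joint angles recovered from the skeletonized video, followed by distribution of the smoothed angles to the robot's parts (all names and the exponential filter are assumptions):

    class MotionSmoother:
        """Exponentially smooths per-joint angles so imitated motion is less jerky."""
        def __init__(self, alpha=0.3):
            self.alpha = alpha   # lower alpha -> smoother but laggier motion
            self.state = {}

        def filter(self, joint_angles: dict) -> dict:
            for joint, angle in joint_angles.items():
                prev = self.state.get(joint, angle)
                self.state[joint] = prev + self.alpha * (angle - prev)
            return dict(self.state)

    def distribute(angles: dict, parts: dict):
        # Route each joint angle to the hardware part that owns the joint.
        for joint, angle in angles.items():
            part = joint.split(".")[0]   # e.g. "arm.left_elbow" -> "arm"
            parts[part].append((joint, round(angle, 2)))

    smoother, parts = MotionSmoother(), {"arm": [], "leg": []}
    for frame in ({"arm.left_elbow": 90.0, "leg.left_knee": 10.0},
                  {"arm.left_elbow": 60.0, "leg.left_knee": 25.0}):
        distribute(smoother.filter(frame), parts)
    print(parts)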
In a specific embodiment of the present invention, the system further comprises an emotion interaction unit connected to the voice recognition unit, the image recognition unit and the processing unit;
the emotion interaction unit receives the voice data acquired by the voice recognition unit and the image data acquired by the image recognition unit and performs emotion analysis on them to obtain an emotion analysis result;
the processing unit forms a corresponding emotion instruction from the emotion analysis result and controls the physical robot to execute it.
Through the emotion interaction unit, the robot can output the tone, voice, content, expression and action appropriate to the analysis result.
In a specific embodiment of the present invention, the system further comprises a cloud brain system deployed in the cloud;
the cloud brain system comprises a voice recognition cloud model, an image recognition cloud model, a deep learning cloud model and a virtual twin cloud model;
the processing unit is in communication connection with the cloud brain system and is used for sending the voice data acquired by the voice recognition unit to the cloud brain system, sending the image data acquired by the image recognition unit to the cloud brain system, sending the motion video data of the person acquired by the AI twin unit to the cloud brain system and receiving the recognition result of the corresponding data sent by the cloud brain system.
The cloud brain system is a one-to-many system: it can receive the data and requests of all offline physical robots and feed results back to them, and it offers far greater computing, analytical reasoning and AI deep-learning capability than the robots themselves.
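The one-to-many character of the cloud brain can be pictured as a single dispatcher that routes each offline robot's request to the matching cloud model; this sketch is an assumption about structure, not the actual cloud brain implementation:

    class CloudBrain:
        """One cloud instance serving many offline physical robots."""
        def __init__(self):
            self.models = {}  # request kind -> cloud model callable

        def register(self, kind, model):
            self.models[kind] = model

        def handle(self, robot_id, kind, data):
            result = self.models[kind](data)  # run the matching cloud model
            return {"robot": robot_id, "kind": kind, "result": result}

    brain = CloudBrain()
    brain.register("speech", lambda audio: f"transcript of {len(audio)} samples")
    brain.register("image", lambda image: "person detected")
    print(brain.handle("robot-007", "speech", [0.1, 0.2, 0.3]))
    print(brain.handle("robot-042", "image", b"jpeg-bytes"))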
Further, the cloud brain system also comprises an emotion recognition cloud model;
the cloud brain system inputs the received voice data and image data into the emotion recognition cloud model, obtains the emotion analysis result it outputs, and feeds the result back to the processing unit.
Preferably, as shown in fig. 6, the emotion interaction unit performs emotion analysis via the emotion recognition cloud model in the cloud brain system: after voice data and image data are received, they are transmitted to the emotion recognition cloud model for analysis, the cloud brain system returns the result to the processing unit, and the processing unit outputs the corresponding tone, voice, content, expression and action.
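That round trip can be traced with the sketch below: voice and image data go to an emotion model, and the analysis result is mapped to tone, expression and action; the mapping table and all names are invented for illustration:

    def emotion_interaction(voice_data, image_data, emotion_model, robot):
        # 1. Both modalities are analyzed by the emotion recognition cloud model.
        emotion = emotion_model(voice_data, image_data)  # e.g. "happy"
        # 2. The analysis result is mapped to concrete robot outputs.
        outputs = {
            "happy": {"tone": "bright", "expression": "smile", "action": "wave"},
            "sad": {"tone": "soft", "expression": "concern", "action": "nod"},
        }.get(emotion, {"tone": "neutral", "expression": "idle", "action": "none"})
        # 3. The processing unit turns the mapping into an emotion instruction.
        robot.execute(outputs)

    class RobotStub:
        def execute(self, outputs):
            print("emotion instruction:", outputs)

    emotion_interaction(b"pcm-audio", b"jpeg-frame", lambda v, i: "happy", RobotStub())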
In a preferred embodiment, the speech recognition unit can recognize speech both offline and online: offline recognition uses a speech recognition model pre-stored in the physical robot, while online recognition uses the speech recognition cloud model in the cloud brain system. As shown in fig. 7, after the cloud brain system receives voice data from a physical robot, if the current task is not a scheduled training task it performs speech recognition online and transmits the recognition result back to the corresponding robot.
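A minimal sketch of the offline/online split follows; the fall-back to the local model on a connection error is an added assumption, since the text only distinguishes the two modes:

    def recognize_speech(audio, local_model, cloud, online):
        """Prefer the cloud model when online; otherwise use the on-device model."""
        if not online:
            return local_model(audio)      # offline: pre-stored local model
        try:
            return cloud.recognize(audio)  # online: speech recognition cloud model
        except ConnectionError:
            return local_model(audio)      # assumed graceful degradation

    class CloudStub:
        def recognize(self, audio):
            return "cloud transcript"

    print(recognize_speech(b"...", lambda a: "local transcript", CloudStub(), online=True))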
Further, the cloud brain system also retrains the speech recognition cloud model on the received voice data. The cloud brain system stores voice data as it is received; when a scheduled training task is due, it runs a timed, centralized training pass of the speech recognition model on the newly stored data and outputs the model's latest weight parameters. Training stops if the result reaches the standard, and otherwise continues until it does.
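The train-until-standard loop can be sketched as follows; the accuracy metric, the 0.95 standard and the round cap are illustrative assumptions:

    def timed_retraining(model, stored_data, evaluate, standard=0.95, max_rounds=100):
        """Run timed, centralized training until the metric reaches the standard."""
        weights = model.export_weights()
        for _ in range(max_rounds):               # cap added here as a safety valve
            model.train(stored_data)              # one centralized training pass
            weights = model.export_weights()      # output the latest weight parameters
            if evaluate(model) >= standard:       # stop once the result reaches the standard
                break
        return weights

    class SpeechModelStub:
        def __init__(self): self.accuracy = 0.80
        def train(self, data): self.accuracy = min(1.0, self.accuracy + 0.05)
        def export_weights(self): return {"accuracy": round(self.accuracy, 2)}

    print(timed_retraining(SpeechModelStub(), ["new voice sample"], lambda m: m.accuracy))

The same pattern applies to the image recognition, deep learning and emotion cloud models described below.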
In another preferred embodiment, the image recognition unit can recognize images both offline and online: offline recognition uses the recognition model inside the physical robot, while online recognition uses the image recognition cloud model in the cloud brain system. Referring to fig. 8, the cloud brain system receives new image data from all offline robots; if the current task is not a scheduled training task it performs image recognition online and transmits the recognition result back to the corresponding physical robot.
Further, the cloud brain system also retrains the image recognition cloud model on the received image data. The cloud brain system stores image data as it is received; when a scheduled training task is due, it runs a timed, centralized training pass of the image recognition model on the stored new data and outputs the model's latest weight parameters, stopping when the result reaches the standard and otherwise continuing until it does.
In another preferred embodiment, the deep learning unit can learn both offline and online: offline learning uses the deep learning model inside the physical robot, while online learning uses the deep learning cloud model in the cloud brain system. Referring to fig. 9, the cloud brain system receives new audio and image data from all offline robots; if the current task is not a scheduled training task it performs deep learning inference online and transmits the result back to the corresponding physical robot.
Further, the cloud brain system also retrains the deep learning cloud model on the received voice and image data, following the same pattern: stored new data is used for timed, centralized training, the latest weight parameters are output, and training stops once the result reaches the standard.
In yet another preferred embodiment, as shown in fig. 10, the AI twin unit performs action imitation by means of the virtual twin cloud model of the cloud brain system.
Further, the cloud brain system also retrains the virtual twin cloud model on the received human motion video data. It receives new data from the offline real person or the 3D virtual human, stores it, applies data filtering, and outputs the new data to a data storage area for the other models' timed, centralized training.
In another preferred embodiment, as shown in fig. 11, emotion analysis is implemented with the emotion recognition cloud model of the cloud brain system. The cloud brain system receives new audio and image data from all offline robots; if the current task is not a scheduled training task it performs emotion recognition online and transmits the result back to the corresponding physical robot. If a scheduled training task is due, the new data is used for timed, centralized training of the emotion model, the latest weight parameters are output, and training stops once the result reaches the standard and otherwise continues until it does.
In a specific embodiment of the present invention, the system further comprises a robot action library and a robot expression library. The action library stores the robot's action names and action instructions; the expression library stores its expression names and expression instructions. The processing unit is connected with both libraries, looks up the corresponding action and expression instructions according to the deep learning feedback result, and controls the physical robot to execute them.
The cloud brain middleware of the middleware layer serves two flows: the robot application layer's requests to the cloud brain, and the robot's receipt of cloud brain pushes. When the application layer requests information from the cloud brain, the middleware packages the request protocol and uploads it; when the cloud brain returns processing results or actively pushes information, the robot receives it, the middleware parses it, and the parsed information is handed to the application layer for processing.
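The two middleware flows can be sketched as a simple JSON protocol: packaging an upstream request, and parsing a downstream push before handing it to the application layer; the field names are assumptions, as the patent does not disclose the wire format:

    import json

    def pack_request(robot_id, kind, payload):
        # Robot -> cloud: the middleware packages the request protocol for upload.
        return json.dumps({"robot": robot_id, "kind": kind, "payload": payload})

    def on_cloud_push(message, application_layer):
        # Cloud -> robot: the middleware parses pushed info for the app layer.
        info = json.loads(message)
        application_layer(info["kind"], info["result"])

    print(pack_request("robot-007", "speech", {"samples": 160}))
    on_cloud_push('{"kind": "speech", "result": "hello"}',
                  lambda kind, result: print(f"app layer got {kind}: {result}"))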
When the robot application layer issues an action request, the action library classifies the action and selects the action algorithm, then outputs the result to the OS hardware abstraction layer for processing.
When the robot application layer outputs voice or image information, the expression library reasons over the information and selects the corresponding algorithm, then outputs the result to the OS hardware abstraction layer for processing.
When the robot application layer issues a speech recognition request, the voice recognition library judges whether local recognition is possible; if so, it recognizes offline and outputs the result, and if not, it uploads the source voice to the cloud brain, which performs the recognition and transmits the result back to the corresponding robot.
The OS layer adopts Raspberry Pi OS, a Debian-based Linux system specially adapted to Raspberry Pi hardware; it provides process scheduling, storage management, CPU and process management, the file system, device management and drivers, network communication, system initialization (boot), system calls, and so on.
The robot is controlled as follows: after external voice is input, the robot brain receives and analyzes it in the application layer. If it can be recognized and processed locally, the local middleware parses the data; if cloud brain processing is needed, the data is transmitted to the cloud brain and the processing result is transmitted back to the offline physical robot. The robot's middleware then parses the data and passes it down to the OS-layer hardware abstraction module, which finally drives the hardware motion of each part of the robot.
The head of the physical robot includes eyebrows, eyelids, eyeballs, mouth, tongue, cheeks and chin.
Each part of the head is driven by a micro steering servo, and rich facial expressions are produced by adjusting each servo's operating angle and speed. After external voice or image information is input, it is processed through the same local-or-cloud pipeline described above, and the result finally drives the hardware motion of each part of the robot's head.
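Per-servo angle-and-speed control of the kind described here can be pictured with the hardware-agnostic sketch below; on an actual Raspberry Pi the move_to loop would update a PWM output, and all names, angles and speeds are invented:

    import time

    class MicroServo:
        """Minimal servo model: step toward a target angle at a given speed."""
        def __init__(self, name, angle=90.0):
            self.name, self.angle = name, angle

        def move_to(self, target, speed_deg_per_s=60.0, step_s=0.02):
            step = speed_deg_per_s * step_s
            while abs(target - self.angle) > step:
                self.angle += step if target > self.angle else -step
                time.sleep(step_s)  # a real driver would write the PWM duty cycle here
            self.angle = target

    def show_expression(servos, expression):
        # An expression is just per-servo (target angle, speed) pairs; a real
        # controller would drive the servos concurrently rather than in sequence.
        for name, (angle, speed) in expression.items():
            servos[name].move_to(angle, speed)

    servos = {n: MicroServo(n) for n in ("eyebrow_l", "eyelid_l", "mouth", "jaw")}
    smile = {"mouth": (120.0, 90.0), "jaw": (100.0, 60.0), "eyebrow_l": (80.0, 45.0)}
    show_expression(servos, smile)
    print({n: round(s.angle, 1) for n, s in servos.items()})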
The waist of the physical robot contains two high-torque motors that twist the waist left and right and lean the torso forward and backward; by adjusting the two motors' angles and speed limits, the robot can perform more human-like body movements. Input voice or image information is processed through the same local-or-cloud pipeline, and the result drives the hardware motion of the waist.
The arms of the physical robot comprise shoulder, upper-arm and lower-arm sections, each driven by its own high-torque motor. Through coordinated adjustment of the shoulder, upper-arm and lower-arm motors, the robot's arms can move as flexibly as a real person's. Input voice or image information is processed through the same local-or-cloud pipeline, and the result drives the hardware motion of each part of the arms.
The hand of the physical robot is divided into wrist, palm and fingers, each driven by its own micro steering servo. Coordinated adjustment of the wrist, palm and finger servos gives the hand flexible movement, and it can flexibly grasp and carry objects of up to 1 kg. Input voice or image information is processed through the same local-or-cloud pipeline, and the result drives the hardware motion of each part of the hand.
The legs of the physical robot are divided into thighs and shanks, each driven by a high-torque steering servo. Coordinated adjustment of the thigh and shank servos lets the legs perform actions such as squatting. Input voice or image information is processed through the same local-or-cloud pipeline, and the result drives the hardware motion of each part of the legs.
The feet of the physical robot move over flat ground by wheeled sliding and support functions such as fixed-point cruising, timed cruising and path planning. Input voice or image information is processed through the same local-or-cloud pipeline, and the result drives the hardware motion of the feet.
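Fixed-point cruising over planned waypoints can be pictured with the toy kinematics below; the speed, time step and waypoints are invented for illustration, and a real base would close the loop with odometry or other sensing:

    import math

    def fixed_point_cruise(start, waypoints, speed=0.5, dt=0.5):
        """Drive the wheeled base through planned waypoints (x, y in meters)."""
        x, y = start
        for wx, wy in waypoints:
            while math.hypot(wx - x, wy - y) > speed * dt:
                heading = math.atan2(wy - y, wx - x)  # steer toward the waypoint
                x += speed * dt * math.cos(heading)
                y += speed * dt * math.sin(heading)
            x, y = wx, wy  # close enough: snap onto the waypoint
            print(f"reached waypoint ({wx:.1f}, {wy:.1f})")

    fixed_point_cruise((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])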
While the present invention has been described in detail with reference to the embodiments shown in the accompanying drawings, it will be apparent to those skilled in the art that various changes and modifications can be made. Details of the embodiments should therefore not be construed as limiting the invention, whose scope is defined by the appended claims.

Claims (9)

1. A Raspberry Pi-based humanoid physical robot system, characterized by comprising:
a voice recognition unit for acquiring voice data and recognizing the acquired voice data to obtain a corresponding voice recognition result;
an image recognition unit for acquiring image data and recognizing the acquired image data to obtain a corresponding image recognition result;
a deep learning unit, connected with the voice recognition unit and the image recognition unit, for obtaining a deep learning feedback result from the voice recognition result obtained by the voice recognition unit and/or the image recognition result obtained by the image recognition unit;
an AI twin unit for acquiring video data of human motion and processing it to obtain a corresponding motion parameter result; and
a processing unit, connected with the voice recognition unit, the image recognition unit, the deep learning unit and the AI twin unit, and also in control connection with the physical robot, for forming a corresponding feedback instruction from the deep learning feedback result of the deep learning unit and controlling the physical robot to execute it, and for forming a corresponding motion instruction from the motion parameter result of the AI twin unit and controlling the physical robot to execute it;
wherein the AI twin unit can realize both real-person physical twinning and 3D-avatar virtual twinning; real-person physical twinning means the physical robot imitates the real-time motion of a real person: when the person moves, an external image-acquisition device captures the motion and skeletonizes the images, the processed data is transmitted to the robot's cloud brain system, the cloud brain system filters the data so that the robot's motion is smoother, the cloud brain transmits the result to the corresponding physical robot, and the robot's application layer receives the data, processes it further according to the current robot hardware, and distributes it to each part of the robot so that the robot's motion is synchronized with the person's; 3D-avatar virtual twinning means the physical robot imitates the real-time motion of a 3D virtual human: when the virtual human moves, the processed data is transmitted to the cloud brain system, filtered for smoothness, transmitted to the corresponding physical robot, processed by the application layer according to the current robot hardware, and distributed to each part of the robot so that its motion is synchronized with the virtual human's;
the AI twin unit comprises a voice twin module for collecting remote real-person speech in real time to form remote speech data and processing that data into a voice twin result;
and the processing unit is further configured to form a corresponding speaking instruction from the voice twin result and control the physical robot to execute it: the robot's application layer receives the data, performs voice-data and lip-correspondence processing on it according to the current robot hardware, and finally outputs the audio through the loudspeaker with the corresponding lip movements of the mouth.
2. The Raspberry Pi-based humanoid physical robot system of claim 1, further comprising an emotion interaction unit connected to the voice recognition unit, the image recognition unit and the processing unit;
wherein the emotion interaction unit receives the voice data acquired by the voice recognition unit and the image data acquired by the image recognition unit and performs emotion analysis on them to obtain an emotion analysis result;
and the processing unit forms a corresponding emotion instruction from the emotion analysis result and controls the physical robot to execute it.
3. The Raspberry Pi-based humanoid physical robot system of claim 1, further comprising a cloud brain system deployed in the cloud;
wherein the cloud brain system comprises a voice recognition cloud model, an image recognition cloud model, a deep learning cloud model and a virtual twin cloud model;
and the processing unit is in communication connection with the cloud brain system, sends the voice data acquired by the voice recognition unit, the image data acquired by the image recognition unit and the human motion video data acquired by the AI twin unit to the cloud brain system, and receives the corresponding recognition results returned by the cloud brain system.
4. The Raspberry Pi-based humanoid physical robot system of claim 3, wherein the cloud brain system further comprises an emotion recognition cloud model;
and the cloud brain system inputs the received voice data and image data into the emotion recognition cloud model, obtains the emotion analysis result it outputs, and feeds the result back to the processing unit.
5. The Raspberry Pi-based humanoid physical robot system of claim 3, wherein the cloud brain system is further configured to retrain the voice recognition cloud model on the received voice data.
6. The Raspberry Pi-based humanoid physical robot system of claim 3, wherein the cloud brain system is further configured to retrain the image recognition cloud model on the received image data.
7. The Raspberry Pi-based humanoid physical robot system of claim 3, wherein the cloud brain system is further configured to retrain the deep learning cloud model on the received voice data and image data.
8. The Raspberry Pi-based humanoid physical robot system of claim 3, wherein the cloud brain system is further configured to retrain the virtual twin cloud model on the received human motion video data.
9. The Raspberry Pi-based humanoid physical robot system of claim 1, further comprising a robot action library and a robot expression library;
wherein the robot action library stores the robot's action names and action instructions;
the robot expression library stores the robot's expression names and expression instructions;
and the processing unit is connected with the robot action library and the robot expression library, looks up the corresponding action and expression instructions according to the deep learning feedback result, and controls the physical robot to execute them.
CN202111257216.2A, filed 2021-10-27 (priority date 2021-10-27): Raspberry Pi-based humanoid physical robot system; granted as CN113878595B (Active)

Priority Applications (1)

CN202111257216.2A, priority and filing date 2021-10-27: Raspberry Pi-based humanoid physical robot system

Publications (2)

CN113878595A, published 2022-01-04
CN113878595B, published 2022-11-01

Family

ID: 79014711

Family Applications (1)

CN202111257216.2A (Active), priority and filing date 2021-10-27: Raspberry Pi-based humanoid physical robot system

Country Status (1)

CN: CN113878595B




Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 2022-10-18
Address after: 200436 Area B, Floor 5, Building 1, No. 668, Shangda Road, Baoshan District, Shanghai
Applicant after: Shanghai Qingbao Engine Robot Co., Ltd.
Address before: 200092 Floor 1, No. 38 Tieling Road, Yangpu District, Shanghai
Applicant before: Shanghai Qingyun Robot Co., Ltd.
GR01: Patent grant
GR01 Patent grant