CN108858195A - Three-layer distributed control system for a biped robot - Google Patents
- Publication number: CN108858195A
- Application number: CN201810775243.0A
- Authority
- CN
- China
- Prior art keywords
- layer
- motion planning
- robot
- decision making
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a three-layer distributed control system for a biped robot, belonging to the technical field of robot control. The system comprises an interactive decision making layer, a motion planning layer and a hardware driving layer. The interactive decision making layer handles human-computer interaction and sends control instructions to the motion planning layer; the motion planning layer receives and parses the control instructions sent by the interactive decision making layer and issues them to the hardware driving layer; the hardware driving layer receives the control instructions of the motion planning layer while acquiring hardware data and uploading it to the motion planning layer. The three-layer distributed control system runs separately on two computers and a set of driving chips; the three layers cooperate with one another without mutual dependence. Each layer of the three-tier system is implemented independently, so the coupling is low and development is convenient; the system can acquire surrounding environment data in real time and control the robot to make corresponding actions, without requiring extensive programming in advance, which greatly improves system efficiency.
Description
Technical field
The present invention relates to the technical field of robots and their control systems, and more particularly to a three-layer distributed control system for a biped robot.
Background technique
Existing robot control systems generally adopt an integrated architecture, in which all control functions are highly concentrated on a single server or host, and all functions and processing tasks are handled by that host. In practice, however, an integrated system cannot satisfy robot control applications with large functional requirements; such applications need distributed processing capability rather than a fully integrated system.
A distributed system is a collection of several computers connected internally by a communication network, with that network as its foundation. The whole distributed system works in one of two ways, autonomously or cooperatively, and each operating host can run in parallel and be controlled in a distributed manner.
Compared with a traditional communication network, a distributed system has certain advantages:
1) the hosts in a distributed system operate in parallel, which means physical independence combined with logical cooperation;
2) a distributed system has higher reliability: when one or more hosts in the system fail, the remaining independent hosts can self-heal and reconstitute a system with the same functions as the original, automatically adjusting the entire distributed system back to its pre-fault state.
In recent years biped robots have been widely used. The traditional technical approach controls the robot through offline programming and operator guidance: the robot repeatedly executes a program stored in its memory to complete the required operating actions. This control mode constrains the robot severely; it has no real-time behavior, its ability to sense changes in external information is insufficient, and it cannot adjust its behavior according to changes in the environment. At the same time, implementing a large number of functions requires a large amount of programming time, which greatly reduces system efficiency.
Summary of the invention
In view of the above problems, the present invention provides a three-layer distributed control system for a robot, which can acquire surrounding environment data in real time and control the robot to make corresponding actions, improving system efficiency.
The three-layer distributed control system of a biped robot provided by the invention includes an interactive decision making layer, a motion planning layer and a hardware driving layer. The interactive decision making layer is used for multi-modal human-computer interaction; it sends control instructions to the motion planning layer and receives the data sent back by the motion planning layer. The motion planning layer receives and parses the control instructions sent by the interactive decision making layer and issues them to the hardware driving layer, while receiving sensor information, processing it and sending it to the interactive decision making layer. The hardware driving layer receives the control instructions of the motion planning layer while acquiring hardware data and uploading it to the motion planning layer.
The hardware driving layer handles motor driving and data acquisition, and executes the following steps:
S11, receive position data from the motion planning layer and set the motor speed, number of turns and acceleration according to a PID (proportional-integral-derivative) control algorithm;
S12, acquire the hardware's encoder position, temperature, voltage and error information, and feed the data back to the motion planning layer.
Further, the motion planning layer obtains point cloud data from an external lidar; obtains the robot's centroid position and attitude from an external attitude sensor; connects the six-axis force sensors on the soles of the robot's two legs to obtain the forces and torques at the robot's feet; receives control instructions from the interactive decision making layer through an Ethernet interface; and sends instructions to and receives data from the hardware driving layer over a CAN bus. The functions realized by the motion planning layer include forward and inverse kinematics solution, dynamics solution, instruction parsing and navigation planning.
The motion planning layer executes the following steps:
S21, after startup, the motion planning layer enters an automatic detection mode, detects the CAN devices and robot joint IDs and queries the initial state of the robot joints; if an error occurs, the corresponding error code is sent to the interactive decision making layer;
S22, receive the instruction sent by the interactive decision making layer and parse it;
S23, if the interactive decision making layer sends a navigation instruction, the motion planning layer performs navigation planning according to the map information, so that the robot reasonably avoids obstacles on the walking path;
S24, perform forward and inverse kinematics solution according to the robot's walking path, and calculate the position of each joint of the biped robot during walking;
S25, perform the dynamics solution according to the robot attitude and the sole force and torque data obtained from the sensors, maintaining the dynamic stability of the robot while walking;
S26, send the calculated position of each robot joint to the hardware driving layer, and obtain the data fed back by the hardware driving layer;
S27, package the data fed back by the hardware driving layer together with the lidar point cloud data and send them to the interactive decision making layer.
Further, the interactive decision making layer uses an external microphone array for voice pickup and speech-synthesis interaction; an external depth camera obtains color images and depth point cloud data; instructions are sent to and data received from the motion planning layer through an Ethernet interface. The functions realized by the interactive decision making layer include voice interaction, visual interaction, three-dimensional simulation, interface interaction and mapping/navigation. The interactive decision making layer executes the following steps:
S31, after the computer starts, first receive the data uploaded by the motion planning layer and check whether each device is fault-free;
S32, perform voice interaction: recognize the voices of surrounding people and interact bilingually in English and Chinese; during voice interaction, key vocabulary is picked up to trigger task sequences stored in the interactive decision making layer and complete the specified operations;
S33, perform visual interaction: obtain face information from the RGB images of the depth camera, and carry out face recognition and dynamic face tracking;
S34, perform three-dimensional simulation: using the RVIZ and GAZEBO simulation platforms of the ROS operating system, simulate the robot's real-time motion posture according to the joint angles fed back by the motion planning layer;
S35, perform interface interaction, realized as a plug-in integrated in RVIZ: the interface displays joint information, and the user can control the motion of a single joint on the interface or control the robot to walk along a predetermined route;
S36, perform mapping and navigation: build a three-dimensional simulation map in RVIZ using the point cloud data of the depth camera; the user picks any point in the map with the mouse as a target point, and the interactive decision making layer generates a path and issues it to the motion planning layer for navigation planning.
The advantages and positive effects of the present invention are:
(1) the three-layer distributed control system of the biped robot of the invention is divided into an interactive decision making layer, a motion planning layer and a hardware driving layer; each layer is implemented independently, so the coupling is low, development is convenient, and the reliability and stability of the system are improved;
(2) the three-layer distributed control system of the biped robot of the invention can acquire surrounding environment data in real time and control the robot to make corresponding actions, without requiring extensive programming in advance, which greatly improves system efficiency.
Description of the drawings
Fig. 1 is a functional schematic diagram of the three-layer distributed control system of the biped robot of the embodiment of the present invention;
Fig. 2 is a structural schematic diagram of the three-layer distributed control system of the biped robot of the embodiment;
Fig. 3 is an execution flow chart of the interactive decision making layer of the three-layer distributed control system of the embodiment;
Fig. 4 is an execution flow chart of the motion planning layer of the three-layer distributed control system of the embodiment;
Fig. 5 is an execution flow chart of the hardware driving layer of the three-layer distributed control system of the embodiment.
Specific embodiments
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related content and do not limit the invention. It should further be noted that, for ease of description, only the parts related to the present invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present invention and the features in the embodiments may be combined with each other. The present invention will be described in detail below with reference to Figs. 1-5 and in conjunction with the embodiments.
As shown in Fig. 1, the three-layer distributed control system of the biped robot includes an interactive decision making layer, a motion planning layer and a hardware driving layer. The functions realized by the interactive decision making layer include voice interaction, visual interaction, three-dimensional simulation, interface interaction and mapping/navigation; the functions realized by the motion planning layer include forward and inverse kinematics solution, dynamics solution, sensor data acquisition, instruction parsing and navigation planning; the functions realized by the hardware driving layer include motor driving and data acquisition. The interactive decision making layer is used for multi-modal human-computer interaction; it sends control instructions to the motion planning layer and receives the data sent by the motion planning layer. The motion planning layer receives and parses the control instructions sent by the interactive decision making layer and issues them to the hardware driving layer, receives the data fed back by the hardware driving layer, and at the same time receives and processes sensor information and sends it to the interactive decision making layer. The hardware driving layer receives the control instructions of the motion planning layer while acquiring hardware data and uploading it to the motion planning layer.
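The layered flow described here can be sketched as three loosely coupled components that exchange only instructions and feedback data. The class and method names below are illustrative stand-ins, not taken from the patent; the point is that each layer talks only to its immediate neighbor.

```python
# Minimal sketch of the three-layer message flow (illustrative names only).

class HardwareDrivingLayer:
    def __init__(self):
        self.joint_positions = {}

    def execute(self, command):
        # Apply the commanded joint positions, then report hardware state upward.
        self.joint_positions.update(command["positions"])
        return {"positions": dict(self.joint_positions),
                "voltage": 24.0, "temperature": 35.0, "error_code": 0}

class MotionPlanningLayer:
    def __init__(self, hardware):
        self.hardware = hardware

    def handle(self, instruction):
        # Parse a high-level instruction and issue joint targets downward.
        command = {"positions": {jid: angle for jid, angle in instruction["targets"]}}
        feedback = self.hardware.execute(command)
        return feedback  # forwarded upward to the decision layer

class InteractiveDecisionLayer:
    def __init__(self, planner):
        self.planner = planner

    def request(self, targets):
        # A user-level request travels down exactly one layer at a time.
        return self.planner.handle({"targets": targets})

hw = HardwareDrivingLayer()
system = InteractiveDecisionLayer(MotionPlanningLayer(hw))
feedback = system.request([(1, 0.25), (2, -0.10)])
print(feedback["positions"])
```

Because each layer holds a reference only to the layer directly below it, replacing one layer (for example, simulating the hardware) does not disturb the others, which mirrors the low coupling the patent claims.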
As shown in Fig. 2, the three-layer distributed control system of the biped robot runs separately on two computers and the driving chips; the three layers cooperate with one another without mutual dependence. The computer of the interactive decision making layer and the computer of the motion planning layer are mounted in the control cabinet at the top of the robot and communicate with each other through an Ethernet interface. The legs of the biped robot are driven by 12 joints; the embedded system realizing the hardware driving layer functions is solidified in the driving chip of each joint, and the drive systems of all 12 joints form the hardware driving layer, which communicates with the computer of the motion planning layer uniformly over a CAN bus.
The interactive decision making layer connects an external microphone array through an RS232 port for voice pickup and speech-synthesis interaction; connects an external depth camera through a USB interface to obtain color images and depth point cloud data; and sends instructions to and receives data from the motion planning layer through an Ethernet interface. In the interactive decision making layer, the voice interaction uses a voice interaction module capable of Chinese and English dialogue. The visual interaction is realized with the depth camera: the color images obtained by the depth camera are used for face recognition, and the depth point cloud data it obtains is used for three-dimensional map building and mapping/navigation. The three-dimensional simulation obtains the robot joint angles uploaded by the motion planning layer and displays the robot's posture in the three-dimensional model in real time. The interface interaction is integrated in the three-dimensional simulation; it is used to control single-joint motion, change robot joint IDs and set zero positions, and displays the robot's joint angles, voltage, temperature, error codes and enable flags in real time.
The motion planning layer obtains point cloud data through an external lidar on a USB port, used for obstacle avoidance during navigation planning; obtains the robot's centroid position and attitude through an external attitude sensor on a USB port; connects the six-axis force sensors on the soles of the robot's two legs through two RS232 ports to obtain the forces and torques at the robot's feet; receives control instructions from the interactive decision making layer through an Ethernet interface; and sends instructions and receives data over a CAN bus to the hardware driving layer. The robot centroid position/attitude and sole force and torque data obtained by the motion planning layer are used for the dynamics solution, keeping the robot dynamically stable and steady.
The hardware driving layer is an embedded system running on the driving chips. It drives the motors through PWM (pulse width modulation) to control joint motion; obtains encoder data through SPI (Serial Peripheral Interface) to calculate joint positions; obtains sensor temperature and voltage information through an ADC (analog-to-digital converter) and sets the joint over-temperature protection threshold; and receives the control instructions of the motion planning layer and feeds back data over the CAN (Controller Area Network) bus.
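As a rough illustration of the driver-side arithmetic, the snippet below converts a raw encoder count into a joint angle and applies an over-temperature cutoff. The encoder resolution and temperature threshold are assumed values chosen for the example; the patent does not specify them.

```python
import math

ENCODER_COUNTS_PER_REV = 4096      # assumed 12-bit encoder resolution
OVER_TEMP_THRESHOLD_C = 80.0       # assumed over-temperature protection threshold

def counts_to_angle(counts):
    """Convert a raw encoder count into a joint angle in radians."""
    return (counts % ENCODER_COUNTS_PER_REV) / ENCODER_COUNTS_PER_REV * 2 * math.pi

def check_over_temp(temp_c):
    """Return an error code: 0 if within limits, 1 if over-temperature."""
    return 0 if temp_c < OVER_TEMP_THRESHOLD_C else 1

angle = counts_to_angle(1024)      # a quarter turn on this assumed encoder
print(angle)
```

On real hardware the count would come from the SPI read and the temperature from the ADC; here both are plain arguments so the arithmetic can be checked in isolation.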
As shown in Fig. 3, after the user starts the interactive decision making layer, it first obtains the robot information uploaded by the motion planning layer, including each joint's position, voltage, enable bit, temperature, error code and other data; the system is initialized, and after successful initialization each subsystem begins to run.
Voice interaction is completed by the voice module: the sound sources picked up by the microphone from the surroundings are analyzed, and after semantic analysis a synthesized voice is played to complete the interaction. In addition, specific key vocabulary is stored in the voice module; when one of these words is picked up, the corresponding task sequence is triggered and issued to the motion planning layer to complete the particular task.
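The keyword-to-task dispatch described above amounts to a lookup from recognized words to stored task sequences. The vocabulary and task names below are invented for illustration; the patent does not list its actual vocabulary.

```python
# Hypothetical keyword -> task-sequence table (words and tasks are examples only).
TASK_SEQUENCES = {
    "forward": ["plan_path", "walk_forward"],
    "stop": ["halt_motion"],
}

def dispatch(recognized_words):
    """Return the task sequence triggered by any recognized key vocabulary."""
    triggered = []
    for word in recognized_words:
        if word in TASK_SEQUENCES:
            triggered.extend(TASK_SEQUENCES[word])
    return triggered

tasks = dispatch(["hello", "forward"])   # "hello" is not a keyword, "forward" is
print(tasks)
```

Non-keyword words pass through silently, matching the description that only the stored vocabulary triggers tasks.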
Interface interaction is developed as a plug-in based on RVIZ. It displays in real time the angular position, voltage, enable bit, temperature and error code of each robot joint, and the user can also control a single joint on the interface, for example modifying its ID or enable bit, or setting its zero position or target position. After the user triggers one of these operations, the system generates the corresponding control instruction frame according to the communication protocol.
Visual interaction is based on the RGB images of the depth camera. After a color image is obtained, the program automatically recognizes face information; when the face information matches information stored in the database, the face's name is marked and the face information is displayed with dynamic tracking. If no match can be made, the user is prompted to update the database.
Mapping/navigation is based on the point cloud data of the depth camera, from which a three-dimensional map is generated in RVIZ. After the user picks a target point in the map with the mouse, the system plans a reasonable walking path according to the situation in the map, automatically avoiding obstacles, and issues the path to the motion planning layer.
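A simplified stand-in for this map-based planning step is a shortest-path search on an occupancy grid. The patent does not name the planner it uses, so breadth-first search on a tiny hand-made grid is chosen here purely for illustration.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2-D occupancy grid (1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk parents back to the start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 1, 0],        # column 1 is blocked in the top two rows,
        [0, 1, 0],        # so the path must detour around the obstacle
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (0, 2))
print(path)
```

The returned cell list plays the role of the walking path that the decision layer would issue downward; a real system would plan on the lidar/depth-camera map rather than a hand-written grid.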
Three-dimensional simulation is completed on the RVIZ three-dimensional platform: the system obtains the joint angles uploaded by the motion planning layer and simulates the posture of the robot's three-dimensional model. Meanwhile, GAZEBO and RVIZ share a data interface, enabling joint simulation, so the user can intuitively see the robot's real-time attitude. GAZEBO is robot simulation software mainly used for simulating robot dynamics.
As shown in Fig. 4, after the motion planning layer starts, the system is first initialized, checking the joint ID sequence file and the CAN devices and querying the initial state of the robot joints; in case of an error, the corresponding error code is sent to the interactive decision making layer to prompt the user that a fault has occurred.
First, a 500 ms timer is set; on each tick the motion planning layer sends a query instruction frame to the hardware driving layer, querying each joint's current angle, temperature, voltage, enable bit and error code.
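The periodic query can be sketched as a fixed-interval polling loop that sends one frame per joint. The frame layout below is a dict invented for illustration and is not the patent's actual CAN protocol; the transport is a stub standing in for the bus.

```python
QUERY_PERIOD_S = 0.5  # the 500 ms timer from the text

def build_query_frame(joint_id):
    """Hypothetical query frame: a dict standing in for a CAN frame."""
    return {"joint_id": joint_id, "type": "query",
            "fields": ["angle", "temperature", "voltage", "enable", "error"]}

def poll_once(joint_ids, send):
    """Send one query frame per joint and collect the replies."""
    return [send(build_query_frame(jid)) for jid in joint_ids]

# A stub transport standing in for the CAN bus.
def fake_send(frame):
    return {"joint_id": frame["joint_id"], "angle": 0.0, "error": 0}

replies = poll_once(range(1, 13), fake_send)   # the 12 leg joints
print(len(replies))
```

In the real system `poll_once` would run on every tick of the 500 ms timer and the replies would be the status frames the hardware driving layer generates.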
After parsing, the instructions sent by the interactive decision making layer fall into two kinds. The first kind is single-joint control instructions sent from the interface interaction, including position control, zero setting, ID modification and enable-bit modification for a particular joint; instruction frames are generated according to the communication protocol and sent to the hardware driving layer. The second kind is task sequences and navigation paths, for which the motion planning layer performs navigation planning. Following the basic path generated by the interactive decision making layer, and according to the data returned by the lidar, the system determines whether a temporary obstacle has appeared on the current path, plans its own avoidance maneuver, and after clearing the obstacle returns to the initial path and continues. The motion planning layer parses the data fed back by the hardware driving layer and performs the forward and inverse kinematics solution according to the returned current joint positions and walking posture, calculating the position of each joint of the robot at the next moment. Meanwhile, according to the force, torque and attitude information returned by the six-axis force sensors and the attitude sensor, the dynamics solution is performed and the joint target positions are adjusted on the premise of guaranteeing the robot's dynamic stability; instruction frames are then generated and issued to the hardware driving layer. The motion planning layer also parses the data fed back by the hardware driving layer and transmits it, together with the lidar point cloud data, to the interactive decision making layer.
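The forward/inverse kinematics step is not spelled out in the patent. As a stand-in, the sketch below solves the inverse kinematics of a planar two-link leg (hip pitch and knee) for a desired foot position, and uses the forward kinematics to check the result; the link lengths are arbitrary assumed values.

```python
import math

L1, L2 = 0.40, 0.40  # assumed thigh and shank lengths in metres

def two_link_ik(x, y):
    """Inverse kinematics of a planar 2-link chain (knee-forward solution).
    Returns (hip, knee) angles in radians, or None if the point is out of reach."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None
    knee = math.acos(c2)
    hip = math.atan2(y, x) - math.atan2(L2 * math.sin(knee),
                                        L1 + L2 * math.cos(knee))
    return hip, knee

def two_link_fk(hip, knee):
    """Forward kinematics, used here to verify the IK result."""
    x = L1 * math.cos(hip) + L2 * math.cos(hip + knee)
    y = L1 * math.sin(hip) + L2 * math.sin(hip + knee)
    return x, y

hip, knee = two_link_ik(0.3, -0.6)   # a foot target below and ahead of the hip
print(two_link_fk(hip, knee))
```

The actual robot has 12 joints across two legs and solves the problem in three dimensions, but the same reachability check and angle recovery pattern applies per leg plane.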
As shown in Fig. 5, after the hardware driving layer starts, the system is initialized and each joint driver is checked for over-temperature protection, position-limit protection and over-current protection; if a system error is generated, the error code is sent to the motion planning layer.
The control instructions sent by the motion planning layer to the hardware driving layer fall into two kinds. The first is query instructions, querying the position, voltage, temperature, enable bit and error code of each robot joint; after receiving a query instruction, the hardware driving layer acquires the data, then generates a status frame and sends it to the motion planning layer. The second is control instructions: after the hardware driving layer receives a control instruction, it calculates the motor's speed, number of turns and acceleration from the target position and the current position through the PID control algorithm, realizing motor driving so that the robot moves steadily and rapidly to the designated position.
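The step from target and current position to a motor command can be sketched with the textbook discrete PID law. The gains, time step and the toy first-order "motor" below are illustration values, not parameters from the patent.

```python
class PID:
    """Discrete PID controller: output = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, current):
        error = target - current
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy motor model (position integrates commanded speed) to a target.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(2000):                    # 20 s of simulated time
    speed = pid.update(1.0, position)    # commanded speed from the PID
    position += speed * 0.01             # integrate position
print(position)
```

On the real driving chip the PID output would be converted into PWM duty cycles and turn counts, and the current position would come from the SPI encoder read rather than from an integrator.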
Those skilled in the art will understand that the above embodiments are given only to illustrate the present invention clearly and do not limit its scope. For those skilled in the art, other variations or modifications may be made on the basis of the above disclosure, and such variations or modifications remain within the scope disclosed by the invention.
Claims (8)
1. A three-layer distributed control system of a biped robot, characterized by comprising an interactive decision making layer, a motion planning layer and a hardware driving layer;
the interactive decision making layer is used for multi-modal human-computer interaction, sends control instructions to the motion planning layer, and receives the data sent by the motion planning layer;
the motion planning layer is used to receive and parse the control instructions sent by the interactive decision making layer and issue them to the hardware driving layer, to receive the data fed back by the hardware driving layer, and at the same time to receive and process sensor information and send it to the interactive decision making layer;
the hardware driving layer is used to receive the control instructions of the motion planning layer while acquiring hardware data and uploading it to the motion planning layer;
the hardware driving layer handles motor driving and data acquisition, and executes the following steps:
S11, receive the position data of the motion planning layer, and set the motor speed, number of turns and acceleration according to a PID control algorithm, where PID denotes proportional-integral-derivative;
S12, acquire the hardware's encoder position, temperature, voltage and error information, and feed the data back to the motion planning layer.
2. The distributed control system according to claim 1, characterized in that the interactive decision making layer and the motion planning layer each run on their own computer; the computers are mounted in the control cabinet at the top of the robot and communicate with each other through an Ethernet interface; the hardware driving layer is composed of the drive systems inside all the joints of the biped robot's legs and communicates with the computer of the motion planning layer over a CAN bus, where CAN denotes Controller Area Network.
3. The distributed control system according to claim 1, characterized in that the motion planning layer obtains point cloud data from an external lidar; obtains the robot's centroid position and attitude from an external attitude sensor; connects the six-axis force sensors on the soles of the robot's two legs to obtain the forces and torques at the robot's feet; receives control instructions from the interactive decision making layer through an Ethernet interface; and sends instructions to and receives data from the hardware driving layer over a CAN bus, where CAN denotes Controller Area Network.
4. The distributed control system according to claim 1 or 3, characterized in that the functions realized by the motion planning layer include forward and inverse kinematics solution, dynamics solution, instruction parsing and navigation planning; the motion planning layer executes the following steps:
S21, after startup, the motion planning layer enters an automatic detection mode, detects the CAN devices and robot joint IDs and queries the initial state of the robot joints; if an error occurs, the corresponding error code is sent to the interactive decision making layer, where ID denotes identification number;
S22, receive the instruction sent by the interactive decision making layer and parse it;
S23, if the interactive decision making layer sends a navigation instruction, the motion planning layer performs navigation planning according to the map information, so that the robot avoids obstacles on the walking path;
S24, perform forward and inverse kinematics solution according to the robot's walking path, and calculate the position of each joint of the biped robot during walking;
S25, perform the dynamics solution according to the robot attitude and the sole force and torque obtained from the attitude sensor and the six-axis force sensors;
S26, send the calculated position of each joint to the hardware driving layer, and obtain the data fed back by the hardware driving layer;
S27, package the data fed back by the hardware driving layer together with the lidar point cloud data and send them to the interactive decision making layer.
5. The distributed control system according to claim 1, characterized in that the interactive decision making layer uses an external microphone array for voice pickup and speech-synthesis interaction; an external depth camera obtains color images and depth point cloud data; and instructions are sent to and data received from the motion planning layer through an Ethernet interface.
6. The distributed control system according to claim 1 or 5, characterized in that the functions realized by the interactive decision making layer include voice interaction, visual interaction, three-dimensional simulation, interface interaction and mapping/navigation; the voice interaction uses a voice interaction module to carry out Chinese and English dialogue; the visual interaction uses a depth camera, the color images obtained by the depth camera being used for face recognition and the depth point cloud data being used for three-dimensional map building and mapping/navigation; the three-dimensional simulation obtains the robot joint angles uploaded by the motion planning layer and displays the robot's posture in the three-dimensional model in real time; the interface interaction is integrated in the three-dimensional simulation, is used to control single-joint motion, change robot joint IDs and set zero positions, and displays the robot's joint angles, voltage, temperature, error codes and enable flags in real time.
7. The distributed control system according to claim 1 or 5, characterized in that the steps executed by the interactive decision making layer include:
S31, after the computer starts, first receive the data uploaded by the motion planning layer and check whether each device is fault-free;
S32, perform voice interaction: recognize the voices of surrounding people and interact bilingually in English and Chinese; during the interaction, key vocabulary is picked up to trigger task sequences stored in the interactive decision making layer and complete the specified operations;
S33, perform visual interaction: obtain face information from the color images of the depth camera, and carry out face recognition and dynamic face tracking;
S34, perform three-dimensional simulation: using the RVIZ and GAZEBO simulation platforms of the ROS operating system, simulate the robot's real-time motion posture according to the joint angles fed back by the motion planning layer;
S35, perform interface interaction through a plug-in integrated in RVIZ: the interface displays joint information, and the user controls the motion of a single joint on the interface or controls the robot to walk along a predetermined route;
S36, perform mapping and navigation: build a three-dimensional simulation map in RVIZ using the point cloud data of the depth camera; the user picks any point in the map with the mouse as a target point, and the interactive decision making layer generates a path and issues it to the motion planning layer for navigation planning.
8. The distributed control system according to claim 1 or 2, characterized in that the hardware driving layer is an embedded system running on the driving chips; it drives the motors through PWM to control joint motion; obtains encoder data through SPI to calculate joint positions; obtains sensor temperature and voltage through an ADC and sets the joint over-temperature protection threshold; and receives the control instructions of the motion planning layer and feeds back data over a CAN bus; where PWM is pulse width modulation, SPI is Serial Peripheral Interface, and ADC is analog-to-digital converter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810775243.0A (published as CN108858195A) | 2018-07-16 | 2018-07-16 | Three-layer distributed control system of a biped robot
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810775243.0A (published as CN108858195A) | 2018-07-16 | 2018-07-16 | Three-layer distributed control system of a biped robot
Publications (1)
Publication Number | Publication Date |
---|---|
CN108858195A | 2018-11-23
Family
ID=64302031
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810775243.0A (pending; published as CN108858195A) | Three-layer distributed control system of a biped robot | 2018-07-16 | 2018-07-16
Country Status (1)
Country | Link |
---|---|
CN | CN108858195A (en)
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101493855A (en) * | 2009-01-16 | 2009-07-29 | 吉林大学 | Real-time simulation system for under-driven double-feet walking robot |
CN102637036A (en) * | 2012-05-08 | 2012-08-15 | 北京理工大学 | Combined type bionic quadruped robot controller |
CN105856243A (en) * | 2016-06-28 | 2016-08-17 | 湖南科瑞特科技股份有限公司 | Movable intelligent robot |
KR20160116311A (en) * | 2016-09-23 | 2016-10-07 | 경북대학교 산학협력단 | Method for recognizing continuous emotion for robot by analyzing facial expressions, recording medium and device for performing the method |
2018-07-16: Application CN201810775243.0A filed (CN); patent CN108858195A published (en); status: Pending
Non-Patent Citations (2)
Title |
---|
于佳: "仿人机器人的关节运动控制系统与传感器系统设计", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
陈贺: "仿人机器人控制系统的研究与开发", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110587601A (en) * | 2019-08-20 | 2019-12-20 | 广西诚新慧创科技有限公司 | Control system applied to intelligent inspection robot |
CN111161424A (en) * | 2019-12-30 | 2020-05-15 | 浙江欣奕华智能科技有限公司 | Three-dimensional map determination method and determination device |
CN111161424B (en) * | 2019-12-30 | 2023-06-02 | 浙江欣奕华智能科技有限公司 | Determination method and determination device for three-dimensional map |
CN111414688A (en) * | 2020-03-18 | 2020-07-14 | 上海机器人产业技术研究院有限公司 | Mobile robot simulation system and method based on UNITY engine |
CN112025704A (en) * | 2020-08-25 | 2020-12-04 | 杭州湖西云百生科技有限公司 | Real-time distributed robot control method and system based on memory type database |
CN114603540A (en) * | 2020-12-09 | 2022-06-10 | 国核电站运行服务技术有限公司 | Nuclear reactor pressure vessel detection manipulator control system |
CN113524166A (en) * | 2021-01-08 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Robot control method and device based on artificial intelligence and electronic equipment |
CN113183162A (en) * | 2021-04-28 | 2021-07-30 | 哈尔滨理工大学 | Intelligent nursing robot control method and system |
CN113485325A (en) * | 2021-06-16 | 2021-10-08 | 重庆工程职业技术学院 | SLAM mapping and autonomous navigation method for underground coal mine water pump house inspection robot |
CN114102582A (en) * | 2021-11-11 | 2022-03-01 | 上海交通大学 | Electrical system for super-redundant robot and working method thereof |
CN114415653A (en) * | 2021-12-01 | 2022-04-29 | 中国船舶重工集团公司第七一九研究所 | Hybrid control system suitable for unmanned aerial vehicle under water |
CN114147721A (en) * | 2021-12-15 | 2022-03-08 | 东北大学 | Robot control system and method based on EtherCAT bus |
CN114393563A (en) * | 2021-12-21 | 2022-04-26 | 昆山市工研院智能制造技术有限公司 | Indoor mobile operation robot practical training platform |
WO2023124326A1 (en) * | 2021-12-28 | 2023-07-06 | 上海神泰医疗科技有限公司 | Robot control method, control device, robot system, and readable storage medium |
CN114505853A (en) * | 2021-12-30 | 2022-05-17 | 爱普(福建)科技有限公司 | Remote layered management and control method and system for industrial robot |
CN114505853B (en) * | 2021-12-30 | 2023-09-12 | 爱普(福建)科技有限公司 | Remote layered control method and system for industrial robot |
CN115026820A (en) * | 2022-06-09 | 2022-09-09 | 天津大学 | Control system and control method for man-machine cooperation assembly robot |
CN114932961A (en) * | 2022-06-15 | 2022-08-23 | 中电海康集团有限公司 | Four-footed robot motion control system |
CN114932961B (en) * | 2022-06-15 | 2023-10-10 | 中电海康集团有限公司 | Motion control system of four-foot robot |
CN116713992A (en) * | 2023-06-12 | 2023-09-08 | 之江实验室 | Electrical control system, method and device for humanoid robot |
CN116713992B (en) * | 2023-06-12 | 2024-07-26 | 之江实验室 | Electrical control system, method and device for humanoid robot |
WO2024212359A1 (en) * | 2023-06-12 | 2024-10-17 | 之江实验室 | Electrical control system, method and apparatus for humanoid robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108858195A (en) | A kind of Triple distribution control system of biped robot | |
Sanna et al. | A Kinect-based natural interface for quadrotor control | |
CN114080583B (en) | Visual teaching and repetitive movement manipulation system | |
US9079313B2 (en) | Natural human to robot remote control | |
JP3994950B2 (en) | Environment recognition apparatus and method, path planning apparatus and method, and robot apparatus | |
US8265791B2 (en) | System and method for motion control of humanoid robot | |
CN112634318B (en) | Teleoperation system and method for underwater maintenance robot | |
Tölgyessy et al. | The Kinect sensor in robotics education | |
CN115469576B (en) | Teleoperation system based on human-mechanical arm heterogeneous motion space hybrid mapping | |
CN105945947A (en) | Robot writing system based on gesture control and control method of robot writing system | |
CN208468393U (en) | A kind of Triple distribution control system of biped robot | |
CN108828996A (en) | A vision-based mechanical arm remote control system and method |
Chen et al. | A human-following mobile robot providing natural and universal interfaces for control with wireless electronic devices | |
CN111716365A (en) | Immersive remote interaction system and method based on natural walking | |
CN106468917A (en) | A tangible real-time live video image telepresence interaction method and system |
Liu et al. | Vision AI-based human-robot collaborative assembly driven by autonomous robots | |
CN108062102A (en) | A gesture-controlled mobile robot teleoperation system with obstacle-avoidance assistance |
CN112000099A (en) | Collaborative robot flexible path planning method under dynamic environment | |
JP7309371B2 (en) | robot control system | |
CN114935341B (en) | Novel SLAM navigation computation video identification method and device | |
CN112757274B (en) | Human-computer cooperative operation oriented dynamic fusion behavior safety algorithm and system | |
JP2003266348A (en) | Robot device and control method therefor | |
CN115359222A (en) | Unmanned interaction control method and system based on augmented reality | |
Kamath et al. | Kinect sensor based real-time robot path planning using hand gesture and clap sound | |
US12019438B2 (en) | Teleoperation with a wearable sensor system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20181123 |