CN108519814B - Man-machine interaction operating system - Google Patents

Man-machine interaction operating system

Info

Publication number
CN108519814B
CN108519814B
Authority
CN
China
Prior art keywords
unit
information
execution unit
force
control
Prior art date
Legal status
Active
Application number
CN201810234048.7A
Other languages
Chinese (zh)
Other versions
CN108519814A (en)
Inventor
解仑 (Xie Lun)
李连鹏 (Li Lianpeng)
潘航 (Pan Hang)
王志良 (Wang Zhiliang)
王先梅 (Wang Xianmei)
Current Assignee
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN201810234048.7A priority Critical patent/CN108519814B/en
Publication of CN108519814A publication Critical patent/CN108519814A/en
Application granted granted Critical
Publication of CN108519814B publication Critical patent/CN108519814B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 - Indexing scheme relating to G06F 3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a human-computer interaction operating system that improves the intelligence and interactivity of human-computer interaction. The system comprises: an acquisition unit for acquiring the operation scene and real-time operation information; an information processing unit for constructing an augmented reality display and issuing task instructions; a visual interaction unit for displaying the remote operation scene and real-time operation information in augmented reality; a force sense interaction unit for remotely controlling the operation of the execution unit in real time based on force feedback; a control unit for issuing control instructions to the driving unit according to the task instructions issued by the information processing unit and the information output by the force sense interaction unit; a driving unit for decomposing the control instructions issued by the control unit into torque and rotational speed so as to drive the power unit to supply the power required by the execution unit; and an execution unit for executing operation tasks in the autonomous and man-machine cooperative operation modes using the power provided by the power unit. The invention is suitable for the field of industrial mechanical arm control.

Description

Man-machine interaction operating system
Technical Field
The invention relates to the technical field of human-robot interaction, and in particular to a human-computer interaction operating system.
Background
In recent years, the fields in which robots are applied in modern production and life have continuously widened. On one hand, robots are an important carrier for the intelligentization, digitalization and informatization of the manufacturing and processing industries, and basic robot research and development, high-end manufacturing and field application are important markers of a nation's technological innovation and high-precision technological strength. On the other hand, the wide application of robot technology in social life has spurred the emergence of service robots and special-purpose robots for repair, cleaning, transport, rescue, monitoring and the like; the development of robots has become an inevitable trend in modern society. At present, however, the intelligence and autonomy of robots remain weak and their human-computer interaction capability needs improvement, all of which constrain the further development of robots.
In many industrial fields, high-risk industries still require operating personnel to work on site, where working environments are harsh, safety factors are low, and accidents endangering life and property occur frequently. Improving human-machine interaction capability and robot intelligence is therefore an inevitable choice for the further development of industrial robots.
Disclosure of Invention
The invention aims to provide a human-computer interaction operating system to solve the problems of relatively low robot intelligence and insufficient human-computer interaction capability in the prior art.
To solve the above technical problem, an embodiment of the present invention provides a human-computer interaction operating system, including: an acquisition unit, an information processing unit, a visual interaction unit, a force sense interaction unit, a control unit, a driving unit, a power unit and an execution unit; wherein,
the acquisition unit is used for acquiring the operation scene and real-time operation information and transmitting the acquired information to the information processing unit through the control unit, wherein the operation information comprises: pose information of each joint of the execution unit;
the information processing unit is used for constructing an augmented reality display according to the acquired information and the actual pose of each joint of the execution unit fed back by the control unit, determining an operation mode, and issuing a task instruction to the control unit according to the determined operation mode, wherein the operation mode comprises: an autonomous operation mode and a man-machine cooperative operation mode;
the visual interaction unit is used for displaying a remote operation scene and real-time operation information in an augmented reality manner;
the force sense interaction unit is used for remotely controlling the execution unit in real time through the control unit, based on force feedback, according to the task instruction received by the control unit and the force information at the end of the execution unit;
the control unit is used for issuing control instructions to the driving unit according to the task instruction issued by the information processing unit and the expected pose of each joint of the execution unit output by the force sense interaction unit;
the driving unit is used for decomposing the control command issued by the control unit into torque and rotating speed so as to drive the power unit to provide power required by the operation of the execution unit;
and the execution unit is used for executing the operation tasks under the autonomous and man-machine cooperative operation modes according to the power provided by the power unit.
Further, the acquisition unit includes: a vertical gyroscope, a binocular camera, a laser radar, a six-dimensional force sensor and an anti-collision ultrasonic sensor; wherein,
the vertical gyroscope is used for detecting the roll angle and pitch angle of the execution unit, determining the end attitude information of the execution unit and transmitting it to the information processing unit through the control unit;
the binocular camera is used for acquiring the position states of the operation environment and the operation target in real time and transmitting them to the information processing unit through the control unit;
the laser radar is used for scanning the operation environment, establishing a three-dimensional point cloud of the operation environment and transmitting it to the information processing unit through the control unit;
the six-dimensional force sensor is used for acquiring force information at the end of the execution unit in real time and transmitting it to the force sense interaction unit and the information processing unit through the control unit;
the anti-collision ultrasonic sensor is used for detecting, in real time during operation, the distance between the execution unit and any obstacle and transmitting it to the information processing unit through the control unit.
Further, the force sense interaction unit adopts a hybrid serial-parallel translation and rotation mechanism, and is provided with a six-degree-of-freedom body made of aviation titanium alloy and a gravity compensation mechanism; wherein,
the serial-parallel translation and rotation mechanism comprises: delta translational parallel mechanism and serial rotation mechanism;
the six-degree-of-freedom body comprises: 3 translational degrees of freedom and 3 rotational degrees of freedom.
Further, the information processing unit is configured to construct the augmented reality display according to the acquired information by:
planning an operation path based on a fuzzy genetic algorithm according to the acquired information and generating guidance information, wherein the planned operation path comprises: the planned pose of each joint of the execution unit;
identifying the operation target through a deep learning algorithm;
resolving the pose of each joint of the execution unit by a rotation vector method and kinematic equations (a sketch follows this list);
and performing augmented reality display of the remote operation scene and real-time operation information through three-dimensional environment modeling and seamless virtual-real fusion, and performing augmented reality display of the operation path planning information and guidance information.
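The rotation-vector step can be made concrete. The text does not spell out which rotation-vector variant is meant, so the following is a minimal sketch assuming Rodrigues' formula, with an illustrative example vector; it converts a rotation vector into the rotation matrix used when composing joint poses.

```python
import numpy as np

def rotvec_to_matrix(rv):
    """Rodrigues' formula: rotation vector (axis * angle, radians) -> 3x3 rotation matrix."""
    angle = float(np.linalg.norm(rv))
    if angle < 1e-12:
        return np.eye(3)                      # negligible rotation
    k = np.asarray(rv, dtype=float) / angle   # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])        # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

R = rotvec_to_matrix([0.0, 0.0, np.pi / 2])   # 90-degree rotation about Z
```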
Further, the information processing unit is specifically configured to judge the environmental friendliness according to a preset judgment standard: if the environment is friendly, the autonomous operation mode is adopted; if the environment is harsh or the autonomous operation mode cannot complete the task, the man-machine cooperative operation mode is adopted. According to the adopted operation mode, a task instruction is issued to the control unit, and the pose information of each joint of the execution unit is monitored during execution until the operation task is completed, wherein the task instruction comprises: the resolved pose of each joint of the execution unit, path planning information and guidance information.
Further, the control unit is specifically configured to establish an impedance control mechanism based on dynamic and kinematic analysis, and to issue control instructions to the driving unit according to the task instruction issued by the information processing unit and the expected pose of each joint of the execution unit output by the force sense interaction unit.
Further, the driving mode adopted by the driving unit is electric driving;
and the driving unit is also used for feeding back the working condition of the power unit to the control unit.
Further, the power unit includes: a servo motor and a UPS power supply;
the servo motor is used for providing power for the operation of the execution unit and controlling the pose of each joint of the execution unit so that the execution unit can complete corresponding operation tasks;
the UPS power supply is used for supplying power to the system.
Further, the execution unit is further used for feeding back the pose information of each joint to the driving unit;
the execution unit includes: a mechanical arm;
the robotic arm is a multiple-input multiple-output, non-linear, coupled end effector.
Further, the force sense interaction unit simulates real physical interaction by outputting mechanical impedance in the augmented reality environment, physically coupling the operator to that environment;
based on force-feedback impedance control, the force sense interaction unit uses a six-dimensional force sensor to acquire the operator's applied force, closing the force-control loop.
The technical scheme of the invention has the following beneficial effects:
In this scheme, the acquisition unit acquires the operation scene and real-time operation information and transmits the acquired information to the information processing unit through the control unit; the information processing unit constructs an augmented reality display according to the acquired information and the actual pose of each joint of the execution unit fed back by the control unit, determines the operation mode, and issues a task instruction to the control unit accordingly; the visual interaction unit displays the remote operation scene and real-time operation information in augmented reality and assists in completing man-machine cooperative operation; the force sense interaction unit remotely controls the execution unit in real time through the control unit, based on force feedback, according to the task instruction and end force information received by the control unit, thereby completing man-machine cooperative operation; the control unit issues control instructions to the driving unit according to the task instruction issued by the information processing unit and the expected pose of each joint of the execution unit output by the force sense interaction unit; the driving unit decomposes the control instructions into torque and rotational speed to drive the power unit to supply the power required by the execution unit; the execution unit executes operation tasks in the autonomous and man-machine cooperative modes using the power provided by the power unit; and the acquisition unit feeds back the operation scene and the pose of the execution unit in real time to the control unit, forming closed-loop control that ensures the operation target is completed smoothly. In this way, autonomous or man-machine cooperative operation based on visual and force feedback effectively improves interaction efficiency and the intelligence and interactivity of the system, realizing the intelligent, autonomous and friendly interactive operation of a force/visual-feedback human-machine interaction operating system.
Drawings
Fig. 1 is a schematic structural diagram of a human-computer interaction operating system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating the working principle of a visual interaction unit according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a working flow of an information processing unit according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a workflow of a force sense interaction unit according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a control unit according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a working flow of a driving unit according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating the operation of an execution unit according to an embodiment of the present invention;
fig. 8 is a schematic diagram of an impedance control principle according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
Aiming at the problems of the relatively low intelligence and insufficient human-computer interaction capability of existing robots, the invention provides a human-computer interaction operating system.
As shown in fig. 1, a human-computer interaction operating system provided in an embodiment of the present invention includes: the system comprises an acquisition unit 11, an information processing unit 13, a visual interaction unit 14, a force sense interaction unit 15, a control unit 12, a driving unit 16, a power unit 17 and an execution unit 18; wherein,
the acquisition unit 11 is configured to acquire an operation scene and real-time operation information, and transmit the acquired information to the information processing unit 13 through the control unit 12, where the operation information includes: pose information of each joint of the execution unit;
the information processing unit 13 is configured to construct an augmented reality display according to the acquired information and the actual pose of each joint of the execution unit 18 fed back by the control unit 12, determine an operation mode, and issue a task instruction to the control unit 12 according to the determined operation mode, where the operation mode includes: an autonomous operation mode and a man-machine cooperative operation mode;
the visual interaction unit 14 is used for displaying a remote operation scene and real-time operation information in an augmented reality manner;
the force sense interaction unit 15 is used for remotely controlling the execution unit 18 in real time through the control unit 12, based on force feedback, according to the task instruction received by the control unit 12 and the force information at the end of the execution unit 18;
the control unit 12 is configured to issue control instructions to the driving unit 16 according to the task instruction issued by the information processing unit 13 and the expected pose of each joint of the execution unit 18 output by the force sense interaction unit 15;
the driving unit 16 is used for decomposing the control instruction issued by the control unit 12 into a moment and a rotating speed so as to drive the power unit 17 to provide power required by the operation of the execution unit 18;
and the execution unit 18 is used for executing the work tasks under the autonomous and man-machine cooperative work modes according to the power provided by the power unit 17.
According to the man-machine interaction operating system of the embodiment of the invention, the acquisition unit acquires the operation scene and real-time operation information and transmits the acquired information to the information processing unit through the control unit; the information processing unit constructs an augmented reality display according to the received information, determines the operation mode, and issues a task instruction to the control unit accordingly; the visual interaction unit displays the remote operation scene and real-time operation information in augmented reality and assists in completing man-machine cooperative operation; the force sense interaction unit remotely controls the execution unit in real time through the control unit, based on force feedback, according to the task instruction and end force information received by the control unit, thereby completing man-machine cooperative operation; the control unit issues control instructions to the driving unit according to the task instruction issued by the information processing unit and the expected pose of each joint of the execution unit output by the force sense interaction unit; the driving unit decomposes the control instructions into torque and rotational speed to drive the power unit to supply the power required by the execution unit; the execution unit executes operation tasks in the autonomous and man-machine cooperative modes using the power provided by the power unit; and the acquisition unit feeds back the operation scene and the pose of each joint of the execution unit in real time to the control unit, forming closed-loop control that ensures the operation target is completed smoothly. In this way, autonomous or man-machine cooperative operation based on visual and force feedback effectively improves interaction efficiency and the intelligence and interactivity of the system, realizing the intelligent, autonomous and friendly interactive operation of a force/visual-feedback human-machine interaction operating system.
In this embodiment, the pose information of each joint of the execution unit includes, but is not limited to, the position, torque and rotational speed information of the end of the execution unit, determined according to the actual situation in practical applications.
In this embodiment, the force/visual-feedback human-computer interaction operating system may run on a Windows operating system. The information processing unit in the system is an Intel E5-2600 v4 series processor with a main frequency of 2400 MHz and stable performance; the memory uses DDR4 technology with a total capacity of 2.5 TB. The very high clock frequency and large memory can meet the demands of processing multi-source information from the visual interaction unit, the force sense interaction unit and the acquisition unit; the system runs fast and stably and supports I/O expansion and hierarchical storage, ensuring that the force/visual-feedback human-computer interaction operating system operates safely and stably.
In this embodiment, the acquisition unit includes but is not limited to: a vertical gyroscope, a binocular camera, a laser radar, a six-dimensional force sensor, an anti-collision ultrasonic sensor and the like; wherein,
the vertical gyroscope is used as an inertial reference of the multi-source sensor and is used for detecting a roll angle and a pitch angle of the execution unit, determining tail end attitude information of the execution unit, transmitting the determined tail end attitude information of the execution unit to the information processing unit through the control unit and assisting system operation; the vertical gyroscope is high-precision roll angle and pitch angle measuring equipment, and important performance indexes comprise: weight is less than 80g, starting time is less than 2s, angle precision is less than 0.8 degrees, zero temperature drift is less than 0.15 degrees/s, bandwidth is 50Hz, and the like.
The binocular camera is used for acquiring the position states of the operation environment and the operation target in real time, transmitting the acquired data to the information processing unit through the control unit for processing, and assisting an operator in a man-machine cooperative operation mode to complete an operation task; the binocular camera is mainly used for observing a real operation scene, and important performance indexes comprise: resolution 1920 x 1080, frame rate 25fps, color, interface USB 3.0.
The laser radar is used for scanning the operation environment, establishing three-dimensional point cloud of the operation environment, transmitting the established three-dimensional point cloud of the operation environment to the information processing unit through the control unit, and processing the three-dimensional point cloud of the operation environment through the information processing unit to realize operation path planning, guidance information, an execution unit and operation target positioning; important performance indicators include: the measuring distance is 0.1m-30m, the measuring range is 0-70 degrees, the measuring precision is 0.1-10m: +/-30 mm, 10-30m: +/-50 mm, the angular resolution is 0.25 degrees (360 °/1,440steps), and the scanning time is 25 ms.
The six-dimensional force sensor is used for acquiring force information at the tail end of the execution unit in real time, transmitting the acquired force information to the force sense interaction unit and the information processing unit through the control unit, assisting the system to sense the magnitude of the operation force and timely discovering emergency situations such as overlarge collision force under a blind area;
the anti-collision ultrasonic sensor is used for detecting the distance between the execution unit and the barrier in real time in the operation process of the execution unit and transmitting the distance to the information processing unit through the control unit, so that the robot can operate in a safe range.
In this embodiment, the control unit includes: an interface module and a controller (also called a mechanical arm controller); the interface module includes: the system comprises a human-computer interaction data interface, a target environment data interface, a task instruction transmission interface and a control instruction transmission interface; wherein,
the human-computer interaction data interface is used for the control unit to communicate with the visual interaction unit and the force sense interaction unit;
the target environment data interface includes: one or more of RS232, RS422, RS485, USB and CAN bus interfaces are used for realizing the communication between the control unit and the acquisition unit;
the task instruction transmission interface is used for realizing communication between the control unit and the information processing unit;
and the control instruction transmission interface is used for realizing the communication between the control unit and the driving unit.
The controller is used for receiving the task instruction issued by the information processing unit, the expected pose of each joint of the execution unit output by the force sense interaction unit, and the information acquired by the acquisition unit; it establishes an impedance control mechanism based on dynamic and kinematic analysis and issues control instructions to the driving unit according to the task instruction and the expected joint poses.
As shown in fig. 2, in this embodiment, the vertical gyroscope, binocular camera, laser radar, six-dimensional force sensor and anti-collision ultrasonic sensor may be used to sense the environment, obtaining the operation scene and real-time operation information and transmitting them to the information processing unit through the control unit. The information processing unit fuses and processes this information together with the actual pose of each joint of the execution unit fed back by the control unit to construct a visual system for augmented reality display. On this basis, as shown in fig. 3, the information processing unit also judges the environmental friendliness according to a preset judgment standard: the autonomous operation mode is used for operation environments that are clear, smoke-free and allow high positioning accuracy (friendly environments), while the man-machine cooperative operation mode is adopted when environmental conditions are harsh or the autonomous mode cannot complete the task. A task instruction is then issued to the control unit according to the corresponding operation mode, and the pose of each joint of the execution unit is monitored during execution until the operation task is completed; the task instruction comprises: the resolved pose of each joint of the execution unit, path planning information and guidance information.
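A minimal sketch of this mode-selection rule follows. The patent names the criteria (clear environment, no smoke, high positioning accuracy) but not their limits, so the thresholds and parameter names here are illustrative assumptions.

```python
def select_mode(visibility, smoke_level, positioning_error_mm, autonomous_can_finish=True):
    """Return the operation mode per the environment-friendliness judgment standard."""
    # Assumed thresholds: clear view, negligible smoke, positioning error under 5 mm.
    friendly = visibility > 0.8 and smoke_level < 0.1 and positioning_error_mm < 5.0
    if friendly and autonomous_can_finish:
        return "autonomous"
    return "man-machine cooperative"

mode = select_mode(visibility=0.95, smoke_level=0.02, positioning_error_mm=2.0)
```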
In this embodiment, the step of constructing the visual system for augmented reality display through information fusion and processing may include:
planning an operation path based on a fuzzy genetic algorithm according to the acquired information and generating guidance information (a hedged sketch follows this list), wherein the planned operation path comprises: the planned pose of each joint of the execution unit;
identifying the operation target through a deep learning algorithm;
resolving the pose of each joint of the execution unit by a rotation vector method and kinematic equations;
and performing augmented reality display of the remote operation scene and real-time operation information through three-dimensional environment modeling and seamless virtual-real fusion, and performing augmented reality display of the operation path planning information and guidance information.
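To make the path-planning step concrete, here is a hedged sketch of a genetic algorithm whose fitness blends path length with a fuzzy, membership-weighted safety penalty, in the spirit of the "fuzzy genetic algorithm" named above. The 2-D waypoint encoding, obstacle list and all parameters are illustrative assumptions, not the patent's actual algorithm.

```python
import math
import random

OBSTACLES = [(0.5, 0.5, 0.15)]            # (x, y, radius); assumed scene
START, GOAL = (0.0, 0.0), (1.0, 1.0)
N_WAYPOINTS, POP, GENS = 4, 40, 60

def clearance(p):
    """Signed distance from point p to the nearest obstacle boundary."""
    return min(math.hypot(p[0] - ox, p[1] - oy) - r for ox, oy, r in OBSTACLES)

def fuzzy_weight(c):
    """Crude triangular membership: penalize low clearance, fading out by 0.3 m."""
    return max(0.0, min(1.0, 1.0 - c / 0.3))

def fitness(path):
    """Lower is better: path length plus a fuzzy-weighted safety penalty."""
    pts = [START] + path + [GOAL]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    worst = min(clearance(p) for p in path)
    penalty = 10.0 if worst < 0 else fuzzy_weight(worst)   # collisions punished hard
    return length + penalty

def random_path():
    return [(random.random(), random.random()) for _ in range(N_WAYPOINTS)]

def crossover(a, b):
    cut = random.randrange(1, N_WAYPOINTS)
    return a[:cut] + b[cut:]

def mutate(path, sigma=0.05):
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in path]

pop = [random_path() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)                     # best candidates first
    elite = pop[:POP // 4]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]
best = min(pop, key=fitness)                  # planned waypoints between START and GOAL
```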
In this embodiment, the visual interaction unit displays the remote operation scene and real-time operation information in augmented reality through three-dimensional environment modeling and seamless virtual-real fusion, provides auxiliary information such as man-machine cooperative operation guidance and operation path planning, and assists in completing man-machine cooperative operation. That is, the visual interaction unit perceives the environment through the vertical gyroscope, binocular camera, laser radar, six-dimensional force sensor and anti-collision ultrasonic sensor, and realizes the augmented reality display of the remote operation scene and real-time operation information by seamlessly fusing the real scene with virtual information.
In this embodiment, during the man-machine cooperative operation, the operator may determine the task instruction according to the remote operation scene, the real-time operation information, and the task requirement displayed by the visual interaction unit.
In this embodiment, the force sense interaction unit includes a handle, transmission equipment, sliding guide rails, a bracket and the like. It adopts a hybrid serial-parallel translation and rotation mechanism and is provided with a six-degree-of-freedom body made of aviation titanium alloy and a gravity compensation mechanism; wherein,
the serial-parallel translation and rotation mechanism comprises a Delta translational parallel mechanism and a serial rotation mechanism; the Delta parallel mechanism offers high precision and strong load-bearing capacity, while the serial rotation mechanism offers a large workspace and an easily solved forward kinematics;
the six-degree-of-freedom body comprises 3 translational degrees of freedom (translation along X, Y, Z) and 3 rotational degrees of freedom (rotation about X, Y, Z), meeting the needs of highly flexible, precise force-feedback operation. The aviation titanium alloy is light, strong, corrosion-resistant and easy to process, meeting the design requirements of the force sense interaction unit body.
The gravity compensation mechanism derives a theoretical gravity-term formula for the torque of each joint from kinematic and dynamic analysis; it is provided to relieve operator fatigue during long operations.
As shown in fig. 4, the operator issues control instructions to the execution unit through the force sense interaction unit; these include, but are not limited to, the expected pose of each joint of the execution unit, and are output as analog signals. The force sense interaction unit transmits the output analog signals to the control unit, which applies operating constraints to the force sense interaction unit to prevent safety hazards caused by misoperation. The control unit issues control instructions to the driving unit by combining the task instruction from the information processing unit with the control instruction from the force sense interaction unit; the driving unit decomposes the control instructions into torque and rotational speed to drive the power unit to supply the power required by the execution unit.
In this embodiment, the control unit has high real-time data requirements and adopts a hierarchical architecture. As shown in fig. 5, it specifically adopts a "Linux real-time kernel + embedded PLC + EtherCAT bus" scheme: embedded PLCs running in a Linux environment achieve a 400 µs control cycle for computing and processing data, realizing real-time control. The EtherCAT real-time Ethernet bus adopts a ring-redundancy topology, enhancing the stability of system communication. The control unit receives the task instruction issued by the information processing unit, then performs task scheduling and configuration management, completes functions such as task planning, information monitoring, algorithm scheduling and human-computer interaction, and issues control instructions to the driving unit.
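The following sketch shows the shape of such a fixed-period control cycle. The feedback/command callbacks are hypothetical stand-ins for the EtherCAT master I/O and the embedded-PLC logic (no real EtherCAT library API is used or implied), and a busy-wait stands in for a real-time scheduler.

```python
import time

CYCLE_S = 400e-6   # 400 us control period, as in the embodiment

def run_control_loop(read_feedback, compute_command, write_command, n_cycles=2500):
    """Fixed-period loop: read joint feedback, compute, write drive targets."""
    deadline = time.perf_counter()
    for _ in range(n_cycles):                 # 2500 cycles = 1 s at 400 us
        feedback = read_feedback()            # joint poses from the bus (stub)
        command = compute_command(feedback)   # fuse task + force-interaction input
        write_command(command)                # torque/speed targets to the drives
        deadline += CYCLE_S
        while time.perf_counter() < deadline: # busy-wait until the next deadline;
            pass                              # a real system uses an RT scheduler

# Illustrative stubs standing in for the bus I/O and the PLC logic:
run_control_loop(lambda: {"q": [0.0] * 6},
                 lambda fb: {"tau": [0.0] * 6},
                 lambda cmd: None)
```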
In this embodiment, the driving unit includes a driver (the mechanical arm driver).
In this embodiment, the driving unit receives the control instructions issued by the control unit, decomposes them into torque and rotational speed, obtains the driving parameters of each drive so as to drive the power unit to provide operating power, and feeds back the working condition of the power unit to the control unit.
In this embodiment, electric drive is a drive system using electric motors as the power source; it is the most common drive mode for robot arms, characterized by fast operating response, strong driving force, powerful photoelectric detection, transmission and control functions, flexible and diverse control modes, and good control performance.
In this embodiment, the working principle of the driving unit is shown in fig. 6: the driving unit decomposes the control command into torque and rotational speed. Assuming the control command specifies a motion of the end of the execution unit, the forward kinematics is first given by

${}^{0}T_{6} = {}^{0}T_{1}(\theta_{1})\,{}^{1}T_{2}(\theta_{2})\,{}^{2}T_{3}(\theta_{3})\,{}^{3}T_{4}(\theta_{4})\,{}^{4}T_{5}(\theta_{5})\,{}^{5}T_{6}(\theta_{6})$  (1)

where ${}^{i-1}T_{i}$ (i = 1, ..., 6) is the homogeneous transformation matrix between adjacent joints of the execution unit, $\theta_{1}, \dots, \theta_{6}$ are the joint variables (one per degree of freedom), and ${}^{0}T_{6}$ represents the pose (position and orientation) of the end of the execution unit.
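A minimal numeric sketch of Eq. (1) follows, chaining per-joint homogeneous transforms. The Denavit-Hartenberg convention and the DH values below are illustrative assumptions for a generic 6-DOF arm, not the geometry of the patent's execution unit.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform (i-1)T(i) from the standard DH parameters of joint i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint transforms: 0T6 = 0T1(th1) * ... * 5T6(th6), as in Eq. (1)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # end pose: orientation in T[:3, :3], position in T[:3, 3]

# Illustrative 6-joint DH table: (d, a, alpha) per joint.
DH = [(0.34, 0.0, np.pi / 2), (0.0, 0.40, 0.0), (0.0, 0.05, np.pi / 2),
      (0.35, 0.0, -np.pi / 2), (0.0, 0.0, np.pi / 2), (0.12, 0.0, 0.0)]
pose = forward_kinematics(np.deg2rad([10, -30, 45, 0, 20, 5]), DH)
```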
In this embodiment, the power unit includes: a servo motor and a UPS power supply;
the servo motor is used for receiving the driving instructions issued by the driving unit, powering the operation of the execution unit, and precisely controlling the pose of each joint so that the execution unit completes the corresponding operation tasks. In this embodiment, the selected servo motor has a rated voltage of 220 V, rated power of 1500 W, rated speed of 2000 rpm, rated torque of 27 N·m, and a weight of 25.5 kg;
the UPS is used to avoid power failure during system operation, ensuring an uninterrupted power supply so that operation tasks are completed efficiently and safely. The selected UPS is of the off-line type, with an output voltage range of 110 V-300 V, rated power of 4.8 kVA, and an output frequency range of 47-53 Hz; it can sustain 1.5 h of continuous operation after an unexpected power loss.
In this embodiment, the execution unit includes a mechanical arm. The mechanical arm is a multi-input multi-output, highly nonlinear, strongly coupled end execution mechanism used for receiving the driving instructions issued by the driving unit and feeding back the pose of each joint to the driving unit. In this embodiment, the execution unit acts on the control instructions sent in real time by the control unit and executes various operation tasks in the two operation modes, autonomous and man-machine cooperative.
In this embodiment, the workflow of the execution unit is shown in fig. 7. After start-up, the mechanical arm self-checks the state of each joint and its communication with the driving unit; once all joints are normal, it feeds back a ready signal to the driving unit and then receives driving instructions, according to which each joint moves. The acquisition unit monitors in real time the pose of each joint of the execution unit (from the control unit) and the coordinates of obstacles (from the ultrasonic detector), and the system judges whether the end of the execution unit has reached the specified coordinates and whether its end pose meets the operation requirements. When both conditions are met, the end of the execution unit begins to operate on the operation target and feeds back the pose of each joint in real time until the operation is completed; when either condition is not met, each joint continues to move according to the driving instructions until both conditions are satisfied.
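A hedged sketch of this workflow as a small control loop follows; the predicate and action callbacks (joints_ok, comm_ok, at_target, pose_ok, step_joints, operate) are hypothetical placeholders for the self-check, pose checks and joint motion described above.

```python
def run_execution_unit(joints_ok, comm_ok, at_target, pose_ok, step_joints, operate):
    """Fig. 7 workflow: self-check, move until both end conditions hold, then work."""
    if not (joints_ok() and comm_ok()):
        return "fault"                    # self-check failed: no ready signal sent
    # Ready signal fed back to the driving unit; now follow driving instructions.
    while not (at_target() and pose_ok()):
        step_joints()                     # keep moving per the driving instructions
    operate()                             # both conditions met: act on the target
    return "done"

# Illustrative use: the end "arrives" after three motion steps.
state = {"steps": 0}
result = run_execution_unit(
    joints_ok=lambda: True, comm_ok=lambda: True,
    at_target=lambda: state["steps"] >= 3, pose_ok=lambda: True,
    step_joints=lambda: state.__setitem__("steps", state["steps"] + 1),
    operate=lambda: None)                 # -> "done"
```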
Furthermore, an impedance control strategy based on dynamic and kinematic analysis is provided; it is suitable for interactive control between the mechanical arm and the working environment and resolves the compliance problem of pure position control or pure force control.
As shown in fig. 8, the force sense interaction unit simulates real physical interaction by outputting mechanical impedance in the augmented reality environment, physically coupling the operator to that environment; force control adopts the impedance control strategy, which resolves the compliance problem of pure position or force control and is suitable for interactive control between the mechanical arm and the working environment. Here F is the output force of the operator's moving arm and the force sense interaction unit, F_u is the operator's force acting on the force sense interaction unit, F_e is the feedback force of the augmented reality environment, X is the position of the force sense interaction unit, Z_u, Z_m and Z_e are the impedances of the operator, the force sense interaction unit and the virtual environment respectively, J is the moment of inertia, and K is the force-feedback gain coefficient. The force-feedback impedance control uses a six-dimensional force sensor to acquire the operator's applied force, closing the force-control loop. The control unit compares the actual output force at the end with the desired output force and uses the difference to adjust the device.
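A minimal one-axis sketch of this loop follows, assuming a mass-damper-stiffness model for the device impedance Z_m; the numeric parameters and the 400 µs step are illustrative assumptions rather than values from the patent.

```python
M, B, K_STIFF = 2.0, 25.0, 400.0   # assumed virtual mass, damping, stiffness (Z_m)
K = 0.8                            # force-feedback gain coefficient (symbol K above)
DT = 400e-6                        # one 400 us control cycle

def impedance_step(F_u, F_e, x, x_dot):
    """One cycle: net force drives M*x'' + B*x' + K_STIFF*x = F_u - K*F_e."""
    F = F_u - K * F_e                              # operator force vs. scaled feedback
    x_ddot = (F - B * x_dot - K_STIFF * x) / M
    x_dot += x_ddot * DT
    x += x_dot * DT
    return x, x_dot                                # new handle position / velocity

x = x_dot = 0.0
for _ in range(2500):                              # simulate 1 s of interaction
    x, x_dot = impedance_step(F_u=5.0, F_e=2.0, x=x, x_dot=x_dot)
```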
In this embodiment, the force sense interaction unit can receive the fed-back force and torque, enriching the operating experience and improving human-computer interaction capability.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (3)

1. A human-computer interaction operating system, comprising: an acquisition unit, an information processing unit, a visual interaction unit, a force sense interaction unit, a control unit, a driving unit, a power unit and an execution unit; wherein,
the acquisition unit is used for acquiring the operation scene and real-time operation information and transmitting the acquired information to the information processing unit through the control unit, wherein the operation information comprises: pose information of each joint of the execution unit;
the information processing unit is used for constructing an augmented reality display according to the acquired information and the actual pose of each joint of the execution unit fed back by the control unit, determining an operation mode, and issuing a task instruction to the control unit according to the determined operation mode, wherein the operation mode comprises: an autonomous operation mode and a man-machine cooperative operation mode;
the visual interaction unit is used for displaying a remote operation scene and real-time operation information in an augmented reality manner;
the force sense interaction unit is used for remotely controlling the execution unit in real time through the control unit, based on force feedback, according to the task instruction received by the control unit and the force information at the end of the execution unit;
the control unit is used for issuing control instructions to the driving unit according to the task instruction issued by the information processing unit and the expected pose of each joint of the execution unit output by the force sense interaction unit;
the driving unit is used for decomposing the control command issued by the control unit into torque and rotating speed so as to drive the power unit to provide power required by the operation of the execution unit;
the execution unit is used for executing the operation tasks under the autonomous and man-machine cooperative operation modes according to the power provided by the power unit;
the control unit adopts a "Linux real-time kernel + embedded PLC + EtherCAT bus" scheme;
wherein the acquisition unit includes: a vertical gyroscope, a binocular camera, a laser radar, a six-dimensional force sensor and an anti-collision ultrasonic sensor; wherein,
the vertical gyroscope is used for detecting the roll angle and pitch angle of the execution unit, determining the end attitude information of the execution unit and transmitting it to the information processing unit through the control unit;
the binocular camera is used for acquiring the position states of the operation environment and the operation target in real time and transmitting them to the information processing unit through the control unit;
the laser radar is used for scanning the operation environment, establishing a three-dimensional point cloud of the operation environment and transmitting it to the information processing unit through the control unit;
the six-dimensional force sensor is used for acquiring force information at the end of the execution unit in real time and transmitting it to the force sense interaction unit and the information processing unit through the control unit;
the anti-collision ultrasonic sensor is used for detecting the distance between the execution unit and obstacles in real time during operation and transmitting it to the information processing unit through the control unit;
the force sense interaction unit adopts a hybrid serial-parallel translation and rotation mechanism, and is provided with a six-degree-of-freedom body made of aviation titanium alloy and a gravity compensation mechanism; wherein,
the serial-parallel translation and rotation mechanism comprises: delta translational parallel mechanism and serial rotation mechanism;
the six-degree-of-freedom body comprises: 3 translational degrees of freedom and 3 rotational degrees of freedom;
wherein the information processing unit is configured to construct the augmented reality display according to the acquired information by:
planning an operation path based on a fuzzy genetic algorithm according to the acquired information and generating guidance information, wherein the planned operation path comprises: the planned pose of each joint of the execution unit;
identifying the operation target through a deep learning algorithm;
resolving the pose of each joint of the execution unit by a rotation vector method and kinematic equations;
performing augmented reality display of the remote operation scene and real-time operation information through three-dimensional environment modeling and seamless virtual-real fusion, and performing augmented reality display of the operation path planning information and guidance information;
the information processing unit is specifically configured to judge the environmental friendliness according to a preset judgment standard: if the environment is friendly, the autonomous operation mode is adopted; if the environment is harsh or the autonomous operation mode cannot complete the task, the man-machine cooperative operation mode is adopted; a task instruction is issued to the control unit according to the adopted operation mode, and the pose information of each joint of the execution unit is monitored during execution until the operation task is completed, wherein the task instruction comprises: the resolved pose of each joint of the execution unit, path planning information and guidance information;
the control unit adopts a hierarchical architecture and is specifically used for establishing an impedance control mechanism based on dynamic and kinematic analysis, issuing control instructions to the driving unit according to the task instruction issued by the information processing unit and the expected pose of each joint of the execution unit output by the force sense interaction unit;
wherein the power unit includes: a servo motor and a UPS power supply;
the servo motor is used for providing power for the operation of the execution unit and controlling the pose of each joint of the execution unit so that the execution unit can complete corresponding operation tasks;
the UPS is used for supplying power to the system;
the force sense interaction unit simulates real physical interaction by outputting mechanical impedance in the augmented reality environment, physically coupling the operator to that environment;
based on force-feedback impedance control, the force sense interaction unit uses a six-dimensional force sensor to acquire the operator's applied force, closing the force-control loop.
2. The human-computer interaction operation system according to claim 1, wherein the driving mode adopted by the driving unit is electric driving;
and the driving unit is also used for feeding back the working condition of the power unit to the control unit.
3. The human-computer interaction operation system according to claim 1, wherein the execution unit is further configured to feed back pose information of each joint to the driving unit;
the execution unit includes: a mechanical arm;
the mechanical arm is a multi-input multi-output, highly nonlinear, strongly coupled end execution mechanism.
CN201810234048.7A 2018-03-21 2018-03-21 Man-machine interaction operating system Active CN108519814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810234048.7A CN108519814B (en) 2018-03-21 2018-03-21 Man-machine interaction operating system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810234048.7A CN108519814B (en) 2018-03-21 2018-03-21 Man-machine interaction operating system

Publications (2)

Publication Number Publication Date
CN108519814A CN108519814A (en) 2018-09-11
CN108519814B true CN108519814B (en) 2020-06-02

Family

ID=63433852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810234048.7A Active CN108519814B (en) 2018-03-21 2018-03-21 Man-machine interaction operating system

Country Status (1)

Country Link
CN (1) CN108519814B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11855831B1 (en) 2022-06-10 2023-12-26 T-Mobile Usa, Inc. Enabling an operator to resolve an issue associated with a 5G wireless telecommunication network using AR glasses

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109048924A (en) * 2018-10-22 2018-12-21 深圳控石智能系统有限公司 A kind of intelligent robot flexible job devices and methods therefor based on machine learning
CN109491505B (en) * 2018-11-13 2021-01-12 天津城建大学 Percussion instrument playing simulation system and force feedback device based on virtual reality technology
CN110039547B (en) * 2019-05-27 2021-08-10 清华大学深圳研究生院 Man-machine interaction terminal and method for remote operation of flexible mechanical arm
CN110389664B (en) * 2019-06-25 2020-09-01 浙江大学 Fire scene simulation analysis device and method based on augmented reality
CN113064417B (en) * 2019-12-13 2022-11-15 苏州宝时得电动工具有限公司 Self-moving equipment and working method thereof
CN111168660B (en) * 2020-01-21 2021-05-07 北京科技大学 Redundant degree of freedom hydraulic heavy load robot arm initiative safety system
CN111443619B (en) * 2020-04-17 2021-06-08 南京工程学院 Virtual-real fused human-computer cooperation simulation method and system
CN112068457B (en) * 2020-08-17 2022-08-05 杭州电子科技大学 WebGL-based PLC configuration virtual simulation experiment system
CN112936267B (en) * 2021-01-29 2022-05-27 华中科技大学 Man-machine cooperation intelligent manufacturing method and system
CN113618731A (en) * 2021-07-22 2021-11-09 中广核研究院有限公司 Robot control system
CN114332936A (en) * 2021-12-29 2022-04-12 北京理工大学 Visual feedback method for improving hand motion precision in compact space in virtual environment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006004894A2 (en) * 2004-06-29 2006-01-12 Sensable Technologies, Inc. Apparatus and methods for haptic rendering using data in a graphics pipeline
CN102176222A (en) * 2011-03-18 2011-09-07 北京科技大学 Multi-sensor information collection analyzing system and autism children monitoring auxiliary system
CN106527177A (en) * 2016-10-26 2017-03-22 北京控制工程研究所 Multi-functional and one-stop type remote control design, the simulation system and method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107728642B (en) * 2017-10-30 2021-03-09 北京博鹰通航科技有限公司 Unmanned aerial vehicle flight control system and method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006004894A2 (en) * 2004-06-29 2006-01-12 Sensable Technologies, Inc. Apparatus and methods for haptic rendering using data in a graphics pipeline
CN102176222A (en) * 2011-03-18 2011-09-07 北京科技大学 Multi-sensor information collection analyzing system and autism children monitoring auxiliary system
CN106527177A (en) * 2016-10-26 2017-03-22 北京控制工程研究所 Multi-functional and one-stop type remote control design, the simulation system and method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"重载工业机器人控制关键技术综述";游玮,孔民秀;《机器人技术与应用》;20130502(第5期);第13-19页 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11855831B1 (en) 2022-06-10 2023-12-26 T-Mobile Usa, Inc. Enabling an operator to resolve an issue associated with a 5G wireless telecommunication network using AR glasses

Also Published As

Publication number Publication date
CN108519814A (en) 2018-09-11

Similar Documents

Publication Publication Date Title
CN108519814B (en) Man-machine interaction operating system
Gambao et al. A new generation of collaborative robots for material handling
US20180243899A1 (en) Remote control robot system
CN102085664B (en) Autonomous operation forestry robot intelligent control system
CN103753601A (en) Teleoperation mechanical arm of space cascade rotary joint type and combination thereof
CN101791750A (en) Robot remote control welding system and method used for remote welding
CN111515951A (en) Teleoperation system and teleoperation control method for robot
Xiao et al. Visual servoing for teleoperation using a tethered uav
CN111015681A (en) Communication machine room inspection robot system
CN214846390U (en) Dynamic environment obstacle avoidance system based on automatic guided vehicle
CN111203879A (en) Mechanical arm spraying robot capable of moving automatically
CN107553467B (en) Multifunctional master hand device with low gravity center
Hamaza et al. 2d contour following with an unmanned aerial manipulator: Towards tactile-based aerial navigation
CN111376263B (en) Human-computer cooperation system of compound robot and cross coupling force control method thereof
CN107662210A (en) A kind of resistance to irradiation dynamic power machine hand control system
CN112894827B (en) Method, system and device for controlling motion of mechanical arm and readable storage medium
CN113290549A (en) Special robot and control method thereof
CN113618731A (en) Robot control system
CN115847428B (en) Mechanical assembly auxiliary guiding system and method based on AR technology
CN113021344A (en) Master-slave heterogeneous teleoperation robot working space mapping method
CN110539315B (en) Construction robot based on virtual reality control
CN114589680A (en) Control device, special robot system and control method thereof
CN202622813U (en) Control system of full-driving combined adsorption type crawling robot
CN113263494A (en) Control device and special robot system
Kim et al. Preventive maintenance and remote inspection of nuclear power plants using tele-robotics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant