CN116631262A - Man-machine collaborative training system based on virtual reality and touch feedback device - Google Patents

Man-machine collaborative training system based on virtual reality and touch feedback device

Info

Publication number
CN116631262A
Authority
CN
China
Prior art keywords
virtual reality
formation
machine
man
feedback device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310633802.5A
Other languages
Chinese (zh)
Inventor
曾洪
孙燈峰
翟佳佳
徐晓英
宋爱国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202310633802.5A priority Critical patent/CN116631262A/en
Publication of CN116631262A publication Critical patent/CN116631262A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/08 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B9/085 Special purpose teaching, e.g. alighting on water, aerial photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/08 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B9/30 Simulation of view from aircraft
    • G09B9/307 Simulation of view from aircraft by helmet-mounted projector or display
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a man-machine cooperative training system based on virtual reality and a haptic feedback device, comprising: a physical simulation platform, a virtual reality device and a haptic feedback device. The computing center of the physical simulation platform calculates the formation state and motion guidance information, and cooperatively controls the man-machine formation through a formation organization algorithm. The virtual reality device comprises a head-mounted display and a handle; the user wears the head-mounted display to obtain a first-person view image of the virtual agent, and uses the handle to control the movement and interaction of the man-machine formation. The haptic feedback device haptically encodes the information from the computing center and provides haptic feedback to the user. By adopting a training method that combines immersive experience with haptic feedback, the invention requires no actual training site, overcomes the weak situation awareness caused by visual limitation and overload, realizes haptically perceived control of the unmanned aerial vehicle, better improves the man-machine cooperative training effect, and improves the task completion rate and stability.

Description

Man-machine collaborative training system based on virtual reality and touch feedback device
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a man-machine collaborative training system based on virtual reality and a haptic feedback device.
Background
A human-machine formation coordinates its actions through the complementary capabilities of its individual members, so it can complete tasks that are difficult for a single robot and improve the efficiency of the system as a whole, with advantages such as good redundancy and strong robustness. Man-machine formations are therefore widely used in many fields: in the military field for frontier-defense formation patrol, geographical resource survey, and reconnaissance and rescue; in rescue and disaster relief, they can greatly improve the efficiency of post-disaster search and rescue.
The application scenarios of man-machine formations are often complex and changeable. However, training man-machine formations to complete tasks cooperatively in a real environment still has many drawbacks: building a training site wastes manpower and material resources, and cannot meet the training requirements of various special environments, such as extremely low-visibility environments.
Meanwhile, in a complex dynamic environment, human situation awareness is often weakened by factors such as a limited, overloaded or non-line-of-sight visual channel; a feedback channel other than vision can improve a person's timely and sensitive awareness of robot teammates, environmental threats and task targets.
Disclosure of Invention
In order to solve the above problems, the invention discloses a man-machine collaborative training system based on virtual reality and a haptic feedback device. The system combines immersive experience with haptic feedback, occupies no actual site, and overcomes the weak situation awareness caused by visual limitation and overload. It realizes visual, auditory and haptic control of the unmanned vehicles, serves as an effective supplement beyond the operator's visual and auditory perception, better improves the man-machine collaborative training effect, and can also meet the requirements of man-machine formation collaborative training in different special environments or under different task demands, improving the task completion rate and stability.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
a human-machine co-training system based on virtual reality and haptic feedback devices, comprising: physical simulation platform, virtual reality equipment and wearable haptic feedback device;
the physical simulation platform comprises a man-machine formation, a physical scene and an interface; the man-machine formation moves in the physical scene and executes set tasks; the physical scene comprises a plurality of interactable objects and obstacles; the man-machine formation comprises a virtual agent, a plurality of robots and a computing center; the computing center calculates formation state information and motion guidance information for the task target using the robot sensor information and the scene information, and cooperatively controls the movement and transformation of the man-machine formation through a formation organization algorithm; the interface provides a way to modify the physical scene, the formation organization algorithm, and so on;
The virtual reality device comprises a virtual reality head-mounted display and a virtual reality handle; the user wears the head-mounted display to acquire a first-person view image of the virtual agent, and uses the handle to control the movement and transformation of the man-machine formation and to interact with the interactable objects in the physical scene;
the wearable haptic feedback device has a plurality of haptic feedback modes, and performs haptic encoding on the formation state information and the motion guide information obtained by the computing center to provide haptic feedback for a user.
Preferably, the physical scene and the man-machine formation model of the physical simulation platform are built in the physics simulation engine Gazebo, and the computing center is implemented in code on the Robot Operating System (ROS).
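By way of illustration only, a minimal skeleton of such a computing-center node is sketched below, assuming ROS 1 with rospy; the topic names, message types and placeholder control law are assumptions for the sketch, not part of the disclosure.

```python
#!/usr/bin/env python
# Illustrative skeleton of a computing-center ROS node (assumed topics/logic).
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

class ComputingCenter(object):
    def __init__(self):
        self.leader_odom = None
        # Leader odometry streamed from the Gazebo simulation (assumed topic).
        rospy.Subscriber('/leader/odom', Odometry, self.on_leader_odom)
        # Velocity command for one follower robot (assumed topic).
        self.cmd_pub = rospy.Publisher('/follower_1/cmd_vel', Twist, queue_size=1)

    def on_leader_odom(self, msg):
        self.leader_odom = msg

    def step(self):
        if self.leader_odom is None:
            return
        cmd = Twist()
        # Placeholder control law: match the leader's forward speed; a real
        # formation organization algorithm would compute per-robot commands.
        cmd.linear.x = self.leader_odom.twist.twist.linear.x
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('computing_center')
    center = ComputingCenter()
    rate = rospy.Rate(20)  # 20 Hz control loop
    while not rospy.is_shutdown():
        center.step()
        rate.sleep()
```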
Preferably, the scene information includes a static map and task target points; the static map is obtained in advance by simultaneous localization and mapping (SLAM), and the task target points are dynamically set by a remote device through 5G network communication.
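By way of illustration, the sketch below shows how a remote device could push a task target point into the computing center as a ROS message once the 5G link terminates at the ROS network; the topic name '/task_goal' and the coordinates are assumptions.

```python
#!/usr/bin/env python
# Illustrative remote goal setter; topic name and coordinates are assumed.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('remote_goal_setter')
pub = rospy.Publisher('/task_goal', PoseStamped, queue_size=1, latch=True)

goal = PoseStamped()
goal.header.frame_id = 'map'        # coordinates in the static-map frame
goal.header.stamp = rospy.Time.now()
goal.pose.position.x = 4.0          # example target point
goal.pose.position.y = -2.5
goal.pose.orientation.w = 1.0

pub.publish(goal)                   # latched, so late subscribers still get it
rospy.sleep(1.0)
```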
Preferably, each robot is equipped with: a laser radar, a depth camera, an inertial sensor and a communication module.
Preferably, the computing center collects the robot sensor information and cooperatively controls the man-machine formation motion according to the different formation organization algorithms and the transformation instructions issued by the user.
Preferably, the computing center generates a planned path to the next task target point using the A* path planning algorithm on the static map, obtains the position of the man-machine formation using the Adaptive Monte Carlo Localization (AMCL) algorithm from the robot sensor information, and from these generates real-time motion guidance information.
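By way of illustration only, a compact A* planner over a 2D occupancy grid is sketched below; the grid values, the 4-connected neighborhood and the Manhattan heuristic are simplifying assumptions relative to the map produced by SLAM.

```python
# Illustrative A* path planner on a 2D occupancy grid (0 = free, 1 = occupied).
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]   # (f, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                 # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:   # obstacle cell
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                came_from[nxt] = cur
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))  # [(0,0), (0,1), (0,2), (1,2), (2,2), (2,3)]
```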
Preferably, the computing center tracks the motion and transformation of the man-machine formation in real time, and updates the current formation state information in real time.
Preferably, the virtual reality device includes a virtual reality head-mounted display configured with a gyroscope and an accelerometer, and a handle configured with a gyroscope, an accelerometer and a vibration unit.
Preferably, the virtual reality device communicates with ROS through Unity3D, sends control commands to the man-machine formation, and receives in return the first-person view image of the virtual agent.
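Unity-side bridges typically reach ROS through a rosbridge websocket; by way of illustration, the Python sketch below performs the equivalent exchange with the roslibpy client, assuming a running rosbridge_server on port 9090 and with all topic names assumed.

```python
# Illustrative rosbridge client (a Python stand-in for the Unity3D side).
import roslibpy

client = roslibpy.Ros(host='localhost', port=9090)
client.run()  # open the websocket connection

# Forward a handle command into the man-machine formation.
cmd = roslibpy.Topic(client, '/formation/cmd_vel', 'geometry_msgs/Twist')
cmd.publish(roslibpy.Message({
    'linear': {'x': 0.5, 'y': 0.0, 'z': 0.0},
    'angular': {'x': 0.0, 'y': 0.0, 'z': 0.2},
}))

# Receive the virtual agent's first-person camera frames for the head display.
def on_frame(message):
    print('received frame, %d bytes of data' % len(message['data']))

camera = roslibpy.Topic(client, '/virtual_agent/camera/compressed',
                        'sensor_msgs/CompressedImage')
camera.subscribe(on_frame)

try:
    while client.is_connected:
        pass  # spin; a real client would integrate with its own event loop
except KeyboardInterrupt:
    camera.unsubscribe()
    client.terminate()
```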
Preferably, the wearable haptic feedback device has a plurality of haptic feedback modalities, and haptically encodes and feeds back the formation state information and the motion guidance information.
The beneficial effects of the invention are as follows:
(1) Using the virtual reality equipment, the user obtains an immersive, as-if-on-the-scene experience, making the man-machine formation collaborative training effect closer to the real scene; meanwhile, the haptic feedback channel compensates for the shortcomings of visual feedback, so that man-machine formation cooperation works better, the load on the user's visual channel is reduced, and the stability of man-machine formation cooperation and the task completion efficiency are improved.
(2) The physical scene and the formation organization algorithm can be adjusted through the interface, meeting the requirements of man-machine formation collaborative training in different special environments or under different task requirements;
(3) No actual site is occupied, greatly saving the time and resources required to set up scenes.
Drawings
Fig. 1 is a schematic diagram of a human-computer collaborative training system based on a virtual reality and haptic feedback device according to an embodiment of the present invention.
Description of the embodiments
The present invention is further illustrated by the following drawings and detailed description, which are to be understood as merely illustrative of the invention and not limiting its scope.
Referring to fig. 1, the human-computer co-training system based on virtual reality and a haptic feedback device according to the present invention includes: a physical simulation platform, a virtual reality device and a wearable haptic feedback device;
the physical simulation platform comprises a man-machine formation, a physical scene and an interface, wherein the man-machine formation cooperatively moves in the physical scene and executes tasks set by a training setter; the physical scene comprises a plurality of interactable objects and obstacles; the man-machine formation comprises a virtual agent, a plurality of robots and a computing center; the calculation center calculates formation state information including formation transformation conditions, size transformation conditions, surrounding obstacles and the like and movement guiding information aiming at a task target by utilizing a robot laser radar, a depth camera, inertial sensor information, a static map, task target point coordinates, and cooperatively controls movement and transformation of a man-machine formation through a formation organization algorithm-a navigator-follower algorithm; the interface may provide a way to modify physical scenarios, formation organization algorithms, etc.
The virtual reality device comprises a virtual reality head-mounted display and a virtual reality handle; the user wears the head-mounted display to acquire a first-person view image of the virtual agent, and uses the handle to control the movement of the virtual agent within the man-machine formation, formation-shape changes, formation-size changes, and interaction with the interactable objects in the physical scene.
The wearable haptic feedback device comprises three modes: vibration feedback, squeeze feedback and shear feedback; it haptically encodes the formation state information and motion guidance information obtained by the computing center and provides haptic feedback to the user.
In practical application, the training setter first adjusts the physical scene according to the training requirements, sets the formation organization algorithm to the leader-follower algorithm, sets the original task target points in the physical scene, stores the coordinate information of the task target points in the computing center of the man-machine formation as scene information, and informs the user of the visual characteristics of the task target points by voice. During training, the training setter can still dynamically set task target points for the user to explore or interact with.
The user controls the motion of the virtual agent in the physical scene with the virtual reality handle, and obtains the virtual agent's first-person view image, the environmental sound and the training setter's voice through the virtual reality head-mounted display, gaining an immersive, clear awareness of the surrounding environment, the other formation members and so on. According to changes in the surrounding environment, the user can adjust the formation shape, formation size and so on in time; for example, when encountering a narrow passage, the user can command the man-machine formation to tighten.
The computing center of the man-machine formation collects the robots' sensor information, obtains each robot's current position using the Adaptive Monte Carlo Localization (AMCL) algorithm, calculates each robot's expected position under the leader-follower algorithm according to the formation-shape and size-change instructions issued by the user, and adjusts each robot's control speed using a PID controller combined with the artificial potential field method, thereby realizing cooperative control of the robots.
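By way of illustration only, a simplified single-step version of such a control law is sketched below; the gains, the formation offset and the obstacle model are assumptions, and a full implementation would add the integral and derivative PID terms and run per robot at the control rate.

```python
# Illustrative leader-follower control step (assumed geometry and gains).
# Each follower keeps a fixed offset from the leader, expressed in the
# leader's body frame; a proportional law plus a repulsive potential-field
# term turns the position error into a velocity command.
import math

def follower_cmd(leader_xy, leader_yaw, offset, follower_xy,
                 obstacles, kp=1.0, k_rep=0.5, influence=1.0):
    # Desired position: leader pose composed with the formation offset.
    ox, oy = offset
    dx = leader_xy[0] + ox * math.cos(leader_yaw) - oy * math.sin(leader_yaw)
    dy = leader_xy[1] + ox * math.sin(leader_yaw) + oy * math.cos(leader_yaw)

    # Proportional term driving the follower toward its formation slot.
    vx = kp * (dx - follower_xy[0])
    vy = kp * (dy - follower_xy[1])

    # Artificial potential field: repulsion from obstacles within range.
    for obx, oby in obstacles:
        rx, ry = follower_xy[0] - obx, follower_xy[1] - oby
        d = math.hypot(rx, ry)
        if 1e-6 < d < influence:
            gain = k_rep * (1.0 / d - 1.0 / influence) / d**2
            vx += gain * rx / d
            vy += gain * ry / d
    return vx, vy

# Follower holds a slot 1 m behind and 1 m left of the leader.
print(follower_cmd((0.0, 0.0), 0.0, (-1.0, 1.0), (-1.2, 0.8), [(0.0, 1.0)]))
# -> approximately (0.2, 0.2)
```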
Using the scene's static grid map obtained by SLAM and the coordinates of the set task target points, a planned path from the currently explored task target point to the next task target point is generated with the A* algorithm; sub-goals are then generated along the planned path according to a curvature criterion and, combined with the leader's current position, yield motion direction guidance information that is updated in real time.
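By way of illustration, the sketch below computes the direction cue as the bearing from the leader's pose to the current sub-goal, expressed in the leader's frame; the sign convention is an assumption.

```python
# Illustrative computation of the real-time direction cue: the bearing from
# the leader's pose to the current sub-goal, wrapped to (-pi, pi].
import math

def direction_cue(leader_x, leader_y, leader_yaw, subgoal_x, subgoal_y):
    bearing = math.atan2(subgoal_y - leader_y, subgoal_x - leader_x)
    rel = bearing - leader_yaw
    # Wrap into (-pi, pi] so "turn left/right" is unambiguous.
    while rel <= -math.pi:
        rel += 2.0 * math.pi
    while rel > math.pi:
        rel -= 2.0 * math.pi
    return rel  # radians; positive = sub-goal lies to the leader's left

print(math.degrees(direction_cue(0.0, 0.0, 0.0, 1.0, 1.0)))  # 45.0
```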
By tracking the user's transformation instructions and the robots' current positions in real time, the formation state information of the current formation, including formation-shape changes, size changes and surrounding obstacle conditions, is updated in real time.
The user wears the wearable haptic feedback device. The formation state information and motion guidance information updated in real time by the computing center are transmitted to the main control unit of the wearable haptic feedback device through a wireless serial port.
The formation-shape change situation and the surrounding obstacle information are mapped to the vibration feedback part, which uses the principle of apparent movement to convey information to the operator through regular vibration of a vibration motor array. For example, when a formation change is completed, the motor array vibrates in the corresponding formation shape, telling the user that the change is complete, so the user no longer needs to divert attention to visually confirm it.
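By way of illustration, a sketch of such a shape-to-pattern mapping is given below; the 3x3 motor layout and the specific patterns are assumptions, not the patented encoding.

```python
# Illustrative mapping from formation shape to a 3x3 vibration-motor pattern
# (1 = motor pulsed on). Layout and patterns are assumptions.
PATTERNS = {
    'line': [[0, 0, 0],
             [1, 1, 1],
             [0, 0, 0]],
    'triangle': [[0, 1, 0],
                 [0, 0, 0],
                 [1, 0, 1]],
    'column': [[0, 1, 0],
               [0, 1, 0],
               [0, 1, 0]],
}

def formation_complete_pattern(shape):
    """Return the motor on/off grid to pulse when a formation change completes."""
    return PATTERNS[shape]

for row in formation_complete_pattern('triangle'):
    print(row)
```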
The size change is mapped to the squeeze feedback part, which conveys information to the operator by controlling the tightening and loosening of the arm strap to varying degrees.
The motion direction guidance is mapped to the shear feedback part: rotating tactors at the motor ends, distributed at fixed angular intervals, shear the skin to convey direction information, prompting the operator toward the direction of the task target point.
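By way of illustration, the sketch below selects which tactor to drive by quantizing the direction cue to the nearest tactor; the choice of eight tactors at 45-degree spacing is an assumption.

```python
# Illustrative tactor selection: quantize the direction cue to the nearest
# of N tactors spaced at fixed angular intervals (N = 8 is assumed).
import math

def select_tactor(direction_rad, n_tactors=8):
    step = 2.0 * math.pi / n_tactors
    idx = int(round(direction_rad / step)) % n_tactors
    return idx  # 0 = straight ahead, indices increase counter-clockwise

print(select_tactor(math.radians(50)))    # -> 1 (tactor nearest 45 degrees)
print(select_tactor(math.radians(-100)))  # -> 6 (nearest -90 degrees)
```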
It should be noted that the foregoing merely illustrates the technical idea of the present invention and is not intended to limit the scope of the present invention, and that a person skilled in the art may make several improvements and modifications without departing from the principles of the present invention, which fall within the scope of the claims of the present invention.

Claims (10)

1. A man-machine cooperative training system based on virtual reality and a haptic feedback device, characterized by comprising: a physical simulation platform, a virtual reality device and a wearable haptic feedback device;
the physical simulation platform comprises a man-machine formation, a physical scene and an interface; the man-machine formation moves in the physical scene and executes set tasks; the physical scene comprises a plurality of interactable objects and obstacles; the man-machine formation comprises a virtual agent, a robot and a computing center; the computing center calculates formation state information and motion guidance information for the task target using the robot sensor information and the scene information, and cooperatively controls the movement and transformation of the man-machine formation through a formation organization algorithm; the interface provides a way to modify the physical scene and the formation organization algorithm;
the virtual reality device comprises a virtual reality head-mounted display and a virtual reality handle; the user wears the head-mounted display to acquire a first-person view image of the virtual agent, and uses the handle to control the movement and transformation of the man-machine formation and to interact with the interactable objects in the physical scene;
the wearable haptic feedback device has a plurality of haptic feedback modes; it haptically encodes the formation state information and motion guidance information obtained by the computing center and provides haptic feedback to the user.
2. A training method using the human-machine co-training system based on virtual reality and a haptic feedback device according to claim 1, characterized in that:
the training setter adjusts the physical scene according to the training requirement, sets the formation cooperative control method, sets an original task target point in the physical simulation platform, wears the wearable haptic feedback device on the arm by a user, controls the motion of a virtual agent in the physical simulation platform by using a virtual reality handle, obtains a first visual angle image and environment sound of the virtual agent and voice of the training setter through a virtual reality head display, and immersively learns the surrounding environment, the conditions of other members of the formation and the task target point; the method comprises the steps of collecting sensor information of a robot in a calculation center of man-machine formation, sensing the formation information state by using touch force generated on the skin surface of an arm in a viewpoint blind area, obtaining the current position of the robot by using a positioning algorithm, calculating the expected position of the robot under a navigator-follower algorithm according to the formation transformation and size transformation instruction conditions issued by a user, and adjusting the control speed of the robot by using a PID algorithm and an artificial potential field method in a combined mode, so that cooperative control of each robot is realized.
3. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: the physical scene and the man-machine formation model of the physical simulation platform are built in the physics simulation engine Gazebo, and the computing center is implemented in code on the Robot Operating System (ROS).
4. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: the scene information comprises a static map and task target points; the static map is obtained in advance by the simultaneous localization and mapping (SLAM) technique; the task target points are dynamically set by a remote device through 5G network communication.
5. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: each robot is equipped with multiple sensors and modules, including: a laser radar, a depth camera, an inertial sensor and a communication module.
6. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: the computing center collects the robot sensor information and cooperatively controls the man-machine formation motion according to the different formation organization algorithms and the transformation instructions issued by the user.
7. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: the computing center generates a planned path to the next task target point using the A* path planning algorithm on the static map, and obtains the position of the man-machine formation using the Adaptive Monte Carlo Localization (AMCL) algorithm from the robot sensor information, thereby generating real-time motion guidance information.
8. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: the computing center tracks the motion and transformation of the man-machine formation in real time and updates the current formation state information in real time.
9. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: the virtual reality device comprises a virtual reality head-mounted display configured with a gyroscope and an accelerometer, and a handle configured with a gyroscope, an accelerometer and a vibration unit; the virtual reality device communicates with ROS through Unity3D, sends control instructions to the man-machine formation, and returns the first-person view image of the virtual agent.
10. The human-machine co-training system based on virtual reality and haptic feedback device of claim 1, wherein: the wearable haptic feedback device is provided with a squeeze force feedback mode, a shear force feedback mode and a vibration feedback mode, and haptically encodes and feeds back the formation state information and the motion guidance information.
CN202310633802.5A 2023-05-31 2023-05-31 Man-machine collaborative training system based on virtual reality and touch feedback device Pending CN116631262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310633802.5A CN116631262A (en) 2023-05-31 2023-05-31 Man-machine collaborative training system based on virtual reality and touch feedback device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310633802.5A CN116631262A (en) 2023-05-31 2023-05-31 Man-machine collaborative training system based on virtual reality and touch feedback device

Publications (1)

Publication Number Publication Date
CN116631262A (en) 2023-08-22

Family

ID=87591745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310633802.5A Pending CN116631262A (en) 2023-05-31 2023-05-31 Man-machine collaborative training system based on virtual reality and touch feedback device

Country Status (1)

Country Link
CN (1) CN116631262A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117492381A (en) * 2023-09-08 2024-02-02 中山大学 Robot collaborative pointing simulation visualization method, system, equipment and storage medium
CN117492381B (en) * 2023-09-08 2024-06-11 中山大学 Robot collaborative pointing simulation visualization method, system, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination