CN117021117B - Mobile robot man-machine interaction and positioning method based on mixed reality - Google Patents
- Publication number
- CN117021117B (application CN202311287066.9A)
- Authority
- CN
- China
- Legal status (as listed by Google Patents; an assumption, not a legal conclusion): Active
Classifications
- B25J9/161 — Programme controls characterised by the control system: hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J13/006 — Controls for manipulators by means of a wireless system for controlling one or several manipulators
- B25J9/1628 — Programme controls characterised by the control loop
- B25J9/1653 — Programme controls characterised by the control loop: parameters identification, estimation, stiffness, accuracy, error analysis
- B25J9/1661 — Programme controls characterised by programming/planning systems: task planning, object-oriented languages
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/04815 — Interaction with a metaphor-based environment or interaction object displayed as three-dimensional
- G06F3/04842 — Selection of displayed objects or displayed text elements
- G06T17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
- G06T19/006 — Mixed reality
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V20/20 — Scene-specific elements in augmented reality scenes
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2210/12 — Bounding box
Abstract
The invention discloses a mixed-reality-based human-machine interaction and positioning method for mobile robots, belonging to the field of human-machine interaction. Interactive operations are completed through pointing and clicking gestures of the hands, so that the interaction between user and robot follows natural human behavior and is effective; touch-free remote interaction with distant objects improves the convenience of the interactive process; and because the robot's bounding box and the real-space surface model are transparent to the user, the user interacts directly with the robot and the environment from the user's own viewpoint, which strengthens the user's visual sense of the robot during interaction. The method acquires the mixed-reality space coordinates of the user's target point through remote hand interaction, strengthening the connection between the mixed-reality world and the real world: with simple hand motions, and without building a map, the user can quickly mark and locate any point in space, which greatly improves the efficiency of setting and locating target points and makes task allocation and movement control of the mobile robot more convenient.
Description
Technical Field
The invention belongs to the field of man-machine interaction, and particularly relates to a mobile robot man-machine interaction and positioning method based on mixed reality.
Background
With the rapid development of human-machine interaction technology, intelligent factories and smart living urgently require natural and efficient interaction modes. Mixed reality lets users switch freely between virtual and real space, enabling intuitive, natural 3D interaction among people, computers, and the environment, so that supervision and control of increasingly complex intelligent systems can be completed quickly through simple operations demanding little specialized training. In particular, with the technical support provided by hand interaction, voice interaction, and eye/head interaction, users can operate intelligent systems in mixed-reality space through natural language or physical gestures, and thereby control interaction and robot behavior effectively and naturally.
Mobile robots are usually operated through a handle (joystick) or a program, and operators need specialized interaction training. Handle control requires continuous operation and has a small working range, while program control depends heavily on program robustness and offers poor flexibility. Gesture control and voice control also exist in some factories; however, because gestures or voice commands correspond one-to-one with robot motions, discontinuous control leads to precision loss while the robot moves, and the interaction range is very limited. Addressing these problems of mobile robot interaction control, the invention provides a natural and efficient mixed-reality-based mobile robot interaction and positioning method, through which an operator can complete high-precision spatial positioning of the robot with instinctive point-and-click interaction and thereby control robot movement.
Disclosure of Invention
The invention aims to provide a mixed-reality-based mobile robot human-machine interaction and positioning method that addresses the problems of traditional mobile robot interaction control and the demand for natural interaction in intelligent human-machine systems. Based on mixed reality, the method realizes a remote hand-interaction mode in which the user interacts directly with real objects and the environment from the user's own viewpoint. On this basis, it provides a solution for obtaining the position of a space point clicked by the user, obtains the category and position of the mobile robot by combining the RGB video stream with the depth point cloud, realizes spatial positioning of the mobile robot in both the mixed-reality space and the robot coordinate system, and finally sends a movement instruction so that the robot moves to the target point.
In order to achieve the above purpose, the invention adopts the following technical scheme: a mobile robot man-machine interaction and positioning method based on mixed reality comprises the following steps:
step S1: establishing a target recognition model from sample images of the mobile robot and performing real-time recognition of the robot type with the model; in this scheme the PV (photo/video) camera of the mixed reality device HoloLens 2 captures a video stream, the target recognition model deployed on a computer performs real-time recognition, and the recognition result comprises the type of the robot and its image coordinates in each frame;
step S2: acquiring a coordinate conversion relation between a mixed reality space and a mobile robot based on the depth point cloud, and realizing the positioning of the robot in the mixed reality space; the specific process is as follows:
(1) obtaining the intrinsic and extrinsic matrices of the depth camera of the mixed reality device HoloLens 2, obtaining a continuous stream of depth frames, and mapping the points of the depth camera's image coordinate system one-to-one onto the depth data in the depth-frame buffer to obtain a depth point cloud of the environment;
(2) obtaining the intrinsic and extrinsic matrices of the PV camera of the mixed reality device HoloLens 2, converting between the depth-camera and PV-camera coordinate systems to complete the mapping between their image coordinates, obtaining the depth value of the robot, and converting that depth value by a distance formula into the robot's mixed-reality space coordinates;
(3) performing an "initialization" process: the mobile robot's current heading is aligned with one coordinate axis of the robot coordinate system and the robot is moved forward a fixed distance; the mixed-reality space coordinates of the robot's two positions are obtained, while the same two positions in the robot coordinate system are received from the robot via ROS node communication; the conversion between the mixed-reality space coordinate system and the robot coordinate system is then computed from the axis direction, the moved distance, and the mixed-reality coordinates of the two positions;
step S3: selecting the mobile robot by a spatial click through remote hand interaction, and setting the robot's movement target point by a spatial click; the specific process is as follows:
(1) according to the target recognition result, combined with the robot's mixed-reality space coordinates from step S2, a minimal hexahedral bounding box for the robot is placed in the mixed-reality scene; the bounding box is invisible to the user but can be interacted with;
(2) the user points at and clicks the mobile robot through remote hand interaction; the virtual hand ray collides with the robot's bounding box, determining the selected robot individual and completing the selection operation;
(3) the user points at and clicks a movement target point in the real scene through remote hand interaction, the virtual hand ray indicating the exact position being clicked, and the mixed reality device HoloLens 2 marks that point in the mixed-reality scene as the mobile robot's movement target;
step S4: scanning the environment with the mixed reality device to obtain a surface model of the space, and obtaining the mixed-reality space coordinates of the movement target point from the virtual hand ray of remote hand interaction; the surface model is a mesh of triangular patches obtained by scanning the real environment with the mixed reality device HoloLens 2, and the target point's mixed-reality space coordinates are taken from the intersection of the virtual hand ray with this surface model;
step S5: obtaining a target point and a robot coordinate under a robot coordinate system by utilizing a coordinate conversion relation, and realizing space positioning under the robot coordinate system;
step S6: generating a movement instruction from the coordinates of the robot and the target point in the robot coordinate system to guide the mobile robot to the target; the instruction comprises the robot's current position, the target-point position, and the command information "forward" and "stop", all in the robot coordinate system.
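The movement instruction of step S6 can be sketched as a small serializable message. The patent only states that it carries the robot position, the target-point position, and "forward"/"stop" commands in the robot frame; the field names and JSON encoding below are illustrative assumptions, not the patented wire format.

```python
import json

def build_move_instruction(robot_xy, target_xy, command):
    """Serialize a move instruction (robot-frame coordinates).

    `command` follows the patent's two command words, "forward" and "stop".
    """
    if command not in ("forward", "stop"):
        raise ValueError("command must be 'forward' or 'stop'")
    return json.dumps({
        "robot_pose": {"x": robot_xy[0], "y": robot_xy[1]},
        "target": {"x": target_xy[0], "y": target_xy[1]},
        "command": command,
    })

def parse_move_instruction(msg):
    """Decode a move instruction back into a dict (receiver side)."""
    return json.loads(msg)
```

In the described system such a message would be produced on the HoloLens 2 side and relayed through the computer to the robot's ROS node.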
Further, step S2 includes a process of calculating a depth value of the robot using a neighborhood averaging method:
(1) in the depth point cloud, a neighborhood circle is determined, centered on the point corresponding to the robot's depth-image coordinates with a fixed radius r, and points with invalid depth values inside the circle are removed;
(2) the depth values of the center point and the neighborhood points are compared, and the average is taken over the center point and those points whose depth differs from the center's by less than a set threshold; averaging reduces occasional errors in the depth values, while the threshold comparison prevents large errors in the average caused by possible corner points.
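The neighborhood-averaging steps above can be sketched as follows. The threshold value and the convention that invalid depths are zero/None are assumptions for illustration; the patent fixes neither.

```python
def neighborhood_depth(center_depth, neighbor_depths, threshold=0.05):
    """Estimate the robot's depth by neighborhood averaging (step S2).

    Invalid depths (0/None) inside the neighborhood circle are discarded;
    only neighbors whose depth differs from the center by less than
    `threshold` contribute, which suppresses occasional sensor noise while
    keeping corner points from skewing the average.
    """
    valid = [d for d in neighbor_depths if d]          # drop invalid points
    kept = [d for d in valid if abs(d - center_depth) < threshold]
    kept.append(center_depth)                          # center always counts
    return sum(kept) / len(kept)
```

A far-off neighbor (e.g. a corner behind the robot) is simply excluded rather than averaged in.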
Further, the "initialization" process in step S2 includes a feature-point matching algorithm, used to eliminate errors caused by drift of the observed center point before and after the robot's movement; it ensures that the target-recognition observation points at the two positions correspond to the same physical point on the robot, improving the accuracy of the coordinate-system conversion.
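The coordinate-system conversion recovered by the "initialization" move can be sketched in 2D. This is a minimal sketch assuming both frames share the ground plane and the robot moves in a straight line; the patent does not spell out the arithmetic, so the formulation below (heading difference plus translation) is an illustrative reconstruction.

```python
import math

def init_transform(p0, p1, q0, q1):
    """From the robot's mixed-reality coordinates before/after a straight
    forward move (p0, p1) and the same two positions in the robot frame
    (q0, q1), recover the planar rotation theta and translation t that map
    mixed-reality coordinates into robot coordinates."""
    a_mr = math.atan2(p1[1] - p0[1], p1[0] - p0[0])   # heading in MR frame
    a_rb = math.atan2(q1[1] - q0[1], q1[0] - q0[0])   # heading in robot frame
    theta = a_rb - a_mr
    c, s = math.cos(theta), math.sin(theta)
    # t = q0 - R * p0
    t = (q0[0] - (c * p0[0] - s * p0[1]),
         q0[1] - (s * p0[0] + c * p0[1]))
    return theta, t

def mr_to_robot(p, theta, t):
    """Apply the recovered transform to any mixed-reality point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

Once theta and t are known, every clicked target point can be converted into the robot frame (step S5) without re-running the initialization.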
Further, the specific process of constructing the mesh grid based on the triangular patch model in step S4 is as follows:
(1) a fine environment mesh is scanned in advance, and fixed environmental objects commonly used for placing target points, such as floors and ceilings, are marked;
(2) during operation the system scans the environment in real time and increases the triangular-patch density of the mesh only for non-fixed objects; combined with the fine pre-scanned description of the fixed objects, this achieves high-precision dynamic mesh construction with modest computing resources and thereby improves the positioning precision of the movement target point.
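The lookup this mesh supports, reading the target point from the intersection of the virtual hand ray with a triangular patch (step S4), is in effect a ray/triangle test. The sketch below uses the standard Möller–Trumbore algorithm; on the device this query is provided by HoloLens 2 spatial mapping rather than implemented by hand, so this is an illustration of the geometry only.

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the hit point of a ray with one triangular patch, or None."""
    sub = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    cross = lambda a, b: (a[1]*b[2]-a[2]*b[1],
                          a[2]*b[0]-a[0]*b[2],
                          a[0]*b[1]-a[1]*b[0])
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:              # ray parallel to the patch
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    if t <= eps:                  # intersection behind the ray origin
        return None
    return (origin[0] + t * direction[0],
            origin[1] + t * direction[1],
            origin[2] + t * direction[2])
```

The finer the patch density around the clicked region, the closer the returned intersection is to the true surface point, which is why the embodiment densifies the mesh for non-fixed objects.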
The invention further provides a mixed-reality-based mobile robot human-machine interaction and positioning device, comprising the mixed reality device HoloLens 2, a computer, and a mobile robot.
The mixed reality device HoloLens 2 performs remote hand interaction between the user and the environment and between the user and the mobile robot; it captures the real-scene video stream and the depth-frame stream, obtains the mixed-reality space coordinates of the movement target point and of the mobile robot, converts between the mixed-reality space coordinate system and the robot coordinate system, and encapsulates and sends movement instructions.
The computer receives the real-scene video stream, performs target recognition, returns the recognition result to the mixed reality device HoloLens 2, and receives movement instructions and forwards them to the mobile robot.
The mobile robot has autonomous navigation and positioning capability, communication capability, and autonomous movement capability; it builds and localizes within an environment map in the robot coordinate system, reports its position in that frame, receives movement instructions, and plans a path and moves according to the target position in the instruction.
As described above, the beneficial effects of the invention are as follows:
(1) Interactive operations are completed through pointing and clicking gestures of the hands, so the interaction between user and robot follows natural human behavior and is effective; touch-free remote interaction with distant objects improves the convenience of the interactive process; and because the robot's bounding box and the real-space surface model are transparent to the user, the user interacts directly with the robot and the environment from the user's own viewpoint, which strengthens the user's visual sense of the robot during interaction and improves engagement and enjoyment.
(2) Acquiring mixed-reality space coordinates through remote hand interaction strengthens the connection between the mixed-reality world and the real world: with simple hand motions, and without building a map, the user can quickly mark and locate any point in space, which greatly improves the efficiency of setting and locating target points and makes task allocation and movement control of the mobile robot more convenient.
Drawings
Fig. 1 is a flow chart of a method for mobile robot interaction and positioning based on mixed reality of the invention.
Fig. 2 is a flow chart of an implementation of mobile robot interaction and localization based on mixed reality of the present invention.
Fig. 3 is a schematic flow chart of creating a target recognition model by using a mobile robot sample image and performing real-time robot species recognition in an embodiment of the present invention.
Fig. 4 is a flowchart of an implementation of acquiring a coordinate conversion relationship between a mixed reality space and a mobile robot and a mixed reality space coordinate of the robot in an embodiment of the invention.
Fig. 5 is a schematic diagram of an implementation of mixed reality space localization of a mobile robot in an embodiment of the invention.
Fig. 6 is a schematic diagram of an implementation of an "initialization" process for obtaining a relative coordinate relationship of a mixed reality space coordinate system and a robot coordinate system in an embodiment of the invention.
FIG. 7 is a flowchart of an implementation of a remote interactive hand-related operation in an embodiment of the present invention.
Fig. 8 is an operation schematic diagram of operations such as selecting, clicking and the like performed by a user through remote interaction of hands in the embodiment of the present invention.
Description of the embodiments
The technical solution of the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
This embodiment provides a mixed-reality-based mobile robot interaction and positioning method; referring to the method flowchart in fig. 1 and the implementation flowchart in fig. 2, it includes the following steps:
S1: a target recognition model is established from sample images of the mobile robot, and real-time robot type recognition is performed with the model. The specific process is as follows:
S1.1: sample images of the mobile robot in various working environments are used to train a YOLOv5 neural network, producing the target recognition model.
The mobile robot of this embodiment is an LKT2000 intelligent tracked unmanned vehicle (made in Shandong); its chassis is fitted with a computing platform, a lidar, a high-definition camera, and a mobile power supply, giving it autonomous movement, communication, and computing capability. Fig. 3 shows the flow of creating the target recognition model from mobile robot sample images and performing real-time robot type recognition. Limited by the computing power and resources of the mixed reality device HoloLens 2, a distributed scheme is adopted in which the YOLOv5 recognition model runs on a PC. Specifically, the mobile robot is photographed from multiple angles in an indoor environment and a building corridor to obtain sample images; the images are annotated with the LabelImg tool; the labeled sample set is divided into training, validation, and test sets at a 7:2:1 ratio; and the divided sample set is fed, in PyCharm, into a YOLO neural network with YOLOv5s as the base model to train the mobile robot's target recognition model.
Two considerations drove the recognition design on the mixed reality device HoloLens 2. On the one hand, experiments show that marker-based recognition requires a 25 cm x 25 cm QR code at a recognition distance of 0.5 m, and the required QR-code size grows rapidly with distance, so marker-based recognition is unsuitable for remote hand interaction with effective distances of several metres. On the other hand, YOLOv5 balances recognition speed and accuracy, meets the real-time requirement of robot recognition, and can accurately detect and classify robots in a typical indoor working environment.
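The 7:2:1 dataset split described above can be sketched deterministically. The file-name pattern and seed below are placeholders for illustration, not from the patent.

```python
import random

def split_dataset(samples, ratios=(7, 2, 1), seed=0):
    """Shuffle labeled sample paths and split them into train/val/test
    sets at the embodiment's 7:2:1 ratio; deterministic for a fixed seed."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

The resulting train/val/test lists would then be written into the YOLOv5 dataset configuration before training.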
S1.2: the PV camera of the mixed reality device HoloLens 2 captures the scene as a real-time video stream, and the target recognition model on the computer performs target recognition on the stream, obtaining the type of the robot and its image coordinates in each frame.
At run time, the PV camera of the HoloLens 2 continuously films the surroundings to produce a real-time video stream. The PC obtains the stream's URL from the Windows Device Portal, and PyCharm feeds the stream over the network into the target recognition model, which recognizes each frame in real time to obtain the robot's type, confidence, and image coordinates; the result is fed back to the HoloLens 2 through a message queue implemented with Redis.
The result feedback through the Redis message queue proceeds as follows: a Redis server runs on the PC; each time the target recognition model produces a recognition result, it is converted into a JSON string, which the PC publishes on a designated message-queue channel; the mixed reality device HoloLens 2 subscribes to that channel and obtains the JSON string through a callback, completing the recognition-result feedback for the mobile robot.
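The JSON payload exchanged over that channel can be sketched as below. The field names and channel name are illustrative assumptions; publishing itself would use a Redis client (e.g. redis-py's `r.publish("detections", msg)`), which is kept to a comment here so the sketch runs without a server.

```python
import json

def encode_detection(label, confidence, box):
    """Encode one recognition result (type, confidence, image-space box)
    as the JSON string published on the Redis channel.
    `box` is (x, y, w, h) in pixels; field names are assumptions."""
    return json.dumps({
        "label": label,
        "confidence": round(confidence, 4),
        "box": {"x": box[0], "y": box[1], "w": box[2], "h": box[3]},
    })

def decode_detection(msg):
    """Callback side (HoloLens 2 subscriber): parse the JSON string."""
    return json.loads(msg)

# On the PC, each result would then be published, e.g.:
#   redis.Redis().publish("detections", encode_detection(...))
```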
S2, acquiring a coordinate conversion relation between a mixed reality space and a mobile robot based on a depth point cloud, and realizing the positioning of the robot in the mixed reality space, wherein the specific process is as follows:
s2.1, acquiring an internal reference matrix and an external reference matrix of a holonens 2 depth camera of the mixed reality equipment, acquiring a continuous depth frame stream, and mapping the points of an image coordinate system of the depth camera and the depth data of a depth frame data buffer zone one by one to obtain a depth point cloud of the environment.
In this embodiment, the depth camera of the mixed reality device HoloLens 2 has two working modes: the AHAT (articulated hand tracking) mode, used for gesture recognition and tracking, and the Long Throw mode, used for long-range spatial-mapping perception with a frame rate of 1-5 FPS and a recognition range of several metres. The Long Throw mode is chosen for acquiring the scene depth point cloud.
In this mode the depth-frame resolution is 320 x 288. Each depth-frame pixel is processed with two conversion functions to obtain the mapping between image-coordinate points and camera-coordinate points; an approximate depth-camera intrinsic matrix is then derived by calculation and treated as the depth camera's actual intrinsic matrix.
The mapping between depth-camera image coordinates and the depth-frame data buffer relies on a particular feature of the Long Throw mode. A Long Throw depth frame has a depth buffer, a sigma buffer for invalidating depth pixels, and an active brightness (Ab) buffer. Each sigma byte embeds an invalidation code and confidence: when its most significant bit (MSB) is 1, the remaining 7 bits give the invalidation cause; common causes include infrared signal saturation, detected multipath interference, and exceeding the maximum supported range. Concretely, the image coordinates (u, v) of a point are converted into the buffer index of its depth datum, and an AND operation is performed between that point's sigma byte and the invalidation mask 0x80: if the result is nonzero, the point's data are invalid and the corresponding depth-point-cloud point is discarded; if it is zero, the point is valid.
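The sigma-buffer validity test can be sketched as follows. The row-major index p = v * 320 + u is an assumed layout consistent with the 320 x 288 Long Throw frame; the mask 0x80 and the MSB convention come from the description above.

```python
DEPTH_WIDTH, DEPTH_HEIGHT = 320, 288  # Long Throw depth-frame resolution

def depth_valid(sigma, u, v):
    """Check a Long Throw depth pixel against the sigma buffer.

    The pixel at image coordinates (u, v) maps to buffer index
    p = v * 320 + u (row-major, an assumption); the pixel is invalid when
    the most significant bit of its sigma byte is set (mask 0x80), in
    which case the low 7 bits encode the invalidation cause.
    """
    p = v * DEPTH_WIDTH + u
    return (sigma[p] & 0x80) == 0
```

Points failing this test are dropped before the depth point cloud is assembled, which also feeds the invalid-point removal of the neighborhood-averaging step.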
S2.2, acquiring the intrinsic and extrinsic matrices of the HoloLens 2 PV camera of the mixed reality device, converting between the depth camera and PV camera coordinate systems, completing the mapping between depth camera image coordinates and PV camera image coordinates, obtaining the depth value of the robot, and converting the depth value according to the distance formula to obtain the mixed reality space coordinates of the robot.
This embodiment realizes the positioning of the mobile robot centre point in the mixed reality space. Referring to the implementation flowchart in fig. 4 for acquiring the coordinate conversion relationship between the mixed reality space and the mobile robot together with the mixed reality space coordinates of the robot, and the implementation schematic in fig. 5 for positioning the mobile robot in the mixed reality space, the specific process is as follows:
Firstly, the depth camera and PV camera coordinate systems are converted under the D3D architecture. Taking the conversion from the PV camera coordinate system to the depth camera coordinate system as an example: under the D3D architecture, the PV camera image coordinate system is converted to the camera coordinate system, then to the Unity3D world coordinate system (the coordinate values in this system are recorded for subsequent use), then to the RigNode coordinate system, which is the camera reference coordinate system of HoloLens 2, and finally to the camera coordinate system of the depth camera; the coordinate values are rounded down at this point, and the corresponding point in the depth camera image coordinate system is obtained through the MapCameraSpaceToImagePoint function. There are two approaches to converting the PV camera coordinate system to the Unity3D world coordinate system: first, converting via the extrinsic matrix through the RigNode coordinate system; second, converting directly to the Unity coordinate system through the TryGetTransformTo function. The coordinate values in the depth camera coordinate system are rounded down because the PV camera resolution is inconsistent with the depth camera resolution, so pixels cannot correspond one to one. The process of converting points of the depth camera coordinate system into the PV camera coordinate system is similar and is not detailed here.
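The chain of conversions above can be illustrated with homogeneous transforms. The matrices and the unit-depth back-projection below are simplifying assumptions for illustration, not the HoloLens API; the flooring step mirrors the resolution-mismatch handling described in the text:

```python
import numpy as np

def pv_pixel_to_depth_pixel(uv, K_pv, pv_to_world, world_to_depth, K_depth):
    """Map a PV-camera pixel to a depth-camera pixel via camera -> world ->
    depth-camera transforms, flooring the result because the two sensors
    have different resolutions."""
    u, v = uv
    # Back-project the PV pixel to a point on the unit-depth plane in PV
    # camera coordinates (depth along the ray is normalized to 1 here).
    ray = np.linalg.inv(K_pv) @ np.array([u, v, 1.0])
    p_world = pv_to_world @ np.append(ray, 1.0)   # 4x4 homogeneous transform
    p_depth = world_to_depth @ p_world            # into depth camera frame
    proj = K_depth @ p_depth[:3]                  # project with depth intrinsics
    return int(np.floor(proj[0] / proj[2])), int(np.floor(proj[1] / proj[2]))
```

With identity intrinsics and transforms the mapping is, as expected, the identity; in practice the two extrinsic matrices come from the device's RigNode chain.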
Depth values of the mobile robot are then calculated with a neighborhood averaging method. Through the camera coordinate conversion, the image coordinates of the mobile robot in the target recognition result are converted into depth-camera image coordinates, giving the depth value corresponding to the centre point of the robot image. To ensure the validity and accuracy of this depth value, it is computed by neighborhood averaging: in the depth point cloud, a neighborhood circle is taken with the point corresponding to the robot image centre as its centre and a fixed value r as its radius; invalid depth points inside the circle are first removed, and then, taking the centre point's depth as reference, the average depth is computed over all points in the neighborhood (including the centre) whose depth values are close to it. Keeping only points with depths close to the centre point's reduces the gross errors that arise when neighborhood points, or the centre point itself, fall on object corners, while taking the neighborhood average as the centre point's depth suppresses accidental errors. Furthermore, since the depth-frame resolution is only 320×288, each pixel maps to a small area in space, so the neighborhood radius r should not be too large; this embodiment takes r = 1, i.e. the depth values of the 8 pixels around the centre point are considered in the neighborhood average.
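The neighborhood averaging step might look like the following sketch; the function name and the closeness threshold `tol` are assumptions for illustration:

```python
import numpy as np

def neighborhood_depth(depth, valid, cu, cv, r=1, tol=0.1):
    """Depth of the robot's image centre point, averaged over its
    r-neighborhood. Invalid pixels are dropped, then only pixels whose
    depth is within `tol` of the centre depth are kept, suppressing
    edge/corner outliers."""
    d0 = depth[cv, cu]
    vals = []
    for dv in range(-r, r + 1):
        for du in range(-r, r + 1):
            v, u = cv + dv, cu + du
            if 0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]:
                if valid[v, u] and abs(depth[v, u] - d0) <= tol:
                    vals.append(depth[v, u])
    return float(np.mean(vals))
```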
Finally, the mixed reality space coordinates of the mobile robot are obtained from the distance formula. The depth value obtained for the centre point of the robot image is not an axial distance: it represents the spatial distance D from the depth camera to the point. The Z-axis coordinate value of the robot is therefore obtained from the distance formula and scale transformation as

Z = √(D² − X² − Y²)

where (X, Y) are the coordinates of the mobile robot on the x and y axes of the mixed reality space.
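Because the Long Throw depth value is a radial rather than axial distance, the Z recovery can be checked numerically; this helper is illustrative only:

```python
import math

def z_from_radial_depth(D, X, Y):
    """The depth value D is the spatial (radial) distance from the depth
    camera to the point, not the axial distance; recover the Z coordinate
    from the in-plane coordinates X and Y."""
    return math.sqrt(D * D - X * X - Y * Y)
```

For example, a point observed at radial distance 3 m with in-plane offsets of 1 m and 2 m lies 2 m along the optical axis.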
S2.3, performing the "initialization" process of the method: the current advancing direction of the mobile robot is adjusted to the direction of one coordinate axis of the robot coordinate system, and the robot is made to move forward by a fixed distance; the mixed reality space coordinates of the robot's two positions are obtained, and at the same time the robot's two positions in its own coordinate system are obtained from the robot through ROS node communication; the conversion relationship between the mixed reality space coordinate system and the robot coordinate system is then calculated by combining the coordinate axis direction of the robot coordinate system, the moving distance, and the mixed reality space coordinates of the two positions.
This embodiment realizes the robot-to-mixed-reality-device spatial coordinate system conversion from the position and pose information of the mobile robot. The robot coordinate system and the mixed reality coordinate system have the following points in common: (1) both use metres as the scale unit; (2) for the mobile robots considered by this method, whose coordinate system is established at start-up, the front of the robot points along the forward or reverse direction of one coordinate axis, so a translation matrix T and a rotation matrix R can be obtained through the "initialization" process. Specifically, referring to fig. 6, which shows the initialization process for obtaining the relative coordinate relationship between the mixed reality space coordinate system and the robot coordinate system: at the initial position P0 of the mobile robot, the robot coordinates in the robot coordinate system are obtained from the robot through ROS communication, and combined with step S2.2 the coordinate values of the robot in both spatial coordinate systems are obtained; a movement command is then sent through ROS communication to move the robot 0.5 m forward to point P1, where target recognition, HoloLens 2 coordinate system conversion and ROS communication again yield the robot's coordinate values in both spatial coordinate systems. Finally, combining the obtained two-point coordinate values of the robot in the two coordinate systems, the shared direction of the robot Z axis and the mixed reality coordinate system Y axis, the moving distance of the robot, and the change of angle between the observation line and the robot coordinate system X axis, the conversion relationship between the mixed reality space coordinate system and the mobile robot coordinate system is obtained through mathematical calculation.
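Assuming planar (ground-plane) motion, the initialization estimate of the rotation and translation from one observed straight move could be sketched as follows; the function and its interface are hypothetical, not the patent's exact derivation:

```python
import math

def initialize_transform(p0_mr, p1_mr, p0_rob, p1_rob):
    """Estimate the 2D rotation angle and translation taking robot-frame
    ground coordinates into mixed-reality ground coordinates, from the two
    observations (before/after a fixed straight move) of the same point."""
    ax, ay = p1_rob[0] - p0_rob[0], p1_rob[1] - p0_rob[1]
    bx, by = p1_mr[0] - p0_mr[0], p1_mr[1] - p0_mr[1]
    theta = math.atan2(by, bx) - math.atan2(ay, ax)  # rotation robot -> MR
    c, s = math.cos(theta), math.sin(theta)
    # translation t such that R @ p0_rob + t == p0_mr
    tx = p0_mr[0] - (c * p0_rob[0] - s * p0_rob[1])
    ty = p0_mr[1] - (s * p0_rob[0] + c * p0_rob[1])
    return theta, (tx, ty)
```

A single point pair fixes the transform only because the move direction is known; the feature-matching refinement described next tightens this estimate.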
Specifically, for precision, feature point matching with mismatch elimination is applied to the two-space coordinate system conversion obtained by mathematical calculation. The "initialization" process involves two approximate operations: first, the image centre of the mobile robot is regarded as the robot's centroid; second, the image centres of the robot at the initial position P0 and the post-movement position P1 are regarded as the same point on the robot. The first approximation is inherent to the pure-vision approach: errors arise from differing perspective-projection observation conditions such as the distance between user and robot, the user's observation pose and lighting changes in the environment, and are difficult to eliminate with the monocular PV camera of HoloLens 2. The second approximation is a systematic error introduced by the process design, so systematic error cancellation is performed on it: ORB feature point matching is carried out on the frames captured at points P0 and P1, mismatches in the ORB results are eliminated with the RANSAC algorithm to obtain matching point pairs across the two frames, each pair representing the same point on the robot; the two spatial-coordinate observations of each pair are obtained, and since the distance between the two observations is expected to equal the robot's fixed moving distance of 0.5 m, an accurate spatial coordinate system conversion relationship is obtained through calculation.
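The mismatch-elimination idea can be illustrated with a simplified RANSAC over matched point pairs. Here the consensus model is a single common displacement between the two frames, a deliberately simplified stand-in for the ORB + RANSAC pipeline described above:

```python
import numpy as np

def ransac_displacement_inliers(pts0, pts1, thresh=0.05, iters=200, seed=0):
    """Keep matched point pairs consistent with one common displacement
    between the two frames; pairs far from the consensus shift are
    rejected as mismatches."""
    rng = np.random.default_rng(seed)
    disp = pts1 - pts0                        # per-pair displacement vectors
    best = np.zeros(len(disp), dtype=bool)
    for _ in range(iters):
        cand = disp[rng.integers(len(disp))]  # hypothesis from one random pair
        inliers = np.linalg.norm(disp - cand, axis=1) < thresh
        if inliers.sum() > best.sum():        # keep the largest consensus set
            best = inliers
    return best
```

In the patent's setting the expected consensus displacement corresponds to the fixed 0.5 m move, so pairs violating it are exactly the mismatches to discard.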
S3, selecting a mobile robot through remote interaction of hands and space clicking, and setting a robot mobile target point through space clicking, wherein the specific process is as follows:
S3.1, according to the target recognition result, combined with the coordinates of the robot in the mixed reality space obtained in step S2, a minimum hexahedral bounding box corresponding to the robot is placed in the mixed reality scene; the bounding box is invisible to the user but can be interacted with.
S3.2, the user points to and clicks the mobile robot through hand remote interaction; the virtual hand ray of the hand interaction collides with the robot's bounding box, determining the selected robot individual and realizing the selection operation of the mobile robot.
S3.3, the user points to and clicks a moving target point in the real scene through hand remote interaction; the virtual hand ray of the hand interaction indicates the specific position pointed at, and the mixed reality device HoloLens 2 marks that point as the moving target point of the mobile robot in the mixed reality scene.
Specifically, the specific process of pointing and clicking the robot and moving the target point through the remote interaction of the hands is as follows:
S3.3.1, raising the arm and spreading the palm so that the palm centre faces the real scene; the mixed reality device HoloLens 2 recognizes this gesture as a "pointing" gesture.
In this embodiment, as shown in the implementation flowchart of hand remote interaction operations in fig. 7, hand remote interaction is built on the MRTK API toolkit, specifically MRTK3. The ArticulatedHandController class recognizes the joint model of the user's hand, obtaining the interaction information carried by the user's gestures, actions and other hand input, and updating the hand's position and input in real time according to its state; the recognition results of HoloLens 2 for user gestures are obtained from the IPoseSource interface; MRTKRayInteractor implements the palm ray of the user's hand, which can interact with interactable objects in the scene and yields the contact-point position information during interaction. Writing and subscribing to interaction events through MRTKRayInteractor completes the design of hand remote interaction.
S3.3.2, the virtual hand ray is emitted from the palm centre and contacts the real-space surface model; while pointing, the ray is drawn as a dotted line and a hollow circular aperture is generated at the contact point to indicate the spatial position of the pointed point.
S3.3.3, pinching the thumb and index finger turns the virtual hand ray into a solid line, and the aperture at the ray's contact point with space shrinks into a solid circle; releasing the thumb and index finger restores the dotted ray and the hollow contact aperture, completing one click action on a spatial point, whether robot or moving target point.
This embodiment realizes the basic interaction functions based on hand remote interaction, mainly click selection of the robot and click setting of the moving target point. The mixed reality device HoloLens 2 supports basic interaction functions including pointing, clicking and dragging, and hand remote interaction inherits from the basic implementation classes of HoloLens 2 gesture recognition. Pointing means that when the user spreads the palm and directs the palm centre towards a target object, and the distance from the user's palm centre to the target object is greater than 50 cm, a virtual hand ray is emitted from the palm centre; the ray is a dotted line, and the contact point between the end of the ray and the target object is a hollow circular aperture. Clicking means that, in the pointing state, the user pinches the thumb and index finger, whereupon the palm ray becomes a solid line and the aperture becomes solid; the thumb and index finger are then released, and this action indicates that the user has selected the target object. The operation schematic of this embodiment is shown in fig. 8, where the user performs selection and clicking through hand remote interaction; from the user's point of view, only "select" and "click" operations are performed.
S4, obtaining a space surface model of the environment through the mixed reality equipment, and obtaining mixed reality space coordinates of a mobile target point by combining virtual hand light of hand remote interaction, wherein the specific process is as follows:
S4.1, the mixed reality device HoloLens 2 scans the real environment and builds a spatial surface model of the environment.
This embodiment uses the spatial mapping technology of the mixed reality device HoloLens 2 to obtain a surface model of the real environment. When HoloLens 2 starts, it scans the surrounding environment once and builds a triangular-patch structural model of the environment surface, yielding a mesh grid of the spatial surface; after that, each time the user clicks any point in the real environment, HoloLens 2 performs another scan and update of the spatial surface mesh grid. The frequency of HoloLens 2 space scanning is set to once every 30 s, so that HoloLens 2 automatically updates the spatial surface mesh grid to cope with a changing working environment. The spatial mapping management class ARMeshManager is added to set the triangular patch density of the spatial surface mesh grid and to make the mesh interactable.
Further, a pre-scanning step is added to improve the fineness of the spatial surface model. Increasing the triangular patch density of the spatial surface mesh grid improves its fineness, but the computation grows rapidly: naively raising the patch density of a mesh grid that must be constructed and updated in real time places enormous computational pressure on spatial rendering, affecting the normal operation of the whole system. Considering that the moving target point is usually set on the ground, and that the ground is a fixed object in the environment needing no regular scanning and updating (as are the other fixed objects in the environment), the environment is pre-scanned once in a high-density configuration and the fixed environmental objects are marked; the mesh grids of these objects then serve as the substrate for real-time scanning, so the device only needs to build high-precision meshes for non-fixed objects, greatly reducing the computational resource requirements of spatial surface model construction and updating while preserving the precision of the spatial surface mesh grid.
S4.2, acquiring the intersection data of the virtual hand ray and the spatial surface model, and obtaining the mixed reality space coordinates of the moving target point.
This embodiment obtains the mixed reality space coordinates of the moving target point based on hand remote interaction. A custom class HandRaycast is constructed, defining the palm ray of the user's right hand and a click event OnRayHit for the ray. When the palm ray performs a click action on an interactable object, the OnRayHit event obtains the SelectEnter event arguments from the spatial surface mesh and the MRTKRayInteractor ray class; these arguments include point and transform information of the interactor and the interacted object, from which the intersection data of the palm ray and the spatial surface mesh is obtained. Note that, in this embodiment, the intersection point of the user's hand ray with the spatial surface model is regarded as the moving target point in real space; the denser the mesh grid of the spatial surface model, the more accurate the obtained intersection coordinates and the smaller the error of this approximation.
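The intersection of a ray with one triangular patch of a spatial surface mesh is classically computed with the Möller-Trumbore test; this sketch is illustrative and independent of the MRTK implementation, which delivers the intersection through its event arguments:

```python
import numpy as np

def ray_triangle_intersect(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test: returns the intersection point of
    a ray with one triangular patch, or None if there is no hit."""
    orig, dirn = np.asarray(orig, float), np.asarray(dirn, float)
    v0 = np.asarray(v0, float)
    e1, e2 = np.asarray(v1, float) - v0, np.asarray(v2, float) - v0
    p = np.cross(dirn, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                    # ray parallel to the triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return None                    # outside the first barycentric bound
    q = np.cross(s, e1)
    v = dirn.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None                    # outside the triangle
    t = e2.dot(q) * inv
    return orig + t * dirn if t > eps else None
```

A denser mesh simply means smaller triangles, so the hit point approximates the true surface point more closely, matching the remark above.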
And S5, obtaining coordinates of the target point and the current position of the robot under the robot coordinate system by utilizing the coordinate conversion relation.
S6, generating a movement instruction according to the coordinates of the robot and the target point in the robot coordinate system, so as to guide the mobile robot to the moving target point.
In this embodiment, the generation and encapsulation of movement instructions are completed at the HoloLens 2 end of the mixed reality device. A movement instruction is divided into a state quantity, a behaviour command and its data quantity: the state quantity is the current coordinates of the mobile robot; the behaviour command comprises the three basic movement operations "forward", "backward" and "stop" of the mobile robot; and the data quantity is a quantitative description of the behaviour command. The data quantity of "forward" and "backward" specifies the displacement distance of the mobile robot in metres, with a default value of 0.1 m; the data quantity of "stop" specifies after how many seconds the mobile robot stops, with a default value of 0 s, i.e. stop immediately. HoloLens 2 converts the generated movement instruction into a JSON string and publishes the message to a designated channel through the Redis message queue; the PC end receives the message from the channel and sends it to the mobile robot through ROS node communication, so that the robot's current position and target point position are defined and the robot moves to the target point through its autonomous movement capability. Note that, since the movement instruction is received and processed directly by the mobile robot, the current coordinates and behaviour command in the instruction are based on the robot coordinate system and the robot's view angle.
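The packaging of a movement instruction as a JSON string could look like the following sketch; the field names and channel name are assumptions for illustration, not the patent's actual schema:

```python
import json

def build_move_command(robot_xy, action, amount=None):
    """Package a movement instruction as the JSON string published over the
    Redis channel: current robot coordinates (state quantity), a behaviour
    command ('forward' | 'backward' | 'stop'), and its data quantity."""
    defaults = {"forward": 0.1, "backward": 0.1, "stop": 0.0}
    if action not in defaults:
        raise ValueError(f"unknown behaviour command: {action}")
    return json.dumps({
        "state": {"x": robot_xy[0], "y": robot_xy[1]},
        "command": action,
        "amount": defaults[action] if amount is None else amount,
    })

# Publishing side (requires a running Redis server; channel name illustrative):
# import redis
# redis.Redis().publish("robot_move", build_move_command((0.0, 0.0), "forward", 0.5))
```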
In summary, the mixed-reality-based mobile robot man-machine interaction and positioning method realizes a natural and intuitive interaction mode: the user only needs to wear the mixed reality device HoloLens 2 and can command the mobile robot to move through an air-click action of the hand. The characteristics of YOLOv5, the Redis message queue and ORB feature matching make the system respond quickly, while the spatial surface model updated at a fixed frequency, together with feature point matching and mismatch elimination, makes the data more accurate. The user directly interacts with and controls the mobile robot through gesture interaction conforming to human operating habits, improving the interaction friendliness and control efficiency of controlling mobile robots. Compared with traditional modes, the method is markedly improved in degree of intelligence and naturalness of interaction; it is applicable to mobile robot operation in most indoor environments, industrial environments and small outdoor areas, and has high practicability.
The foregoing embodiments are merely illustrative of the principles and functions of the present invention, and any of the features disclosed in this specification may be replaced by alternative features serving the equivalent or similar purpose, unless expressly stated otherwise; all of the features disclosed, or all of the steps in a method or process, except for mutually exclusive features and/or steps, may be combined in any manner.
Claims (2)
1. A mobile robot interaction and positioning method based on mixed reality is characterized by comprising the following steps:
step S1: establishing a target recognition model using sample images of the mobile robot, and performing real-time robot type recognition with the target recognition model, wherein the recognition scheme is that the PV camera of the mixed reality device HoloLens 2 acquires the video stream, the target recognition model deployed at the computer end performs real-time recognition, and the recognition result comprises the type of the robot and the image coordinates of the robot in each frame of image;
step S2: acquiring a coordinate conversion relation between a mixed reality space and a mobile robot based on the depth point cloud, and realizing the positioning of the robot in the mixed reality space; the specific process is as follows:
step 2.1: obtaining the intrinsic and extrinsic matrices of the HoloLens 2 depth camera of the mixed reality device, obtaining a continuous depth frame stream, and mapping depth camera image coordinate points one by one to the depth data of the depth frame data buffer to obtain a depth point cloud of the environment;
step 2.2: obtaining the intrinsic and extrinsic matrices of the HoloLens 2 PV camera of the mixed reality device, converting between the depth camera and PV camera coordinate systems, completing the mapping between depth camera image coordinates and PV camera image coordinates, obtaining the depth value of the robot, and converting the depth value according to the distance formula to obtain the mixed reality space coordinates of the robot;
step 2.3: an "initialization" process is performed: the current advancing direction of the mobile robot is adjusted to be a certain coordinate axis direction of a robot coordinate system, so that the robot moves forwards by a fixed distance, mixed reality space coordinates of two positions of the robot are obtained, meanwhile, two positions of the robot in the robot coordinate system are obtained from the robot through ROS node communication, and the conversion relation between the mixed reality space coordinate system and the robot coordinate system is calculated by combining the coordinate axis direction of the robot coordinate system, the moving distance and the mixed reality space coordinates of the two positions;
step S3: selecting a mobile robot through remote interaction space clicking of a hand, and setting a robot mobile target point through space clicking; the specific process is as follows:
step 3.1: according to the target recognition result, combining the coordinates of the robot in the mixed reality space obtained in the step S2, and placing a minimum hexahedral bounding box corresponding to the robot in the mixed reality scene, wherein the bounding box is invisible to a user but can be interacted with;
step 3.2: the user points to and clicks the mobile robot through hand remote interaction, virtual hand rays interacted by hands collide with the bounding box of the robot, the selected robot individual is determined, and the selection operation of the mobile robot is realized;
step 3.3: the user points and clicks a mobile target point in a real scene through hand remote interaction, a specific position pointed by a virtual hand ray prompt of the hand interaction is clicked, and a mixed reality device HoloLens2 marks the point as a mobile target point of the mobile robot in the mixed reality scene;
step S4: scanning by using mixed reality equipment to obtain a space surface model of the environment, and acquiring mixed reality space coordinates of a moving target point by combining virtual hand light rays of hand remote interaction; the space surface model is a mesh grid based on a triangular patch model obtained by scanning a real environment by using a mixed reality device HoloLens2, and mixed reality space coordinates of a moving target point are obtained from intersection point data of virtual hand rays and the space surface model;
step S5: obtaining a target point and a robot coordinate under a robot coordinate system by utilizing a coordinate conversion relation, and realizing space positioning under the robot coordinate system;
step S6: generating a moving instruction according to the coordinates of the robot and the target point in the robot coordinate system so as to guide the moving robot to the moving target point; the moving instruction comprises the current position of the robot, the position of a moving target point and command information of 'forward' and 'stop' under a robot coordinate system;
the specific process of mesh grid construction based on triangular patch model in the step S4 is as follows:
step a: scanning a fine environment mesh grid in advance, and marking an environment object which is fixed in the environment mesh grid and is commonly used for setting up a target point;
step b: the system scans the environment in real time during operation, improves the density of triangular patches of the mesh grid of the object in the non-fixed environment, combines fine description of the fixed object by pre-scanning, and realizes high-precision dynamic construction of the mesh grid under lower computing power resources, thereby improving the positioning precision of the moving target point.
2. The mobile robot interaction and positioning method based on mixed reality as claimed in claim 1, wherein the depth value of the robot is calculated by adopting a neighborhood averaging method in the step S2, and the detailed method is as follows:
step a: in the depth point cloud, a neighborhood circle is determined with the point corresponding to the robot depth image coordinates as the centre and a fixed value r as the radius, and points with invalid depth values inside the circle are removed;
step b: comparing the depth values of the centre point and its neighborhood points, and calculating the average depth of the centre point together with those points whose depth differs from the centre point's depth by less than a set threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311287066.9A CN117021117B (en) | 2023-10-08 | 2023-10-08 | Mobile robot man-machine interaction and positioning method based on mixed reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117021117A CN117021117A (en) | 2023-11-10 |
CN117021117B true CN117021117B (en) | 2023-12-15 |
Citations (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102350700A (en) * | 2011-09-19 | 2012-02-15 | 华南理工大学 | Method for controlling robot based on visual sense |
CA2928262A1 (en) * | 2010-12-30 | 2012-07-05 | Irobot Corporation | Mobile robot system |
JP2014104527A (en) * | 2012-11-27 | 2014-06-09 | Seiko Epson Corp | Robot system, program, production system, and robot |
JP2014149640A (en) * | 2013-01-31 | 2014-08-21 | Tokai Rika Co Ltd | Gesture operation device and gesture operation program |
CN105739702A (en) * | 2016-01-29 | 2016-07-06 | 电子科技大学 | Multi-posture fingertip tracking method for natural man-machine interaction |
CN105843371A (en) * | 2015-01-13 | 2016-08-10 | 上海速盟信息技术有限公司 | Man-machine space interaction method and system |
CN106125932A (en) * | 2016-06-28 | 2016-11-16 | 广东欧珀移动通信有限公司 | The recognition methods of destination object, device and mobile terminal in a kind of augmented reality |
US9552056B1 (en) * | 2011-08-27 | 2017-01-24 | Fellow Robots, Inc. | Gesture enabled telepresence robot and system |
KR20170055687A (en) * | 2015-11-12 | 2017-05-22 | 한국과학기술연구원 | Apparatus and method for implementing motion of robot using knowledge model |
CN106767833A (en) * | 2017-01-22 | 2017-05-31 | 电子科技大学 | A kind of robot localization method of fusion RGBD depth transducers and encoder |
CN108303994A (en) * | 2018-02-12 | 2018-07-20 | 华南理工大学 | Team control exchange method towards unmanned plane |
EP3411195A1 (en) * | 2016-02-05 | 2018-12-12 | ABB Schweiz AG | Controlling an industrial robot using interactive commands |
CN109634300A (en) * | 2018-11-23 | 2019-04-16 | 中国运载火箭技术研究院 | Based on the multiple no-manned plane control system and method every empty-handed gesture and ultrasonic wave touch feedback |
CN110238857A (en) * | 2019-07-11 | 2019-09-17 | 深圳市三宝创新智能有限公司 | A kind of robot gesture control method and device |
CN110480635A (en) * | 2019-08-09 | 2019-11-22 | 中国人民解放军国防科技大学 | A kind of control method and control system for multirobot |
CN111179341A (en) * | 2019-12-09 | 2020-05-19 | 西安交通大学 | Registration method of augmented reality equipment and mobile robot |
CN111507246A (en) * | 2020-04-15 | 2020-08-07 | 上海幂方电子科技有限公司 | Method, device, system and storage medium for selecting marked object through gesture |
CN111805546A (en) * | 2020-07-20 | 2020-10-23 | 中国人民解放军国防科技大学 | Human-multi-robot sharing control method and system based on brain-computer interface |
CN111857345A (en) * | 2020-07-23 | 2020-10-30 | 上海纯米电子科技有限公司 | Gesture-based control method and device |
CN112088070A (en) * | 2017-07-25 | 2020-12-15 | M·奥利尼克 | System and method for operating a robotic system and performing robotic interactions |
CN112381953A (en) * | 2020-10-28 | 2021-02-19 | 华南理工大学 | Rapid selection method of three-dimensional space unmanned aerial vehicle cluster |
WO2021040214A1 (en) * | 2019-08-27 | 2021-03-04 | 주식회사 케이티 | Mobile robot and method for controlling same |
WO2021073733A1 (en) * | 2019-10-16 | 2021-04-22 | Supsi | Method for controlling a device by a human |
CN113043282A (en) * | 2019-12-12 | 2021-06-29 | 牧今科技 | Method and system for object detection or robot interactive planning |
CN113478485A (en) * | 2021-07-06 | 2021-10-08 | 上海商汤智能科技有限公司 | Robot, control method and device thereof, electronic device and storage medium |
CN114025700A (en) * | 2019-06-28 | 2022-02-08 | 奥瑞斯健康公司 | Console overlay and method of use |
KR20220018795A (en) * | 2020-08-07 | 2022-02-15 | 네이버랩스 주식회사 | Remote control method and system for robot |
WO2022040954A1 (en) * | 2020-08-26 | 2022-03-03 | 南京智导智能科技有限公司 | Ar spatial visual three-dimensional reconstruction method controlled by means of gestures |
CN114155288A (en) * | 2020-09-07 | 2022-03-08 | 南京智导智能科技有限公司 | AR space visual three-dimensional reconstruction method controlled through gestures |
CN114281190A (en) * | 2021-12-14 | 2022-04-05 | Oppo广东移动通信有限公司 | Information control method, device, system, equipment and storage medium |
CN114384848A (en) * | 2022-01-14 | 2022-04-22 | 北京市商汤科技开发有限公司 | Interaction method, interaction device, electronic equipment and storage medium |
CN114527669A (en) * | 2022-01-12 | 2022-05-24 | 深圳绿米联创科技有限公司 | Equipment control method and device and electronic equipment |
CN114594792A (en) * | 2015-09-15 | 2022-06-07 | 深圳市大疆创新科技有限公司 | Device and method for controlling a movable object |
CN115351782A (en) * | 2022-07-27 | 2022-11-18 | 江门市印星机器人有限公司 | Multi-robot control method and device based on edge calculation and storage medium |
WO2023016174A1 (en) * | 2021-08-12 | 2023-02-16 | 青岛小鸟看看科技有限公司 | Gesture operation method and apparatus, and device and medium |
KR20230100101A (en) * | 2021-12-28 | 2023-07-05 | 주식회사 케이티 | Robot control system and method for robot setting and robot control using the same |
CN116476074A (en) * | 2023-06-07 | 2023-07-25 | 广西大学 | Remote mechanical arm operation system based on mixed reality technology and man-machine interaction method |
CN116520991A (en) * | 2023-05-04 | 2023-08-01 | 西安建筑科技大学 | VR and eye movement based human-cluster robot interaction method and system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014138472A2 (en) * | 2013-03-06 | 2014-09-12 | Robotex Inc. | System and method for collecting and processing data and for utilizing robotic and/or human resources |
JP6141108B2 (en) * | 2013-06-07 | 2017-06-07 | キヤノン株式会社 | Information processing apparatus and method |
WO2014210502A1 (en) * | 2013-06-28 | 2014-12-31 | Chia Ming Chen | Controlling device operation according to hand gestures |
US9283674B2 (en) * | 2014-01-07 | 2016-03-15 | iRobot Corporation | Remotely operating a mobile robot |
US9643314B2 (en) * | 2015-03-04 | 2017-05-09 | The Johns Hopkins University | Robot control, training and collaboration in an immersive virtual reality environment |
KR20190106939A (en) * | 2019-08-30 | 2019-09-18 | 엘지전자 주식회사 | Augmented reality device and gesture recognition calibration method thereof |
EP4080311A1 (en) * | 2021-04-23 | 2022-10-26 | Carnegie Robotics, LLC | A method of operating one or more robots |
CN114417616A (en) * | 2022-01-20 | 2022-04-29 | 青岛理工大学 | Digital twin modeling method and system for assembly robot teleoperation environment |
- 2023-10-08 CN CN202311287066.9A patent/CN117021117B/en active Active
Patent Citations (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2928262A1 (en) * | 2010-12-30 | 2012-07-05 | iRobot Corporation | Mobile robot system |
US9552056B1 (en) * | 2011-08-27 | 2017-01-24 | Fellow Robots, Inc. | Gesture enabled telepresence robot and system |
CN102350700A (en) * | 2011-09-19 | 2012-02-15 | 华南理工大学 | Method for controlling robot based on visual sense |
JP2014104527A (en) * | 2012-11-27 | 2014-06-09 | Seiko Epson Corp | Robot system, program, production system, and robot |
JP2014149640A (en) * | 2013-01-31 | 2014-08-21 | Tokai Rika Co Ltd | Gesture operation device and gesture operation program |
CN105843371A (en) * | 2015-01-13 | 2016-08-10 | 上海速盟信息技术有限公司 | Man-machine space interaction method and system |
CN114594792A (en) * | 2015-09-15 | 2022-06-07 | 深圳市大疆创新科技有限公司 | Device and method for controlling a movable object |
KR20170055687A (en) * | 2015-11-12 | 2017-05-22 | 한국과학기술연구원 | Apparatus and method for implementing motion of robot using knowledge model |
CN105739702A (en) * | 2016-01-29 | 2016-07-06 | 电子科技大学 | Multi-posture fingertip tracking method for natural man-machine interaction |
EP3411195A1 (en) * | 2016-02-05 | 2018-12-12 | ABB Schweiz AG | Controlling an industrial robot using interactive commands |
CN106125932A (en) * | 2016-06-28 | 2016-11-16 | 广东欧珀移动通信有限公司 | The recognition methods of destination object, device and mobile terminal in a kind of augmented reality |
CN106767833A (en) * | 2017-01-22 | 2017-05-31 | 电子科技大学 | A kind of robot localization method of fusion RGBD depth transducers and encoder |
CN112088070A (en) * | 2017-07-25 | 2020-12-15 | M·奥利尼克 | System and method for operating a robotic system and performing robotic interactions |
CN108303994A (en) * | 2018-02-12 | 2018-07-20 | 华南理工大学 | Team control exchange method towards unmanned plane |
CN109634300A (en) * | 2018-11-23 | 2019-04-16 | 中国运载火箭技术研究院 | Multi-UAV control system and method based on mid-air gestures and ultrasonic haptic feedback |
CN114025700A (en) * | 2019-06-28 | 2022-02-08 | 奥瑞斯健康公司 | Console overlay and method of use |
CN110238857A (en) * | 2019-07-11 | 2019-09-17 | 深圳市三宝创新智能有限公司 | A kind of robot gesture control method and device |
CN110480635A (en) * | 2019-08-09 | 2019-11-22 | 中国人民解放军国防科技大学 | A kind of control method and control system for multirobot |
WO2021040214A1 (en) * | 2019-08-27 | 2021-03-04 | 주식회사 케이티 | Mobile robot and method for controlling same |
WO2021073733A1 (en) * | 2019-10-16 | 2021-04-22 | Supsi | Method for controlling a device by a human |
CN111179341A (en) * | 2019-12-09 | 2020-05-19 | 西安交通大学 | Registration method of augmented reality equipment and mobile robot |
CN113043282A (en) * | 2019-12-12 | 2021-06-29 | 牧今科技 | Method and system for object detection or robot interactive planning |
CN111507246A (en) * | 2020-04-15 | 2020-08-07 | 上海幂方电子科技有限公司 | Method, device, system and storage medium for selecting marked object through gesture |
CN111805546A (en) * | 2020-07-20 | 2020-10-23 | 中国人民解放军国防科技大学 | Human-multi-robot sharing control method and system based on brain-computer interface |
CN111857345A (en) * | 2020-07-23 | 2020-10-30 | 上海纯米电子科技有限公司 | Gesture-based control method and device |
KR20220018795A (en) * | 2020-08-07 | 2022-02-15 | 네이버랩스 주식회사 | Remote control method and system for robot |
WO2022040954A1 (en) * | 2020-08-26 | 2022-03-03 | 南京智导智能科技有限公司 | AR spatial visual three-dimensional reconstruction method controlled by means of gestures |
CN114155288A (en) * | 2020-09-07 | 2022-03-08 | 南京智导智能科技有限公司 | AR space visual three-dimensional reconstruction method controlled through gestures |
CN112381953A (en) * | 2020-10-28 | 2021-02-19 | 华南理工大学 | Rapid selection method of three-dimensional space unmanned aerial vehicle cluster |
CN113478485A (en) * | 2021-07-06 | 2021-10-08 | 上海商汤智能科技有限公司 | Robot, control method and device thereof, electronic device and storage medium |
WO2023016174A1 (en) * | 2021-08-12 | 2023-02-16 | 青岛小鸟看看科技有限公司 | Gesture operation method and apparatus, and device and medium |
CN114281190A (en) * | 2021-12-14 | 2022-04-05 | Oppo广东移动通信有限公司 | Information control method, device, system, equipment and storage medium |
KR20230100101A (en) * | 2021-12-28 | 2023-07-05 | 주식회사 케이티 | Robot control system and method for robot setting and robot control using the same |
CN114527669A (en) * | 2022-01-12 | 2022-05-24 | 深圳绿米联创科技有限公司 | Equipment control method and device and electronic equipment |
CN114384848A (en) * | 2022-01-14 | 2022-04-22 | 北京市商汤科技开发有限公司 | Interaction method, interaction device, electronic equipment and storage medium |
CN115351782A (en) * | 2022-07-27 | 2022-11-18 | 江门市印星机器人有限公司 | Multi-robot control method and device based on edge calculation and storage medium |
CN116520991A (en) * | 2023-05-04 | 2023-08-01 | 西安建筑科技大学 | VR and eye movement based human-cluster robot interaction method and system |
CN116476074A (en) * | 2023-06-07 | 2023-07-25 | 广西大学 | Remote mechanical arm operation system based on mixed reality technology and man-machine interaction method |
Non-Patent Citations (2)
Title |
---|
A survey of multimodal human-computer interaction; Tao Jianhua et al.; Journal of Image and Graphics; full text *
A survey of artificial intelligence algorithm applications and their safety in intelligent vehicles; Zhao Yang et al.; Journal of University of Electronic Science and Technology of China; full text *
Also Published As
Publication number | Publication date |
---|---|
CN117021117A (en) | 2023-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Segen et al. | Shadow gestures: 3D hand pose estimation using a single camera | |
TWI662388B (en) | Obstacle avoidance control system and method for a robot | |
KR101865655B1 (en) | Method and apparatus for providing service for augmented reality interaction | |
KR101876419B1 (en) | Apparatus for providing augmented reality based on projection mapping and method thereof | |
US6624833B1 (en) | Gesture-based input interface system with shadow detection | |
CN111694429A (en) | Virtual object driving method and device, electronic equipment and readable storage medium | |
Lee et al. | 3D natural hand interaction for AR applications | |
US8866740B2 (en) | System and method for gesture based control system | |
CN110362193A (en) | Target tracking method and system assisted by hand or eye tracking | |
KR20120014925A (en) | Method for the real-time-capable, computer-assisted analysis of an image sequence containing a variable pose | |
CN109116984B (en) | Tool box for three-dimensional interactive scene | |
CN111241940A (en) | Remote control method of robot and human body boundary frame determination method and system | |
Angelopoulos et al. | Drone brush: Mixed reality drone path planning | |
CN117021117B (en) | Mobile robot man-machine interaction and positioning method based on mixed reality | |
CN111784842B (en) | Three-dimensional reconstruction method, device, equipment and readable storage medium | |
WO2024066756A1 (en) | Interaction method and apparatus, and display device | |
CN111369571B (en) | Three-dimensional object pose accuracy judging method and device and electronic equipment | |
CN108363494A (en) | A kind of mouse input system based on virtual reality system | |
CN116631262A (en) | Man-machine collaborative training system based on virtual reality and touch feedback device | |
CN116301321A (en) | Control method of intelligent wearable device and related device | |
Leubner et al. | Computer-vision-based human-computer interaction with a back projection wall using arm gestures | |
CN115619990A (en) | Three-dimensional situation display method and system based on virtual reality technology | |
CN114373016A (en) | Method for positioning implementation point in augmented reality technical scene | |
CN107930121B (en) | Method and device for controlling game target to move according to input device type | |
CN112454363A (en) | Control method of AR auxiliary robot for welding operation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||