CN116277005A - Multi-machine teleoperation display control method, device and equipment - Google Patents

Multi-machine teleoperation display control method, device and equipment

Info

Publication number
CN116277005A
Authority
CN
China
Prior art keywords
robot
picture
cooperative
cooperative robot
working
Prior art date
Legal status
Pending
Application number
CN202310302127.8A
Other languages
Chinese (zh)
Inventor
潘幸
黄世华
何宇星
吴安锦
石炜烨
许晋诚
Current Assignee
Passini Perception Technology Shenzhen Co ltd
Original Assignee
Passini Perception Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Passini Perception Technology Shenzhen Co ltd
Priority to CN202310302127.8A
Publication of CN116277005A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1679: Programme controls characterised by the tasks executed
    • B25J 9/1689: Teleoperation
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the present application belong to the technical field of teleoperation and relate to a multi-machine teleoperation display control method, where the multiple machines include a remotely controlled robot and a cooperative robot external to the remotely controlled robot. The method comprises the following steps: acquiring a work screen of the remotely controlled robot, the work screen including the cooperative robot and/or a status indication of the cooperative robot; and sending the work screen to a display for display. The application also provides a teleoperation display control device, a computer device, and a storage medium. With this technical scheme, an operator can be better assisted in completing multi-machine-assisted teleoperation tasks during multi-machine cooperation.

Description

Multi-machine teleoperation display control method, device and equipment
Technical Field
The present application relates to the field of teleoperation technologies, and in particular, to a method, an apparatus, and a device for controlling a multi-machine teleoperation display.
Background
With the development of technology, robots are applied in ever wider fields. In some complex and dangerous environments, robots are required to have higher flexibility and a more humanoid working capacity, and teleoperated robots have been developed for this purpose. In a teleoperated robot system, posture sensors such as inertial sensors (Inertial Measurement Units, IMUs) are arranged on an operator; as the operator performs the target task in another real or virtual scene, the IMUs capture the operator's movements, collect the corresponding motion data, and send it to a controller, which generates corresponding motion control instructions and the like from the motion data to control the slave robot, thereby achieving teleoperation.
However, for teleoperation scenarios in which multiple robots on the slave side cooperatively complete a target task, a relatively effective cooperative control or auxiliary control method is currently often lacking.
Disclosure of Invention
The embodiments of the present application aim to provide a multi-machine teleoperation display control method, device, and equipment, so as to better assist an operator in completing multi-machine-assisted teleoperation tasks during multi-machine cooperation.
In a first aspect, an embodiment of the present application provides a method for controlling a multi-machine teleoperation display, which adopts the following technical scheme:
A multi-machine teleoperation display control method, the multiple machines including a remotely controlled robot and a cooperative robot external to the remotely controlled robot, the method comprising:
acquiring a work screen of the remotely controlled robot, the work screen including the cooperative robot and/or a status indication of the cooperative robot; and
sending the work screen to a display for display.
Further, in an embodiment, the work screen further includes the remotely controlled robot; or
before the work screen is sent to the display for display, the method further includes the following steps:
acquiring status information of the remotely controlled robot; and
generating, based on the status information and the coordinate transformation relation between an image sensor and the remotely controlled robot, a status indication of the remotely controlled robot in the work screen.
Further, in an embodiment, when the cooperative robot is in motion, before the work screen of the remotely controlled robot is acquired, the method further includes the following steps:
acquiring an initial work screen of the remotely controlled robot;
determining whether the cooperative robot is present in the initial work screen;
if the cooperative robot is not present in the initial work screen, acquiring status information of the cooperative robot, and generating a status indication of the cooperative robot in the initial work screen based on the status information, to obtain the work screen including the status indication of the cooperative robot; and
if the cooperative robot is present in the initial work screen, taking the initial work screen as the work screen including the cooperative robot.
Further, in an embodiment, determining whether the cooperative robot is present in the initial work screen includes the following steps:
solving the pose of the cooperative robot;
acquiring the field of view of the image sensor;
determining whether the pose of the cooperative robot lies outside the field of view; and
if it lies outside the field of view, considering that no cooperative robot is present in the initial work screen; or
determining whether a cooperative robot is recognized in the initial work screen; and
if no cooperative robot is recognized, considering that no cooperative robot is present in the initial work screen.
Further, in an embodiment, acquiring the status information of the cooperative robot includes the following steps:
acquiring joint motion information of the cooperative robot; solving the pose of the end of the cooperative robot based on the joint motion information; and taking the pose of the end of the cooperative robot as the status information of the cooperative robot; or
acquiring the status information of the cooperative robot collected and sent by a position sensor.
Further, in an embodiment, generating the status indication of the cooperative robot in the work screen based on the status information includes:
acquiring the current position of the cooperative robot in the current-frame work screen;
acquiring the previous position of the cooperative robot in the previous-frame work screen; and
constructing an indication mark from the previous position to the current position in the current-frame work screen; or
acquiring the current position of the cooperative robot in the current-frame work screen;
extracting the screen boundary closest to the current position in the current-frame work screen; and
marking the screen boundary with an indication mark.
In a second aspect, an embodiment of the present application provides a multi-machine teleoperation display control device, including:
an image acquisition module, configured to acquire a work screen of the remotely controlled robot, the work screen including the cooperative robot and/or a status indication of the cooperative robot; and
an image display module, configured to send the work screen to a display for display.
In a third aspect, embodiments of the present application provide a teleoperation system, including: an image sensor, a remotely controlled robot, a cooperative robot, a display, and a controller;
the image sensor is configured to collect an initial work screen of the remotely controlled robot and send it to the controller;
the controller is configured to acquire a work screen, the work screen including the cooperative robot and/or a status indication of the cooperative robot, and to send the work screen to the display for display; the work screen is the initial work screen, or is obtained by processing the initial work screen.
In a fourth aspect, embodiments of the present application provide a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the multi-machine teleoperation display control method according to any one of the above when the computer program is executed.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a multi-machine teleoperation display control method as defined in any one of the above.
Compared with the prior art, the embodiment of the application has the following main beneficial effects:
according to the method and the device for displaying the working conditions of the cooperative robots in the working picture, after the robots disappear from the working picture, the state indication of the working conditions of the cooperative robots is still generated, so that operators can know the working conditions of the cooperative robots conveniently, the execution conditions of target tasks can be judged more intuitively, and teleoperation tasks can be completed better.
Drawings
For a clearer description of the solution in the present application, a brief description will be given below of the drawings that are needed in the description of the embodiments of the present application, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a schematic diagram of one embodiment in which the cooperative robot moves from within the field of view of the image sensor to outside it;
FIG. 3A is a schematic diagram of one embodiment of a work screen including the cooperative robot;
FIG. 3B is a schematic diagram of one embodiment of a work screen including a status indication of the cooperative robot;
FIG. 3C is a schematic diagram of another embodiment of a work screen including a status indication of the cooperative robot;
FIG. 4 is a flow diagram of one embodiment of a teleoperational display control method of the present application;
FIG. 5 is a flow chart of another embodiment of a teleoperational display control method of the present application;
FIG. 6 is a schematic diagram of one embodiment of a teleoperated display control device of the present application;
FIG. 7 is a schematic diagram of an embodiment of a computer device of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. The terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the above description of the drawings are intended to cover non-exclusive inclusion. The terms "first", "second", and the like in the description, claims, or above-described drawings are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to better understand the technical solutions of the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is an exemplary system architecture diagram of a teleoperational system of the present application.
Embodiments of the present application provide a teleoperational system 100, comprising: a remotely controlled robot 110, a collaborative robot 120, an image sensor 130, a display 140, and a controller 150.
The robot 110, the image sensor 130, and the display 140 are communicatively connected to the controller 150 by wired or wireless means, respectively.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G/5G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other wireless connections now known or developed in the future.
An image sensor 130, configured to collect the work screen of the remotely controlled robot 110; the work screen includes the cooperative robot and/or a status indication of the cooperative robot.
It should be noted that, in addition to content related to the condition of the remotely controlled robot, the work screen may also include a status indication of the cooperative robot (e.g., when a moving cooperative robot disappears from the work screen), as described in further detail later.
In an alternative embodiment, when the cooperative robot 120 is fixed, the viewing angle of the image sensor 130 may be adjusted in advance so that the cooperative robot 120 is included in the work screen captured by the image sensor 130. In another embodiment, when the cooperative robot 120 is in motion, the moving cooperative robot 120 may in some cases disappear from the work screen.
Specifically, the image sensor may be, but is not limited to: a camera, a video camera, a scanner, or another device with the corresponding function (a mobile phone, a computer, etc.). The work screen may be a two-dimensional image, a multi-dimensional image, or the like.
The image sensor 130 may be fixed to the remotely controlled robot as needed, so as to follow its movement, or fixed at a preset fixed position outside the remotely controlled robot. For ease of understanding, the embodiments of the present application are described in detail taking as an example the case where the image sensor 130 is fixed at a preset fixed position outside the robot 110, with its viewing angle adjusted in advance as required.
The remotely controlled robot 110 moves based on motion instructions and the like generated by the controller.
The remotely controlled robot 110 may be, but is not limited to: a humanoid robot, a manipulator, a surgical/medical robot, a service robot, or an unmanned robot. The robot may refer to a whole robot, or to the part of a robot under teleoperation control, for example: the upper body of a humanoid robot or the gripper portion of a robot. Taking a manipulator as an example, the pose of the robot described in the following embodiments may refer to the pose of all or some of the manipulator's joints, for example: the pose of the manipulator end, where in an alternative embodiment the manipulator end may refer to the center of the flange at the output of the manipulator's distal joint.
The cooperative robot 120 is configured to cooperate with the robot 110 to perform a target task.
Specifically, the cooperative robot may be another teleoperated robot; it may also be a non-teleoperated robot that generates motion instructions based on a fixed program or a preset model, or the like.
A display 140, configured to display the work screen to the operator.
In an alternative embodiment, the image sensor 130 captures a work screen of the robot being remotely controlled and sends the work screen to the controller 150 (e.g., a server); the controller 150 sends the work picture to the display 140 for display, either directly or after some processing of the work picture.
In particular, the display may be a display screen, or an AR/VR virtual display device.
In an alternative embodiment, the operator wears an AR virtual display device to display a virtual 3D work screen to the operator.
In an alternative embodiment, the teleoperation system 100 also includes a posture sensor 160.
The posture sensor 160 is communicatively connected to the controller 150 by wired or wireless means.
The posture sensor 160 is configured to acquire motion data of key parts of the operator.
In particular, the posture sensor may be any of various sensors that can collect motion-related data, such as IMUs, image sensors, etc. For ease of understanding, as shown in fig. 1, the embodiments of the present application are described in detail mainly taking an IMU as the posture sensor.
The IMU is an inertial measurement unit, configured to measure motion data of a target object, including three-dimensional acceleration and three-dimensional rotation angles.
It should be noted that the posture sensors 160 may be directly fixed at preset key parts of the operator (as shown in fig. 1). A plurality of posture sensors may also be provided in advance in a wearable device (such as an exoskeleton or a data glove), which is worn on the operator's body so that the sensors are located at the preset key parts (not shown in the drawings).
A controller 150 for executing the teleoperation display control method and the like described in the embodiments of the present application.
In an alternative embodiment, the controller is configured to acquire a work screen, the work screen including the cooperative robot and/or a status indication of the cooperative robot, and to send the work screen to the display for display; the work screen is the initial work screen, or is obtained by processing the initial work screen.
It should be noted that, the controller described in the embodiments of the present application may refer to a controller of the entire teleoperation system, or may be a controller of a remote controlled robot and/or a cooperative robot, a controller of a display, etc., and for convenience of understanding, the embodiments of the present application are collectively referred to as a controller. The controllers may be integrated or may be separately provided in the respective robots, displays, etc.
The teleoperation display control method provided by the embodiments of the present application may be applied to a personal computer (PC); an industrial personal computer (IPC); a mobile terminal; a server; a system comprising a terminal and a server, implemented through interaction between the terminal and the server; a programmable logic controller (PLC); a field-programmable gate array (FPGA); a digital signal processor (DSP); a microcontroller unit (MCU); or the like. The controller generates program instructions according to a pre-fixed program in combination with data acquired by the external IMU or the like. For specific limitations on the controller, reference may be made to the limitations of the teleoperation display control method in the following embodiments.
Specifically, the method may be applied to the computer device shown in fig. 7, which may be a terminal or a server. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through WiFi, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements the teleoperation display control method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, buttons, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
It should be noted that, the method for controlling a teleoperation display provided in the embodiments of the present application is generally executed by the controller 150, and accordingly, the device for controlling a teleoperation display is generally disposed in the controller 150.
As shown in fig. 4, fig. 4 is a flow chart of one embodiment of a teleoperation display control method of the present application.
Step 210: acquire a work screen of the remotely controlled robot; the work screen includes the cooperative robot and/or a status indication of the cooperative robot.
As shown in fig. 3A, fig. 3A is a schematic diagram of an embodiment in which the work screen includes the cooperative robot. In an alternative embodiment, when the cooperative robot 120 is fixed, the viewing angle of the image sensor 130 may be adjusted in advance; the controller fetches, from the memory or the server according to a preset storage address, the initial work screen collected and sent by the image sensor 130, which directly contains the image portion 210 of the cooperative robot, and this initial work screen can then be used directly as the work screen. The operator can directly learn about the operation of the cooperative robot through the cooperative robot image portion 210.
As shown in figs. 3B and 3C, fig. 3B is a schematic diagram of one embodiment in which the work screen includes a status indication of the cooperative robot, and fig. 3C is a schematic diagram of another such embodiment.
In another alternative embodiment, when the cooperative robot 120 is in motion, the moving cooperative robot 120 may in some cases disappear from the work screen; the controller then fetches, from the memory or the server according to a preset storage address, a work screen that includes a status indication of the cooperative robot. Such a work screen is obtained by the controller performing certain processing on the initial work screen collected and sent by the image sensor 130, as described in further detail later.
Step 220: send the work screen to a display for display.
In an alternative embodiment, before step 220, the method may further include:
Step 230: convert the work screen into one that can be displayed by a virtual display device.
In an alternative embodiment, the controller converts the work screen into a virtual three-dimensional work screen displayable by a virtual display device (e.g., VR or AR), which helps the operator perform the target task more accurately based on the work screen.
According to the embodiments of the present application, a status indication of the cooperative robot's working condition is still generated after the cooperative robot disappears from the work screen, so that the operator can keep track of the cooperative robot, judge the execution of the target task more intuitively, and complete the teleoperation task better. For example: decisions can be made and actions taken more quickly; errors caused by missing information about the cooperative robot are reduced; and the operator can better understand the relative positions, motion trajectories, and working states of the robots, avoiding safety problems such as collisions and misoperation between them.
In an alternative embodiment, the work screen may include the remotely controlled robot in addition to the cooperative robot. For example: the image sensor 130 is fixed at a preset position outside the remotely controlled robot 110 such that both the remotely controlled robot 110 and the cooperative robot 120 are located within the field of view of the image sensor.
In another alternative embodiment, before step 220 of sending the work screen to the display for display, the method may further include the following steps:
Step 240: obtain status information of the remotely controlled robot.
Specifically, the status information of the remotely controlled robot may be, but is not limited to: pose information of the remotely controlled robot, pose information of its joints, or its trajectory information.
Step 250: generate a status indication of the remotely controlled robot in the work screen based on the status information, in combination with the coordinate transformation relation between the image sensor and the remotely controlled robot.
In an alternative embodiment, the coordinate transformation relation between the image sensor 130 and the remotely controlled robot 110 may be calibrated in advance; based on the calibration result, the controller can map status information such as the pose of the remotely controlled robot 110 into the work screen, generating status indications of the remotely controlled robot and the cooperative robot there.
Taking a manipulator as the remotely controlled robot, the pose of the manipulator end can be obtained from the kinematic equation, converted into the image sensor coordinate system based on the calibration result, and the display coordinates of the manipulator end in the image can then be obtained from the image sensor's calibration parameters.
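For ease of understanding, a minimal sketch of this projection chain is given below (Python with NumPy). It is an illustrative reconstruction, not code from the patent: the extrinsic matrix T_cam_base, the intrinsic matrix K, and the example end position are assumed placeholders standing in for actual calibration results.

```python
import numpy as np

# Assumed calibration results (illustrative placeholders, not from the patent).
# T_cam_base maps homogeneous points from the robot base frame to the camera frame.
T_cam_base = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, -0.2],
                       [0.0, 0.0, 1.0, 1.5],
                       [0.0, 0.0, 0.0, 1.0]])
K = np.array([[800.0,   0.0, 320.0],   # pinhole intrinsics fx, fy, cx, cy
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_end_to_screen(p_base):
    """Map the manipulator end position (base frame, metres), obtained from
    the kinematic equation, to pixel coordinates on the work screen."""
    p_cam = T_cam_base @ np.append(p_base, 1.0)   # base frame -> camera frame
    u, v, w = K @ p_cam[:3]                       # pinhole projection
    return int(round(u / w)), int(round(v / w))

# Example: an end position computed by forward kinematics.
print(project_end_to_screen(np.array([0.4, 0.1, 0.3])))   # (498, 196)
```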
According to the embodiments of the present application, by further displaying the remotely controlled robot or its status indication in the work screen, the operator can better understand how the remotely controlled robot and the cooperative robot cooperate, and thus complete multi-machine teleoperation control better.
As shown in fig. 5, fig. 5 is a flow chart of another embodiment of the teleoperation display control method of the present application.
In an alternative embodiment, when the cooperative robot is in motion, it may in some cases disappear from the initial work screen; before step 210 acquires the work screen of the remotely controlled robot, the following steps may further be included:
Step 260: obtain an initial work screen of the remotely controlled robot.
In an alternative embodiment, the controller fetches, from the memory or the server according to a preset storage address, the initial work screen collected and sent by the image sensor, or that screen after some preprocessing.
Step 270: determine whether the cooperative robot is present in the initial work screen.
In an alternative embodiment, the controller may recognize the cooperative robot in the work screen based on conventional image processing or artificial intelligence; if the cooperative robot cannot be recognized, it is considered to have left the work screen. In this way, whether the cooperative robot is present in the work screen can be judged intuitively.
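As a toy illustration of this recognition step (an assumption, not the patent's method), the sketch below locates the cooperative robot via a coloured marker assumed to be attached to it; a real system might instead use fiducial markers or a trained object-detection network.

```python
import cv2

def detect_cooperative_robot(frame_bgr):
    """Toy recognizer: threshold for a green marker assumed to be attached to
    the cooperative robot; return its bounding box, or None when absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (50, 100, 100), (70, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None   # step 270: cooperative robot considered absent
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```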
In another alternative embodiment, the controller may determine whether the cooperative robot is present in the work screen based on the field of view of the image sensor, etc., as described in further detail below.
Step 280: if the cooperative robot is not present in the initial work screen, obtain status information of the cooperative robot.
Specifically, the status information may be, but is not limited to: pose information of the robot, pose information of the robot's joints, and/or trajectory information of the robot.
For example, taking a mobile humanoid robot as the cooperative robot, the status information may refer to the pose information of the humanoid robot or its movement trajectory information.
In an alternative embodiment, the controller may acquire the status information of the cooperative robot collected and sent by a position sensor. For example, the position information of the robot (i.e., the robot's pose information) may be acquired from a position sensor (e.g., a positioning device) mounted on the robot's base.
In another alternative embodiment, taking a manipulator-type cooperative robot as an example, the status information may also refer to the pose of the manipulator end. Specifically, the controller may solve the pose of the manipulator end from the robot kinematic equation, using the joint motion information acquired by the encoders.
Step 290: generate a status indication of the cooperative robot in the initial work screen based on the status information, to obtain a work screen including the status indication of the cooperative robot.
Specifically, the status indication may be, but is not limited to: an arrow pointing in the robot's direction of motion, or a mark at the corresponding image boundary; embodiments of these will be described in further detail later.
In an alternative embodiment, the following method step may be included before step 210:
Step 300: if the cooperative robot is present in the initial work screen, take the initial work screen as the work screen.
According to the embodiments of the present application, a status indication of the cooperative robot's working condition is still generated after the cooperative robot disappears from the work screen, so that the operator can keep track of the cooperative robot, judge the execution of the target task more intuitively, and complete the teleoperation task better.
In another alternative embodiment, step 270 of determining whether the cooperative robot is present in the initial work screen may specifically include the following method steps:
Step 271: obtain the pose of the cooperative robot.
The method for obtaining the pose of the cooperative robot may refer to the above embodiments and is not repeated here.
Step 272: obtain the field of view of the image sensor.
In an alternative embodiment, the controller may determine the coordinates of the field-of-view boundary of the image sensor in advance, based on the position coordinates of the image sensor and the calibration parameters of the image sensor itself.
Step 273: determine whether the pose of the cooperative robot is outside the field of view.
In an alternative embodiment, the controller may convert the pose of the cooperative robot from the robot coordinate system into the image sensor coordinate system, and then determine whether the pose in the image sensor coordinate system lies outside the preset field of view of the image sensor.
Step 274: if it is outside the field of view, consider that no cooperative robot is present in the initial work screen.
As shown in fig. 2, fig. 2 is a schematic diagram of one embodiment in which the cooperative robot moves from within the field of view of the image sensor to outside it. For example, when the viewing range of the image sensor 130 lies to the right of the boundary line 0 passing through the sensor's lens, and the cooperative robot, after conversion into the image sensor coordinate system, moves to a pose on the left side of boundary line 0, the cooperative robot 120 can be regarded as being outside the field of view of the image sensor 130.
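A minimal sketch of this check, under the same assumed calibration placeholders as above; the image size is likewise illustrative.

```python
import numpy as np

def outside_field_of_view(p_base, T_cam_base, K, width=640, height=480):
    """Steps 271-274: convert the cooperative robot's position (base frame)
    into the image sensor coordinate system and test it against the preset
    field of view, here modelled as the camera's image rectangle."""
    p_cam = T_cam_base @ np.append(p_base, 1.0)
    if p_cam[2] <= 0:                 # behind the lens: certainly out of view
        return True
    u, v, w = K @ p_cam[:3]
    u, v = u / w, v / w
    return not (0.0 <= u < width and 0.0 <= v < height)
```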
In this way, the embodiments of the present application can judge more accurately whether the cooperative robot has left the field of view of the image sensor 130 even when the robot is occluded, for example: it can be judged that the robot cannot be displayed on the work screen because it is blocked by an obstruction. This has wider applicability than recognizing the robot from the image.
In an alternative embodiment, taking the cooperative robot 120 as a manipulator, step 280 of obtaining the status information of the cooperative robot may include the following method steps:
Step 281: obtain joint motion information of the cooperative robot.
In an alternative embodiment, the controller may obtain the joint motion information collected and sent by motion encoders located at the joints of the manipulator.
Step 282: obtain the pose of the end of the cooperative robot based on the joint motion information.
In an alternative embodiment, the controller may solve the pose of the manipulator end from the joint motion information in combination with the robot kinematic equation.
Step 283: take the pose of the end of the cooperative robot as the status information of the cooperative robot.
According to the embodiments of the present application, after the end of the robot disappears from the current work screen, its pose can still be obtained by the above method, and a status indication of the robot end can then be generated from that pose information.
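As a deliberately simple stand-in for the kinematic equation (an assumed planar two-link arm, not the patent's actual manipulator), the sketch below maps encoder readings to an end pose.

```python
import numpy as np

def end_pose_from_joints(q, links=(0.3, 0.25)):
    """Forward kinematics of a planar two-link arm: joint angles q (radians,
    from the encoders) -> end position (x, y) and orientation theta in the
    base frame. Link lengths are illustrative."""
    l1, l2 = links
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y]), q[0] + q[1]

# Encoder readings -> end pose used as the cooperative robot's status information.
position, theta = end_pose_from_joints(np.array([0.5, -0.3]))
```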
In an alternative embodiment, step 290 of generating a status indication of the cooperative robot in the work screen based on the status information may specifically include the following steps:
Step 291: obtain the current position of the cooperative robot in the current-frame work screen.
Step 293: obtain the previous position of the cooperative robot in the previous-frame work screen.
Step 295: construct an indication mark from the previous position to the current position in the current-frame work screen.
Specifically, the indication mark may be, but is not limited to: an arrow indication mark or a connecting-line indication mark.
For example, the arrow indication mark shown in fig. 2 may be generated based on the method described in the above embodiments.
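A minimal sketch of such an arrow mark using OpenCV; the pixel positions are assumed to come from the projection described earlier, and the colour and size are illustrative choices.

```python
import cv2
import numpy as np

def draw_motion_arrow(frame, prev_px, curr_px):
    """Steps 291-295: draw an indication arrow from the cooperative robot's
    previous-frame position to its current-frame position."""
    cv2.arrowedLine(frame, prev_px, curr_px, (0, 0, 255), 2, tipLength=0.2)
    return frame

canvas = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in work screen
draw_motion_arrow(canvas, (600, 240), (636, 250))
```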
In the embodiments of the present application, when the cooperative robot disappears from the work screen, an indication mark from the previous position to the current position is constructed in the current-frame work screen, so that a status indication of the cooperative robot's working condition is still available and the operator can keep track of it.
In another alternative embodiment, step 290 of generating a status indication of the cooperative robot in the work screen based on the status information may specifically include the following steps:
Step 292: obtain the current position of the cooperative robot in the current-frame work screen.
Step 294: extract the screen boundary closest to the current position in the current-frame work screen.
In an alternative embodiment, a perpendicular may be dropped from the current position toward the work screen; its foot on the screen edge is then the "screen boundary closest to the current position" described in the above embodiment.
Step 296: mark the indication mark at that screen boundary.
Specifically, the indication mark may be, but is not limited to: an arrow mark in a preset direction displayed at the work screen boundary, a boundary point-selection mark, or a boundary box-selection mark.
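A minimal sketch of this boundary marking; the filled-circle marker is one of the styles the text allows, and the image size is illustrative.

```python
import cv2
import numpy as np

def mark_nearest_boundary(frame, curr_px):
    """Steps 292-296: find the screen edge closest to the (possibly
    out-of-frame) current position and place an indication mark on it."""
    h, w = frame.shape[:2]
    x, y = curr_px
    dists = {"left": abs(x), "right": abs(w - 1 - x),
             "top": abs(y), "bottom": abs(h - 1 - y)}
    edge = min(dists, key=dists.get)
    # Foot of the perpendicular from the current position onto that edge,
    # clamped so the mark stays inside the visible screen.
    cx = int(np.clip(x, 0, w - 1))
    cy = int(np.clip(y, 0, h - 1))
    if edge == "left":
        cx = 0
    elif edge == "right":
        cx = w - 1
    elif edge == "top":
        cy = 0
    else:
        cy = h - 1
    cv2.circle(frame, (cx, cy), 8, (0, 255, 0), -1)
    return frame
```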
According to the embodiments of the present application, when the cooperative robot disappears from the work screen, the controller extracts the screen boundary closest to the current position in the current-frame work screen and marks the indication mark on that boundary, so that a status indication of the cooperative robot's working condition is still available and the operator can keep track of it.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored in a computer-readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
With further reference to fig. 6, as an implementation of the method shown in fig. 4, the present application provides an embodiment of a teleoperated display control device, which corresponds to the method embodiment shown in fig. 4, and which is particularly applicable to various controllers.
As shown in fig. 6, the teleoperation display control apparatus 400 of this embodiment includes an image acquisition module 410 and an image sending module 420, where:
the image acquisition module 410 is configured to acquire a work screen of the remotely controlled robot, the work screen including the cooperative robot and/or a status indication of the cooperative robot; and
the image sending module 420 is configured to send the work screen to the display for display.
According to the embodiments of the present application, a status indication of the cooperative robot's working condition is still generated after the cooperative robot disappears from the work screen, so that the operator can keep track of the cooperative robot, judge the execution of the target task more intuitively, and complete the teleoperation task better.
In one embodiment, the work screen may also include the remotely controlled robot and/or a status indication of the remotely controlled robot.
In an alternative embodiment, the teleoperation display control apparatus 400 may further include:
a status acquisition module, configured to acquire status information of the remotely controlled robot; and
an indication generation module, configured to generate a status indication of the remotely controlled robot in the work screen based on the status information, in combination with the coordinate transformation relation between the image sensor and the remotely controlled robot.
According to the embodiments of the present application, by further displaying the remotely controlled robot or its status indication in the work screen, the operator can better understand how the remotely controlled robot and the cooperative robot cooperate, and thus complete multi-machine teleoperation control better.
In an alternative embodiment, the teleoperation display control apparatus 400 may further include:
an initial acquisition module, configured to acquire an initial work screen of the remotely controlled robot;
a cooperation judging module, configured to determine whether the cooperative robot is present in the initial work screen;
an indication determining module, configured to, if the cooperative robot is not present in the initial work screen, acquire status information of the cooperative robot and generate a status indication of the cooperative robot in the initial work screen based on the status information, to obtain a work screen including the status indication of the cooperative robot; and
a screen determining module, configured to, if the cooperative robot is present in the initial work screen, take the initial work screen as the work screen including the cooperative robot.
According to the embodiments of the present application, a status indication of the cooperative robot's working condition is still generated after the cooperative robot disappears from the work screen, so that the operator can keep track of the cooperative robot, judge the execution of the target task more intuitively, and complete the teleoperation task better.
In an alternative embodiment, the cooperation judging module may specifically include:
a cooperation judging sub-module, configured to determine whether a cooperative robot is recognized in the initial work screen; and
a screen determining sub-module, configured to consider that no cooperative robot is present in the initial work screen if none is recognized.
In this way, whether the cooperative robot has left the work screen can be judged intuitively.
In another alternative embodiment, the cooperation judging module may specifically include:
a pose solving sub-module, configured to solve the pose of the cooperative robot;
a field-of-view acquisition sub-module, configured to acquire the field of view of the image sensor;
a pose judging sub-module, configured to determine whether the pose of the cooperative robot lies outside the field of view; and
a screen determining sub-module, configured to consider that no cooperative robot is present in the initial work screen if the robot has left the field of view.
In this way, whether the cooperative robot has left the work screen can be judged more accurately even when the robot is occluded, giving wider applicability.
In an alternative embodiment, the indication determining module may specifically include:
an information acquisition sub-module, configured to acquire joint motion information of the cooperative robot;
a pose solving sub-module, configured to solve the pose of the end of the cooperative robot based on the joint motion information; and
a status determining sub-module, configured to take the pose of the end of the cooperative robot as the status information of the cooperative robot.
In this way, after the robot end disappears from the current work screen, its pose information can still be obtained, and a status indication of the robot end can then be generated from it.
In another alternative embodiment, the indication determining module may specifically include:
a position acquisition sub-module, configured to acquire the position information of the cooperative robot collected and sent by the position sensor.
In this way, the position information of the cooperative robot can be acquired directly from the position sensor.
In an alternative embodiment, the indication determining module may specifically further include:
a current-position acquisition sub-module, configured to acquire the current position of the cooperative robot in the current-frame work screen;
a previous-position acquisition sub-module, configured to acquire the previous position of the cooperative robot in the previous-frame work screen; and
a mark construction sub-module, configured to construct an indication mark from the previous position to the current position in the current-frame work screen.
In this way, when the cooperative robot disappears from the work screen, an indication mark from the previous position to the current position is constructed in the current-frame work screen, so that a status indication of the cooperative robot's working condition is still available and the operator can keep track of it.
In another alternative embodiment, the indication determining module may specifically further include:
a current-position acquisition sub-module, configured to acquire the current position of the cooperative robot in the current-frame work screen;
a boundary extraction sub-module, configured to extract the screen boundary closest to the current position in the current-frame work screen; and
an indication marking sub-module, configured to mark the indication mark on that screen boundary.
In this way, when the cooperative robot disappears from the work screen, the controller extracts the screen boundary closest to the current position in the current-frame work screen and marks the indication mark on it, so that a status indication of the cooperative robot's working condition is still available and the operator can keep track of it.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 7, fig. 7 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 6 comprises a memory 61, a processor 62, and a network interface 63 communicatively connected to each other via a system bus. It is noted that only a computer device 6 having components 61-63 is shown in the figure, but it should be understood that not all of the illustrated components need be implemented, and more or fewer components may be implemented instead. As will be appreciated by those skilled in the art, the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices, etc.
The computer device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or other computing device. The computer device can interact with a user through a keyboard, a mouse, a remote controller, a touchpad, a voice control device, or the like.
The memory 61 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disks, optical disks, etc. In some embodiments, the memory 61 may be an internal storage unit of the computer device 6, such as the hard disk or internal memory of the computer device 6. In other embodiments, the memory 61 may also be an external storage device of the computer device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 6. Of course, the memory 61 may also include both an internal storage unit and an external storage device of the computer device 6. In this embodiment, the memory 61 is typically used to store the operating system and various application software installed on the computer device 6, such as the program code of the teleoperation display control method; it may also be used to temporarily store various types of data that have been output or are to be output.
The processor 62 may in some embodiments be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 62 is typically used to control the overall operation of the computer device 6. In this embodiment, the processor 62 is configured to run the program code stored in the memory 61 or to process data, for example to run the program code of the teleoperation display control method.
The network interface 63 may comprise a wireless network interface or a wired network interface, which network interface 63 is typically used for establishing a communication connection between the computer device 6 and other electronic devices.
The present application also provides another embodiment, namely, a computer-readable storage medium storing a teleoperation display control program executable by at least one processor to cause the at least one processor to perform the steps of the teleoperation display control method as described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is preferred. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
It is apparent that the embodiments described above are only some, not all, of the embodiments of the present application; the preferred embodiments are given in the drawings, but they do not limit the patent scope of the present application. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the present application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. Any equivalent structure made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, likewise falls within the protection scope of the present application.

Claims (10)

1. A multi-machine teleoperation display control method, the multi-machine including a robot to be remotely controlled and a cooperative robot outside the robot to be remotely controlled, the method comprising:
acquiring a working picture of a robot to be remotely controlled; the working picture comprises a cooperative robot and/or a state indication of the cooperative robot;
And sending the working picture to a display for display.
2. The multi-machine teleoperation display control method according to claim 1, wherein the working picture further comprises the robot to be remotely controlled; or
before the working picture is sent to the display for display, the method further comprises:
acquiring state information of the robot to be remotely controlled; and
generating, based on the state information and in combination with a coordinate conversion relation between an image sensor and the robot to be remotely controlled, a state indication of the robot to be remotely controlled in the working picture.
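By way of illustration only, and not as part of the claimed method, the coordinate conversion relation recited above can be sketched as a pinhole-camera projection from the robot base frame into the image. The names project_robot_point, T_cam_robot (extrinsic transform) and K (intrinsic matrix) are hypothetical:

import numpy as np

def project_robot_point(p_robot, T_cam_robot, K):
    # Project a 3D point given in the robot base frame into pixel coordinates.
    # p_robot:     (3,) point in the robot base frame
    # T_cam_robot: (4, 4) homogeneous transform, robot base frame -> camera frame
    # K:           (3, 3) pinhole intrinsic matrix
    p_cam = T_cam_robot @ np.append(p_robot, 1.0)   # point in the camera frame
    uvw = K @ p_cam[:3]                             # perspective projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]         # normalise by depth
    return (u, v), p_cam[2]                         # pixel position and depth

The state indication can then be drawn at the returned pixel position; the same projection also underlies the field-of-view test of claim 4 below.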
3. The method according to claim 1 or 2, wherein, while the cooperative robot is in motion, before the working picture of the robot to be remotely controlled is acquired, the method further comprises:
acquiring an initial working picture of the robot to be remotely controlled;
judging whether the cooperative robot is present in the initial working picture;
if the cooperative robot is not present in the initial working picture, acquiring state information of the cooperative robot, and generating a state indication of the cooperative robot in the initial working picture based on the state information, to obtain the working picture including the state indication of the cooperative robot; and
if the cooperative robot is present in the initial working picture, taking the initial working picture as the working picture comprising the cooperative robot.
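Purely as an illustration of this branch (not a definitive implementation), assuming the working picture is an H×W×3 BGR array and robot_uv is the projected pixel position of the cooperative robot, or None when unavailable; the red strip is a crude stand-in for the indication marks of claim 6:

import numpy as np

def build_working_picture(initial_picture, robot_uv):
    # robot_uv: projected (u, v) position of the cooperative robot, or None.
    h, w = initial_picture.shape[:2]
    in_view = (robot_uv is not None
               and 0 <= robot_uv[0] < w and 0 <= robot_uv[1] < h)
    if in_view:
        return initial_picture          # robot already visible: use picture as-is
    picture = initial_picture.copy()    # otherwise overlay a state indication
    picture[:, :6] = (0, 0, 255)        # crude red strip on the left boundary
    return picture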
4. The multi-machine teleoperation display control method according to claim 3, wherein the judging whether the cooperative robot is present in the initial working picture comprises the steps of:
solving for the pose of the cooperative robot;
acquiring the field of view of an image sensor;
judging whether the pose of the cooperative robot is outside the field of view; and
if the pose is outside the field of view, determining that the cooperative robot is not present in the initial working picture; or
judging whether the cooperative robot is recognized in the initial working picture; and
if the cooperative robot is not recognized, determining that the cooperative robot is not present in the initial working picture.
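A minimal sketch of the first alternative (pose outside the field of view), reusing the hypothetical pinhole model from the sketch under claim 2; in_field_of_view and image_size are likewise hypothetical names:

import numpy as np

def in_field_of_view(p_robot, T_cam_robot, K, image_size):
    # image_size: (width, height) in pixels.
    p_cam = T_cam_robot @ np.append(p_robot, 1.0)
    if p_cam[2] <= 0:                   # behind the camera: not visible
        return False
    uvw = K @ p_cam[:3]
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    w, h = image_size
    return 0 <= u < w and 0 <= v < h    # inside the picture boundaries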
5. The multi-machine teleoperation display control method according to claim 3, wherein the acquiring of the state information of the cooperative robot comprises the steps of:
acquiring joint motion information of the cooperative robot; calculating the pose of the end of the cooperative robot based on the joint motion information; and taking the pose of the end of the cooperative robot as the state information of the cooperative robot; or
acquiring the state information of the cooperative robot as collected and sent by a position sensor.
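For the first alternative, the end pose follows from the joint motion information by forward kinematics. A minimal sketch under the standard Denavit-Hartenberg convention (the claim does not prescribe any particular kinematic formulation):

import numpy as np

def dh_transform(theta, d, a, alpha):
    # Homogeneous transform of one joint, standard Denavit-Hartenberg convention.
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

def end_pose(joint_angles, dh_params):
    # Chain the per-joint transforms; dh_params holds (d, a, alpha) per joint.
    # Returns the 4x4 pose of the end of the robot in its base frame.
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T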
6. The multi-machine teleoperation display control method according to claim 3, wherein the generating of the state indication of the cooperative robot in the working picture based on the state information comprises the steps of:
acquiring a current position of the cooperative robot in a current-frame working picture;
acquiring a previous-frame position of the cooperative robot in a previous-frame working picture; and
constructing an indication mark from the previous-frame position to the current position in the current-frame working picture; or
acquiring the current position of the cooperative robot in the current-frame working picture;
extracting the picture boundary closest to the current position in the current-frame working picture; and
marking the picture boundary with an indication mark.
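The two alternatives can be sketched with OpenCV drawing primitives; the colour, thickness and band width are illustrative choices only, not prescribed by the claim:

import cv2

def draw_motion_arrow(picture, prev_uv, curr_uv):
    # First alternative: arrow from the previous-frame position to the
    # current position, indicating the cooperative robot's motion.
    cv2.arrowedLine(picture, tuple(map(int, prev_uv)), tuple(map(int, curr_uv)),
                    color=(0, 0, 255), thickness=2, tipLength=0.3)
    return picture

def mark_nearest_boundary(picture, curr_uv, band=6):
    # Second alternative: highlight the picture boundary closest to the
    # current position.
    h, w = picture.shape[:2]
    u, v = curr_uv
    dists = {"left": u, "right": w - u, "top": v, "bottom": h - v}
    side = min(dists, key=dists.get)
    if side == "left":      picture[:, :band] = (0, 0, 255)
    elif side == "right":   picture[:, w - band:] = (0, 0, 255)
    elif side == "top":     picture[:band, :] = (0, 0, 255)
    else:                   picture[h - band:, :] = (0, 0, 255)
    return picture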
7. A multi-machine teleoperation display control device, comprising:
the image acquisition module is used for acquiring a working picture of the robot to be remotely controlled, wherein the working picture comprises the cooperative robot and/or a state indication of the cooperative robot; and
and the image display module is used for sending the working picture to a display for display.
8. A teleoperation system, comprising: an image sensor, a robot to be remotely controlled, a cooperative robot, a display and a controller; wherein
the image sensor is used for collecting an initial working picture of the robot to be remotely controlled and sending the initial working picture to the controller; and
the controller is used for acquiring a working picture, wherein the working picture comprises the cooperative robot and/or a state indication of the cooperative robot, and for sending the working picture to the display for display; the working picture is the initial working picture, or is obtained by processing the initial working picture.
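Illustratively, this data path (image sensor to controller to display) reduces to a loop of the following form; read_picture, process and show are hypothetical callables standing in for the sensor, the optional picture processing, and the display:

from typing import Callable
import numpy as np

def controller_step(read_picture: Callable[[], np.ndarray],
                    process: Callable[[np.ndarray], np.ndarray],
                    show: Callable[[np.ndarray], None]) -> None:
    # One cycle: take the initial working picture from the image sensor,
    # optionally process it (e.g. add a state indication), then display it.
    initial_picture = read_picture()
    working_picture = process(initial_picture)
    show(working_picture)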
9. A computer device comprising a memory and a processor, the memory having a computer program stored therein, wherein the processor, when executing the computer program, implements the steps of the multi-machine teleoperation display control method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the multi-machine teleoperation display control method according to any one of claims 1 to 6.
CN202310302127.8A 2023-03-16 2023-03-16 Multi-machine teleoperation display control method, device and equipment Pending CN116277005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310302127.8A CN116277005A (en) 2023-03-16 2023-03-16 Multi-machine teleoperation display control method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310302127.8A CN116277005A (en) 2023-03-16 2023-03-16 Multi-machine teleoperation display control method, device and equipment

Publications (1)

Publication Number Publication Date
CN116277005A true CN116277005A (en) 2023-06-23

Family

ID=86834072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310302127.8A Pending CN116277005A (en) 2023-03-16 2023-03-16 Multi-machine teleoperation display control method, device and equipment

Country Status (1)

Country Link
CN (1) CN116277005A (en)

Similar Documents

Publication Publication Date Title
Krupke et al. Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction
Frank et al. Realizing mixed-reality environments with tablets for intuitive human-robot collaboration for object manipulation tasks
CN111694429A (en) Virtual object driving method and device, electronic equipment and readable storage
US10166673B2 (en) Portable apparatus for controlling robot and method thereof
CN107030692B (en) Manipulator teleoperation method and system based on perception enhancement
TW201346640A (en) Image processing device, and computer program product
EP4187348A1 (en) Method and apparatus for movable robot to adjust pose of goods rack
CN112083801A (en) Gesture recognition system and method based on VR virtual office
CN108828996A (en) A kind of the mechanical arm remote control system and method for view-based access control model information
CN113103230A (en) Human-computer interaction system and method based on remote operation of treatment robot
CN105319991A (en) Kinect visual information-based robot environment identification and operation control method
CN115847422A (en) Gesture recognition method, device and system for teleoperation
JP2015118442A (en) Information processor, information processing method, and program
CN113119104B (en) Mechanical arm control method, mechanical arm control device, computing equipment and system
CN112416126B (en) Page scrolling control method and device, storage medium and electronic equipment
CN113601510A (en) Robot movement control method, device, system and equipment based on binocular vision
CN116277005A (en) Multi-machine teleoperation display control method, device and equipment
CN114074321A (en) Robot calibration method and device
CN114740854A (en) Robot obstacle avoidance control method and device
KR20230100101A (en) Robot control system and method for robot setting and robot control using the same
JP6885909B2 (en) Robot control device
CN116442218A (en) Teleoperation tracking method, device, equipment, system and storage medium
CN113758481A (en) Grid map generation method, device, system, storage medium and electronic equipment
CN111462341A (en) Augmented reality construction assisting method, device, terminal and medium
CN116901059A (en) Gesture sensor-based selection and solution method, device and system in teleoperation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination