CN114074320B - Robot control method and device

Robot control method and device

Info

Publication number
CN114074320B (application CN202010796376.3A)
Authority
CN
China
Prior art keywords
area
robot
environment
obstacle
visual
Prior art date
Legal status
Active
Application number
CN202010796376.3A
Other languages
Chinese (zh)
Other versions
CN114074320A
Inventor
张禹
罗绍涵
沈毅
叶坤
李荟珊
Current Assignee
KUKA Robotics Guangdong Co Ltd
Original Assignee
KUKA Robotics Guangdong Co Ltd
Priority date
Filing date
Publication date
Application filed by KUKA Robotics Guangdong Co Ltd
Priority to CN202010796376.3A
Publication of CN114074320A
Application granted
Publication of CN114074320B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application provides a robot control method and device. The method comprises the following steps: determining, based on a visual sensor arranged on a joint of the robot, whether the visual identification area of the visual sensor covers the environment area where the robot is located; if the visual identification area does not cover the environment area, adjusting the joint until the visual identification area covers the environment area; after the visual identification area covers the environment area, acquiring environment data in the visual identification area; and controlling the robot based on the environment data. Because the visual sensor is arranged on a joint of the robot body, movement of the robot body cannot shield the visual sensor and create a blind area.

Description

Robot control method and device
Technical Field
The application relates to the field of artificial intelligence, in particular to a robot control method and device.
Background
With the growing demand for upgrading labor-intensive industries, industrial robots are used more and more widely in production, manufacturing and related fields. An industrial robot can run stably and reliably and efficiently complete production and manufacturing tasks according to a pre-set program. During operation, the robot generally needs to acquire information about its surrounding environment so that it can be controlled according to that environment.
In the prior art, the surrounding environment is usually captured by cameras installed in the environment itself. However, the robot may move while working, and its movement can create blind areas in the cameras' view of the surroundings, resulting in control errors.
Disclosure of Invention
The application aims to provide a robot control method and device that can reduce blind areas in the surrounding environment to a certain extent and thereby reduce control errors.
According to an aspect of an embodiment of the present application, there is provided a method including: determining whether a visual recognition area of a visual sensor covers an environmental area where the robot is located, based on the visual sensor disposed on a joint of the robot; if the visual identification area does not cover the environment area, adjusting the joint until the visual identification area covers the environment area; after the visual recognition area covers the environment area, acquiring environment data in the visual recognition area; controlling the robot based on the environmental data.
According to an aspect of an embodiment of the present application, there is provided an apparatus, including: a determination module configured to determine whether a visual recognition area of a visual sensor provided on a joint of a robot covers an environmental area in which the robot is located, based on the visual sensor; an adjusting module configured to adjust the joint if the visual recognition area does not cover the environment area until the visual recognition area covers the environment area; an acquisition module configured to acquire environmental data in the visual recognition area after the visual recognition area covers the environmental area; a control module configured to control the robot based on the environmental data.
In some embodiments of the present application, based on the foregoing, the determining module is configured to: acquiring initial environment data in the visual identification area; determining whether the visual recognition area is occluded based on the initial environmental data; and determining whether the visual identification area covers the environment area where the robot is located or not based on whether the visual identification area is shielded or not.
In some embodiments of the present application, based on the above solution, there are a plurality of joints, and the adjustment module is configured to: adjust the height of the plurality of joints relative to the base of the robot until the visual recognition area covers the environment area in the height direction.
In some embodiments of the present application, based on the foregoing, the control module is configured to: identifying an obstacle within the environmental region based on the environmental data; acquiring an obstacle distance between the obstacle and the robot; and controlling the robot to avoid the obstacle in the movement process based on the obstacle distance.
In some embodiments of the present application, based on the foregoing solution, the control module is configured to: constructing an environment image of the environment area based on the environment data; acquiring difference pixels between the environment image and a preset image; based on the difference pixels, an obstacle within the environmental region is identified.
In some embodiments of the present application, based on the foregoing, the control module is configured to: the vision sensors are multiple, and an area image corresponding to each vision sensor is generated based on the environment data identified by each vision sensor in the vision sensors; and splicing a plurality of area images corresponding to the plurality of vision sensors to construct the environment image.
In some embodiments of the present application, based on the foregoing, the control module is configured to: acquiring position information and contour information of the obstacle; determining the obstacle distance based on the position information and the contour information.
In some embodiments of the present application, based on the foregoing, the environment area includes a safe area, a warning area and a dangerous area, and the control module is configured to: determine, based on the obstacle distance, whether the obstacle is in the safe area, the warning area or the dangerous area; if the obstacle is in the safe area, control the robot to operate normally; if the obstacle is in the warning area, control the robot to run at a reduced speed and give an alarm corresponding to the warning area; and if the obstacle is in the dangerous area, control the robot to stop running and give an alarm corresponding to the dangerous area.
In some embodiments of the present application, based on the foregoing, the control module is configured to: and controlling the running speed of the robot according to the obstacle distance between the obstacle and the robot, wherein the running speed and the obstacle distance form a positive correlation relationship.
According to an aspect of the embodiments of the present application, there is provided a computer-readable program medium storing computer program instructions, which, when executed by a computer, cause the computer to perform the method of any one of the above.
According to an aspect of an embodiment of the present application, there is provided an electronic apparatus including: a processor; a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of the above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the technical solutions provided by some embodiments of the application, whether the visual identification area of a visual sensor covers the environment area where the robot is located is determined based on the visual sensor arranged on a joint of the robot. If the visual identification area does not cover the environment area, the joint is adjusted until it does; after the visual identification area covers the environment area, environment data in the visual identification area are obtained and the robot is controlled based on those data. Because the visual sensor is arranged on a joint of the robot body, movement of the robot body cannot shield the visual sensor and create a blind area; and because the joint is movable, its position can be adjusted, which prevents the visual sensor from being shielded by the movement of the robot's own joints. All environment data in the environment area can therefore be obtained, the robot can be better controlled according to these data, and control errors are reduced to a certain extent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the solution of the embodiments of the present application can be applied;
fig. 2 schematically shows a schematic structural view of a robot to which the technical solution of the embodiment of the present application can be applied;
fig. 3 schematically shows a flow chart of a robot control method according to an embodiment of the application;
FIG. 4 schematically illustrates how the vision perception system of the present application covers the environment area where the robot is located, divided by height;
FIG. 5 schematically illustrates how the vision perception system of the present application divides its coverage of the environment area where the robot is located into a safe area, a warning area and a dangerous area;
FIG. 6 schematically illustrates an image observed by a visual sensor of the present application;
FIG. 7 schematically illustrates a control system schematic of the present application;
FIG. 8 schematically illustrates a block diagram of a robot control device according to an embodiment of the present application;
FIG. 9 is a hardware diagram illustrating an electronic device according to an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the technical solution of the embodiments of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a control device 101 (which may be one or more of a smartphone, a tablet, a laptop, a desktop computer and a server), a transmission medium 102 and a robot 103. The transmission medium 102 is used to provide a communication link between the control device 101 and the robot 103, and may include various connection types, such as an electrical transmission medium, a Bluetooth transmission medium, a network medium, a wired communication medium, a wireless communication medium, and so forth.
It should be understood that the numbers of the control device 101, the transmission medium 102, and the robot 103 in fig. 1 are merely illustrative. There may be any number of control devices 101, transmission media 102, and robots 103, as desired for an implementation. For example, the control device 101 may be a server cluster composed of a plurality of servers.
Fig. 2 schematically shows a schematic structural diagram of a robot to which the technical solution of the embodiment of the present application can be applied.
As shown in fig. 2, the vision sensor 104 may be distributed over a plurality of joints of the robot 103, and a plurality of vision sensors 104 may be distributed over each joint.
In an embodiment of the application, the control device 101 determines, based on the visual sensor 104 arranged on a joint of the robot 103, whether the visual recognition area of the visual sensor 104 covers the environment area where the robot 103 is located. If the visual recognition area does not cover the environment area, the joint is adjusted until it does; once the visual recognition area covers the environment area, the control device acquires the environment data in the visual recognition area and controls the robot 103 based on those data. Because the visual sensor 104 is arranged on a joint of the robot 103 body, movement of the robot body cannot shield the sensor and create a blind area; and because the joint can move, its position can be adjusted, which prevents the visual sensor 104 from being shielded by the movement of the robot's own joints. All environment data in the environment area can therefore be acquired, the robot can be better controlled according to these data, and control errors are reduced to a certain extent.
It should be noted that the robot control method provided in the embodiment of the present application is generally executed by the control apparatus 101, and accordingly, the robot control device is generally provided in the control apparatus 101. However, in other embodiments of the present application, the robot 103 may also have a similar function to the control apparatus 101, thereby performing the robot control method provided by the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 3 schematically shows a flowchart of a robot control method according to an embodiment of the present application, the execution subject of which may be the control device 101 shown in fig. 1, such as a server.
Referring to fig. 3, the robot control method at least includes steps S310 to S340, which are described in detail as follows:
in step S310, it is determined whether a visual recognition area of the visual sensor covers an environmental area in which the robot is located, based on the visual sensor provided on the robot joint.
In an embodiment of this application, the vision sensor may be an embedded binocular 3D vision sensor that is embedded in a joint of the robot. Compared with installing cameras in the surrounding environment, mounting the vision sensors on the robot itself prevents the robot from shielding the visual identification area, thereby reducing control errors to a certain extent.
In one embodiment of the present application, there may be a plurality of joints, each joint may be provided with a plurality of vision sensors, and the plurality of vision sensors may be symmetrically distributed on an outer surface of each joint, so that the plurality of vision sensors on each joint can fully cover an environmental area around each joint.
In one embodiment of the present application, the plurality of visual sensors on each joint may be evenly distributed along the edges of one cross-section of each joint. For example, if 3 vision sensors are arranged on a certain joint, in the cross section where the 3 vision sensors are located, the central angle between the radii where two adjacent vision sensors are located may be 120 degrees; if 6 vision sensors are arranged on a certain joint, in the cross section where the 6 vision sensors are located, the central angle between the radiuses where two adjacent vision sensors are located can be 60 degrees; if 2 visual sensors are arranged on a certain joint, in the cross section where the 2 visual sensors are located, the central angle between the radiuses where two adjacent visual sensors are located can be 180 degrees; if 4 vision sensors are arranged on a certain joint, in the cross section where the 4 vision sensors are located, the central angle between the radiuses where two adjacent vision sensors are located may be 90 degrees.
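For illustration, a minimal Python sketch of the even angular spacing described above is given below; the function name and the use of Python are assumptions for the example only and are not prescribed by the method described here.

```python
def mounting_angles(num_sensors: int) -> list[float]:
    """Evenly space vision sensors around a joint's cross-section.

    Returns the angle (in degrees) of the radius each sensor sits on, so the
    central angle between adjacent sensors is 360 / num_sensors: 120 degrees
    for 3 sensors, 90 for 4, 60 for 6, 180 for 2.
    """
    if num_sensors < 1:
        raise ValueError("at least one sensor is required")
    step = 360.0 / num_sensors
    return [i * step for i in range(num_sensors)]

# Example: three sensors sit on radii at 0, 120 and 240 degrees.
print(mounting_angles(3))  # [0.0, 120.0, 240.0]
```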
In one embodiment of the present application, the plurality of vision sensors on each joint may be distributed in different height directions of each joint, so that the vision recognition areas of the plurality of vision sensors can cover the whole height of the environment area where the robot is located, and the robot is controlled conveniently.
In one embodiment of the application, initial environment data in the visual recognition area may be acquired, whether the visual recognition area is occluded or not may be determined based on the initial environment data, and whether the visual recognition area covers an environment area where the robot is located may be determined based on whether the visual recognition area is occluded or not.
In one embodiment of the present application, the initial environment data may include an area, a height, whether there is a work task, a position where the work task is located, a type of the work task, whether there is an obstacle, a position where the obstacle is located, a type of the obstacle, a walkable route in the environment area, and the like of the environment area where the robot is located, which are identified by the visual recognition area of the visual sensor on the joint before the joint is adjusted.
In one embodiment of the present application, the environment area where the robot is located may be a workshop where the robot is located, a working area of the robot, an area within a safe distance from the working area of the robot, or other set areas.
In an embodiment of the application, an environment area image covered by a visual recognition area of the visual sensor before the joint is adjusted may be restored according to the initial environment data, and the restored environment area image is compared with a preset image to determine whether the visual recognition area of the visual sensor before the joint is adjusted is blocked.
In one embodiment of the application, the preset image may be an image of an area in which the robot is located.
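As one possible illustration of the comparison described above, the sketch below flags the visual recognition area as occluded when the current view differs too much from the preset image; the use of OpenCV, the pixel threshold and the changed-pixel ratio are assumptions and are not specified by the embodiments above.

```python
import cv2
import numpy as np

def recognition_area_occluded(initial_frame: np.ndarray,
                              preset_image: np.ndarray,
                              ratio_threshold: float = 0.3) -> bool:
    """Rough occlusion test: compare the sensor's current view with a preset
    image of the environment area and report occlusion when too many pixels
    differ. Both inputs are BGR images of the same size."""
    current = cv2.cvtColor(initial_frame, cv2.COLOR_BGR2GRAY)
    reference = cv2.cvtColor(preset_image, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(current, reference)
    # Pixels whose intensity changed by more than 40 are counted as "different".
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    changed_ratio = float(np.count_nonzero(mask)) / mask.size
    return changed_ratio > ratio_threshold
```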
With continued reference to fig. 3, in step S320, if the visual recognition area does not cover the environmental area, the joints are adjusted until the visual recognition area covers the environmental area.
In one embodiment of the application, if it is detected that the visual recognition area is occluded, the position of the joint is adjusted until the visual recognition area is not occluded.
In one embodiment of the application, there can be a plurality of joints, and the height of these joints relative to the base of the robot can be adjusted until the visual identification area covers the environment area in the height direction.
In one embodiment of the application, it may be determined how to adjust the joint so that the visual recognition area covers the environment area based on initial environment data in the visual recognition area before adjusting the joint.
In one embodiment of the present application, the tail joint of the plurality of joints may be coupled to the machining tool; when the joints are adjusted, this tail joint may be left unadjusted so as not to affect the machining performed by the tool.
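A rough sketch of the adjustment loop of step S320 might look as follows; the joint objects and the two callables are hypothetical stand-ins for robot and vision APIs that are not specified above, and the step size and iteration limit are illustrative values.

```python
from typing import Callable, Sequence

def adjust_until_covered(adjustable_joints: Sequence,
                         capture_views: Callable[[], list],
                         covers_environment: Callable[[list], bool],
                         step_mm: float = 10.0,
                         max_steps: int = 50) -> bool:
    """Raise the adjustable joints step by step until the combined visual
    recognition area covers the environment area (including in the height
    direction). The tail joint carrying the machining tool is simply left out
    of `adjustable_joints`."""
    for _ in range(max_steps):
        if covers_environment(capture_views()):
            return True          # coverage achieved, stop adjusting
        for joint in adjustable_joints:
            joint.raise_height(step_mm)   # hypothetical joint API: raise relative to the base
    return False                 # coverage not reached within the step budget
```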
In step S330, after the visual recognition area covers the environment area, environment data in the visual recognition area is acquired.
In one embodiment of the present application, the acquired environment data may include an area and a height of an environment area where the robot is located, whether there is a work task, a position where the work task is located, a type of the work task, whether there is an obstacle, a position where the obstacle is located, a type of the obstacle, a walkable route in the environment area, and the like.
In step S340, the robot is controlled based on the environmental data.
In one embodiment of the application, obstacles in an environment area can be identified based on environment data, an obstacle distance between the obstacle and the robot is obtained, and the robot is controlled to avoid the obstacle during movement based on the obstacle distance.
In an embodiment of the application, an environment image of an environment area may be constructed based on environment data, difference pixels between the environment image and a preset image are obtained, and an obstacle in the environment area is identified based on the difference pixels.
In one embodiment of the present application, after the difference pixels are acquired, the contour information of the article corresponding to the difference pixels may be acquired based on the difference pixels, and it may be determined based on the contour information that the article corresponding to the difference pixels is an obstacle or a processing object, so as to distinguish the obstacle from the processing object.
In one embodiment of the present application, after the difference pixels are obtained, position information of an article corresponding to the difference pixels may be obtained based on the difference pixels, and it is determined based on the position information that the article corresponding to the difference pixels is an obstacle or a processing object, so as to distinguish a workpiece needing to be processed from a spare workpiece placed in the storage area.
In one embodiment of the application, the preset image may be an image of an area in which the robot is located.
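The difference-pixel approach described above could, for example, be sketched as follows; the use of OpenCV, the blur kernel, the threshold, the minimum blob area and the idea of a storage-area rectangle are illustrative assumptions rather than details from the embodiments.

```python
import cv2
import numpy as np

def find_difference_blobs(environment_image, preset_image, min_area=200):
    """Return contours of regions where the environment image differs from the
    preset image of the environment area; small blobs are treated as noise."""
    diff = cv2.absdiff(cv2.cvtColor(environment_image, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(preset_image, cv2.COLOR_BGR2GRAY))
    diff = cv2.GaussianBlur(diff, (5, 5), 0)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

def classify_blob(contour, storage_area_rect):
    """Label a difference blob as a spare workpiece if its centroid lies in the
    storage-area rectangle (x, y, w, h); otherwise treat it as a candidate obstacle."""
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return "unknown"
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    x, y, w, h = storage_area_rect
    inside = x <= cx <= x + w and y <= cy <= y + h
    return "spare workpiece" if inside else "obstacle"
```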
In an embodiment of the present application, there may be a plurality of visual sensors, and an area image corresponding to each visual sensor may be generated based on environment data identified by each visual sensor of the plurality of visual sensors, and the area images corresponding to the plurality of visual sensors may be stitched to construct the environment image.
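One possible way to splice the per-sensor area images is OpenCV's generic image stitcher, shown below; this is only an assumed choice, since no particular stitching algorithm is prescribed above.

```python
import cv2

def build_environment_image(area_images):
    """Splice the area images from the individual vision sensors into one
    environment image using OpenCV's panorama stitcher."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, environment_image = stitcher.stitch(area_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return environment_image
```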
In one embodiment of the application, the position information and the contour information of the obstacle can be acquired, and based on the position information and the contour information, the distance of the robot relative to the outer surface of the obstacle can be determined, so that the obstacle distance can be determined more accurately.
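As an illustration of combining position and contour information, the sketch below approximates the obstacle distance as the minimum distance from a robot reference point to any point on the obstacle's outline; working in a 2-D plane is a simplifying assumption of the example.

```python
import numpy as np

def obstacle_distance(robot_position, obstacle_contour):
    """Distance from the robot to the nearest point on the obstacle's outer surface.

    `robot_position` is an (x, y) point and `obstacle_contour` an (N, 2) array
    of contour points expressed in the same coordinate frame.
    """
    contour = np.asarray(obstacle_contour, dtype=float).reshape(-1, 2)
    deltas = contour - np.asarray(robot_position, dtype=float)
    return float(np.min(np.linalg.norm(deltas, axis=1)))
```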
In one embodiment of the present application, the environment area may include a safe area, a warning area and a dangerous area, and it may be determined, based on the obstacle distance, whether the obstacle is in the safe area, the warning area or the dangerous area. If the obstacle is in the safe area, the robot is controlled to operate normally; if the obstacle is in the warning area, the robot is controlled to run at a reduced speed and a warning corresponding to the warning area is given; if the obstacle is in the dangerous area, the robot is controlled to stop running and a corresponding warning is given. Compared with setting only a dangerous area, in which case the robot would be stopped abruptly as soon as an obstacle is detected, the robot here decelerates before stopping, which prevents the damage an emergency stop can cause.
In one embodiment of the application, the robot itself is located in the dangerous area, the warning area surrounds the dangerous area, and the safe area lies outside the warning area. As an obstacle approaches the robot from the safe area, the robot changes from normal operation to reduced-speed operation and then gradually comes to a stop, so that damage caused by an emergency stop is avoided and the service life of the robot is prolonged.
In one embodiment of the application, the running speed of the robot can be controlled according to the obstacle distance between the obstacle and the robot, the running speed being positively correlated with the obstacle distance; as the obstacle approaches the robot, the running speed is gradually reduced, which can prolong the service life of the robot.
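The zone logic and the positive correlation between running speed and obstacle distance might be sketched as follows; the two radii and the linear speed ramp are illustrative values chosen for the example, not figures given above.

```python
def classify_zone(distance_m, danger_radius=0.5, warning_radius=1.5):
    """Map an obstacle distance to the dangerous, warning or safe area."""
    if distance_m <= danger_radius:
        return "danger"
    if distance_m <= warning_radius:
        return "warning"
    return "safe"

def running_speed(distance_m, normal_speed=1.0,
                  danger_radius=0.5, warning_radius=1.5):
    """Speed rises monotonically with obstacle distance (positive correlation):
    full stop inside the dangerous area, a linear ramp across the warning area,
    and normal speed in the safe area."""
    zone = classify_zone(distance_m, danger_radius, warning_radius)
    if zone == "danger":
        return 0.0
    if zone == "safe":
        return normal_speed
    fraction = (distance_m - danger_radius) / (warning_radius - danger_radius)
    return normal_speed * fraction
```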
In one embodiment of the application, the obstacle may be an object, an animal or a human.
In one embodiment of the present application, the present application provides a robot control system comprising a vision perception system, a control processing system and an execution system, the vision perception system being composed of vision sensors; the control processing system consists of a computer and comprises an image splicing program, an object identification program, a distance calculation program and an instruction control program; the execution system comprises a robot motor control system and a warning light control system, and the robot control system is used for executing the robot control method.
In one embodiment of the application, taking a multi-joint robot as an example, binocular vision sensors are mounted on the robot structure; mounting positions may include the base, each joint and the gripper. With appropriate position selection, coverage of the near range of the robot's motion space is achieved. When the robot moves, the vision sensors move with the arm body; the sensor modules complement one another, so no shielding occurs and no blind spots appear in the field of view.
In the embodiment, 3-6 groups of binocular embedded 3D vision sensors can be arranged on the base, are symmetrically distributed at intervals of 60-120 degrees, realize omnibearing surrounding environment observation, and feed back environmental data in a lower height range near a robot motion area to the control system in time.
In the embodiment, the first joint can be provided with 2-4 groups of binocular embedded 3D vision sensors which are separated by 90-180 degrees and symmetrically distributed at the front part and the rear part of the movement direction of the first joint, so that the observation of the surrounding environment without dead angles in the movement direction is realized, and the environment observation information in a medium height range near the movement area of the robot is timely fed back to the control system.
In the embodiment, 2-4 groups of binocular embedded 3D vision sensors can be arranged on the fourth joint, are separated by 90-180 degrees and are symmetrically distributed on two sides of the fourth joint, so that the observation of the environment without dead angles in the motion direction is realized, and the environment observation information in a higher height range near the motion area of the robot is fed back to the control system in time.
In this embodiment, the wrist structure in the sixth joint may be provided with 2-4 sets of binocular embedded 3D vision sensors, spaced 90 ° -180 ° apart, which are horizontally symmetrically distributed on both sides of the wrist structure. The dead-angle-free observation of the motion area of the execution end is realized, and the environment observation information on the motion path of the execution end is fed back to the control system in time.
In this embodiment, when the base, the first joint end, the fourth joint end and the sixth joint end are in a static state, the visual perception system achieves comprehensive coverage around the robot motion area. Because the visual sensors sit at different heights relative to the base, the system can cover a lower-height area A, a middle-height area B and a higher-height area C of the environment area where the robot is located, as shown in fig. 4, which schematically shows this division by height. The surrounding environment images and position information are transmitted to the computer in real time.
In this embodiment, when the base, the first joint end, the fourth joint end and the sixth joint end are in a motion state, the corresponding vision sensors also perform corresponding motions, and the range of the visual angles of the vision sensors is not affected. The vision perception system realizes the comprehensive coverage near the moving area of the robot and transmits the images and the position information of the surrounding environment to the computer in real time.
In this embodiment, the 3-6 sets of binocular embedded 3D vision sensors provided on the base of the robot, the 4 sets on the first joint, the 2-4 sets on the fourth joint and the 2-4 sets on the wrist structure of the sixth joint may have a camera frame rate of 60-120 FPS, a pixel size of 6.0 × 6.0 μm to 18.0 × 18.0 μm, a viewing angle of 120°-180°, a focal length of 2.95±5 mm to 5.95±5 mm, and a weight of 30-60 g.
The robot control method provided by the application can be applied to various robots. As shown by way of example in fig. 5, with this method a robot can achieve comprehensive coverage of its safe, warning and dangerous areas, which ensures the accuracy of the collected images and distance information, allows emergencies to be handled in time, protects the robot body and nearby personnel, and keeps the system stable. Fig. 5 schematically shows how the vision perception system of the application divides the environment area where the robot is located into a safe area, a warning area and a dangerous area.
The system comprises a plurality of binocular vision sensor modules, a control processing system that carries out object recognition, distance calculation and instruction changes, and an execution system that is driven by the output results.
In an embodiment of the present application, fig. 6 schematically shows an image observed by a visual sensor of the present application. As shown in fig. 6, after the present application is applied to an industrial robot body, a target object is observed simultaneously by multiple sets of binocular 3D visual sensors 601, and the observed images are transmitted to the computer in real time. Using the binocular three-dimensional imaging principle, the sets of binocular 3D visual sensors 601 observe cooperatively, capturing images of an object from different directions at the same time, and the spatial position and a model of the object are reconstructed from the image features. Each binocular 3D vision sensor 601 observes the environmental information within its field of view, captures approaching objects, and feeds back the contour information and spatial position information of the object to the control processing system in image form. All sets of binocular 3D vision sensors 601 work simultaneously, transmitting the different contour and position information of the same object seen from different angles to the control processing system in real time, which integrates this information and performs stitching, identification and calculation.
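The binocular three-dimensional imaging principle mentioned above rests on triangulation: depth equals focal length times baseline divided by disparity. The sketch below shows this relation with assumed example numbers that are not taken from the embodiments.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic binocular triangulation: depth = f * b / d, where f is the focal
    length in pixels, b the baseline between the two cameras in metres and d
    the disparity of the same feature between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# Assumed example: 700 px focal length, 6 cm baseline, 35 px disparity.
print(round(depth_from_disparity(35, 700, 0.06), 3))  # 1.2 metres
```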
In this embodiment, the working mode of cooperative observation of the multiple groups of binocular 3D vision sensors 601 realizes comprehensive coverage of the surrounding environment, improves the calibration accuracy, and ensures that the absolute positioning error is within an extremely low acceptable range.
In this embodiment, the binocular 3D vision sensor 601 may be the vision sensor 104 in fig. 2.
In one embodiment of the present application, fig. 7 schematically shows a control system schematic of the present application. As shown in fig. 7, the basic steps are: the visual perception system, composed of a plurality of sensor modules (sensor module 1 to sensor module n), acquires shape and position information from the images and transmits it to the control processing system; the image stitching program of the control processing system stitches the acquired images, and the object identification program judges whether the detected object is a person or another object; the distance calculation program computes the target distance and judges whether the target is in the safe area, the warning area or the dangerous area; the instruction control program of the control processing system then adjusts the robot motor and the warning light of the robot execution system.
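Putting these steps together, one control cycle might be sketched as follows; all objects and callables are hypothetical stand-ins for APIs that are not specified above, and the two radii are illustrative values.

```python
def control_cycle(sensor_modules, stitch, identify, distance_to,
                  robot, warning_light, preset_image,
                  normal_speed=1.0, danger_radius=0.5, warning_radius=1.5):
    """One pass of the control-processing loop of fig. 7, using hypothetical
    `robot` and `warning_light` objects and caller-supplied helper callables."""
    frames = [m.capture() for m in sensor_modules]        # visual perception system
    environment_image = stitch(frames)                    # image stitching program
    targets = identify(environment_image, preset_image)   # object identification program
    if not targets:
        robot.set_speed(normal_speed)
        warning_light.show("green")
        return
    # When several targets appear, the one closest to the working area dominates.
    d = min(distance_to(t) for t in targets)              # distance calculation program
    if d <= danger_radius:                                # dangerous area: stop, red warning
        robot.stop()
        warning_light.show("red")
    elif d <= warning_radius:                             # warning area: slow down, yellow warning
        robot.set_speed(normal_speed * (d - danger_radius) / (warning_radius - danger_radius))
        warning_light.show("yellow")
    else:                                                 # safe area: normal operation
        robot.set_speed(normal_speed)
        warning_light.show("green")
```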
In this embodiment, the system may gradually reduce the operating speed of the robot as a person approaches, until it stops completely; when the person leaves the monitored area, the robot automatically resumes operation. The monitored area changes in real time with the robot's motion trajectory, and because the robot is always kept in an enabled state without emergency braking, operation is resumed faster.
In this embodiment, the sensor modules providing all-round observation at the robot base, the sensor modules on the second and fourth joints detecting the movement direction, and the sensor modules at the robot wrist detecting the working target all monitor the surrounding environment and transmit image and position information to the control processing system in real time, which performs image stitching and calculation. If the stitching result shows an object approaching the robot workspace, the system first identifies whether the approaching object is a person or another object. If the distance calculation then shows that the object is in the safe area, the control processing system does not change the control instruction: the robot keeps running in its original state, the operating-state indicator light shows green, and monitoring of the surrounding environment continues.
In this embodiment, when the distance calculation shows that the object has entered the warning area, the control processing system changes the control instruction: the robot first reduces its motor running speed, the operating-state indicator light turns yellow, and a yellow warning light or warning sound reminds the person that they have entered the warning area. The binocular vision sensors keep tracking the person, and the closer the person gets to the robot, the slower the robot runs; when the distance calculation shows that the person has reached the dangerous area of the robot, the robot stops running completely, while the binocular vision sensors keep monitoring the surrounding environment.
In this embodiment, when the distance calculation shows that an object has entered the dangerous area, the control processing system changes the control instruction: the robot stops running, the operating-state light turns red, and a red warning light or alarm sound reminds the worker to leave the dangerous area as soon as possible, while the binocular vision sensors keep monitoring the surrounding environment.
In this embodiment, when the distance calculation shows that the object has left the dangerous area of the robot, the control processing system changes the control instruction: the operating-state light drops from the danger level to the warning level, the warning light changes from red to yellow, the robot slowly begins to resume operation, and the binocular vision sensors keep monitoring the surrounding environment.
In this embodiment, when the distance calculation shows that the object has left the warning area of the robot, the control processing system changes the control instruction: the operating-state light turns green, the warning light or warning sound stops, the robot resumes working at its normal movement speed, and the binocular vision sensors keep monitoring the surrounding environment.
The above steps are repeated cyclically, continuously observing the surrounding environment and feeding the data back to the control processing system in real time. When several targets appear at the same time, the object closest to the robot working area is taken as the main monitoring target and fed back to the control processing system for instruction modification.
In this embodiment, the plurality of binocular 3D vision sensors cooperate to cover the field of view near the robot working area comprehensively, ensuring the accuracy of the acquired image and distance information; the system handles emergencies in time and remains stable. The embedded binocular 3D vision perception system gives the robot higher-level capabilities such as recognizing objects, analyzing images and handling emergencies. It can detect personnel without contact, and as a person approaches, the system warns and changes the robot's movement speed, protecting both the robot body and the workers. The vision perception system greatly reduces the adverse effect of emergency stops on the production line and makes debugging and maintenance of the robot easier. The embedded binocular 3D vision perception technology can establish a virtual safe area; this area design saves space, allows personnel to enter and avoids emergency braking. The vision perception system divides the area around the robot into dangerous, warning and safe areas and performs corresponding control feedback in each area, helping the robot stay in an enabled state at all times and resume normal operation quickly. Binocular vision perception requires no contact, has a large sensing range and provides rich information, helping industrial robots enhance their environment perception capability.
Embodiments of the apparatus of the present application are described below, which may be used to implement the robot control method of the above-described embodiments of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the robot control method described above in the present application.
Fig. 8 schematically shows a block diagram of a robot control device according to an embodiment of the application.
Referring to fig. 8, a robot controller 800 according to an embodiment of the present application includes a determination module 801, an adjustment module 802, an acquisition module 803, and a control module 804.
In some embodiments of the application, based on the foregoing solution, the determining module 801 is configured to determine whether a visual recognition area of a visual sensor covers an environmental area where the robot is located, based on the visual sensor disposed on a joint of the robot; the adjusting module 802 is configured to adjust the joint until the visual recognition area covers the environment area if the visual recognition area does not cover the environment area; the obtaining module 803 is configured to obtain the environmental data in the visual recognition area after the visual recognition area covers the environmental area; the control module 804 is configured to control the robot based on the environmental data.
In some embodiments of the present application, based on the foregoing scheme, the determining module 801 is configured to: acquiring initial environment data in a visual identification area; determining whether the visual recognition area is occluded based on the initial environmental data; and determining whether the visual recognition area covers the environment area where the robot is located based on whether the visual recognition area is blocked.
In some embodiments of the present application, based on the foregoing, the joints are multiple, and the adjustment module 802 is configured to: the height of the plurality of joints relative to the base of the robot is adjusted until the visual recognition area covers the height direction of the environment area.
In some embodiments of the present application, based on the foregoing solution, the control module 804 is configured to: identifying an obstacle within the environmental area based on the environmental data; acquiring the obstacle distance between an obstacle and the robot; and controlling the robot to avoid the obstacle in the movement process based on the obstacle distance.
In some embodiments of the present application, based on the foregoing, the control module 804 is configured to: constructing an environment image of the environment area based on the environment data; acquiring difference pixels between an environment image and a preset image; based on the difference pixels, an obstacle within the environmental region is identified.
In some embodiments of the present application, based on the foregoing, the control module 804 is configured to: the vision sensor is provided with a plurality of vision sensors, and an area image corresponding to each vision sensor is generated based on the environment data identified by each vision sensor in the vision sensors; and splicing a plurality of area images corresponding to the plurality of vision sensors to construct an environment image.
In some embodiments of the present application, based on the foregoing, the control module 804 is configured to: acquiring position information and contour information of an obstacle; based on the position information and the contour information, an obstacle distance is determined.
In some embodiments of the present application, based on the foregoing, the environment area includes a safe area, a warning area and a dangerous area, and the control module 804 is configured to: determining that the obstacle is in a safe area, a warning area or a dangerous area based on the obstacle distance; if the obstacle is in the safe area, controlling the robot to normally operate; if the obstacle is in the warning area, controlling the robot to run at a reduced speed, and giving an alarm corresponding to the warning area; and if the obstacle is in the dangerous area, controlling the robot to stop running and giving an alarm corresponding to the dangerous area.
In some embodiments of the present application, based on the foregoing, the control module 804 is configured to: and controlling the running speed of the robot according to the obstacle distance between the obstacle and the robot, wherein the running speed and the obstacle distance form a positive correlation relationship.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module" or "system."
An electronic apparatus 90 according to this embodiment of the present application is described below with reference to fig. 9. The electronic device 90 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 9, the electronic device 90 is in the form of a general purpose computing device. The components of the electronic device 90 may include, but are not limited to: the at least one processing unit 91, the at least one memory unit 92, a bus 93 connecting different system components (including the memory unit 92 and the processing unit 91), and a display unit 94.
Wherein the storage unit stores program code executable by the processing unit 91 to cause the processing unit 91 to perform the steps according to various exemplary embodiments of the present application described in the section "example methods" above in this specification.
The storage unit 92 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 921 and/or a cache memory unit 922, and may further include a read only memory unit (ROM) 923.
Storage unit 92 may also include a program/utility 924 having a set (at least one) of program modules 925, such program modules 925 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 93 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 90 may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 90, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 90 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 95. Also, the electronic device 90 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via a network adapter 96. As shown, the network adapter 96 communicates with the other modules of the electronic device 90 via the bus 93. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 90, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present application.
According to an embodiment of the present application, there is also provided a computer-readable storage medium having a program product stored thereon, wherein the program product is capable of implementing the above-mentioned method of the present specification. In some possible embodiments, various aspects of the present application may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present application described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
According to one embodiment of the present application, a program product for implementing the above method may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present application, and are not intended to be limiting. It will be readily appreciated that the processes illustrated in the above figures are not intended to indicate or limit the temporal order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A robot control method, comprising:
acquiring initial environment data in a visual recognition area of a visual sensor based on the visual sensor arranged on a joint of a robot;
determining whether the visual recognition area is occluded based on the initial environmental data;
determining whether the visual identification area covers an environment area where the robot is located based on whether the visual identification area is blocked;
if the visual identification area does not cover the environment area, adjusting the joint until the visual identification area covers the environment area;
after the visual recognition area covers the environment area, acquiring environment data in the visual recognition area;
controlling the robot based on the environmental data.
2. The robot control method according to claim 1, wherein there are a plurality of joints, and the adjusting the joint until the visual recognition area covers the environment area in which the robot is located comprises:
adjusting the height of the plurality of joints relative to the base of the robot until the visual recognition area covers the environment area in the height direction.
3. The robot control method of claim 1, wherein the controlling the robot based on the environmental data comprises:
identifying an obstacle within the environment area based on the environment data;
acquiring an obstacle distance between the obstacle and the robot; and
controlling the robot to avoid the obstacle during movement based on the obstacle distance.
4. The robot control method of claim 3, wherein the identifying an obstacle within the environment area based on the environment data comprises:
constructing an environment image of the environment area based on the environment data;
acquiring difference pixels between the environment image and a preset image; and
identifying an obstacle within the environment area based on the difference pixels.
5. The robot control method of claim 4, wherein there are a plurality of the vision sensors, and the constructing an environment image of the environment area based on the environment data comprises:
generating, based on the environment data identified by each of the plurality of vision sensors, an area image corresponding to that vision sensor; and
stitching the plurality of area images corresponding to the plurality of vision sensors to construct the environment image.
6. The robot control method according to claim 3, wherein the acquiring an obstacle distance between the obstacle and the robot comprises:
acquiring position information and contour information of the obstacle; and
determining the obstacle distance based on the position information and the contour information.
7. The robot control method of claim 3, wherein the environment area comprises a safety area, a warning area, and a danger area, and the controlling the robot to avoid the obstacle during movement based on the obstacle distance comprises:
determining, based on the obstacle distance, whether the obstacle is in the safety area, the warning area, or the danger area;
if the obstacle is in the safety area, controlling the robot to operate normally;
if the obstacle is in the warning area, controlling the robot to run at a reduced speed and giving an alarm corresponding to the warning area; and
if the obstacle is in the danger area, controlling the robot to stop running and giving an alarm corresponding to the danger area.
8. The robot control method of claim 7, wherein the controlling the robot to run at a reduced speed comprises:
controlling the running speed of the robot according to the obstacle distance between the obstacle and the robot, wherein the running speed is positively correlated with the obstacle distance.
9. A robot control apparatus, comprising:
a determination module configured to acquire initial environment data in a visual recognition area of a visual sensor based on the visual sensor provided on a joint of a robot, determine whether the visual recognition area is occluded based on the initial environment data, and determine, based on whether the visual recognition area is occluded, whether the visual recognition area covers an environment area in which the robot is located;
an adjusting module configured to adjust the joint if the visual recognition area does not cover the environment area until the visual recognition area covers the environment area;
an acquisition module configured to acquire environment data in the visual recognition area after the visual recognition area covers the environment area; and
a control module configured to control the robot based on the environment data.
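To make the coverage check and joint adjustment of claims 1 and 2 concrete, here is a minimal Python sketch. It assumes a simplified model in which each joint-mounted camera contributes a vertical field of view that grows as the joint is raised; the names (JointCamera, adjust_joints_until_covered) and the linear field-of-view model are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch only: joint cameras are raised until their combined
# vertical field of view covers the environment and no camera is occluded.
from dataclasses import dataclass

@dataclass
class JointCamera:
    height: float          # camera height above the robot base, in metres
    vertical_fov: float    # vertical extent of the recognition area, in metres
    occluded: bool = False # True when the initial data shows the view is blocked

def recognition_area_covers(cameras, required_height):
    """Coverage test: no camera occluded and the summed vertical FOV spans the area."""
    if any(cam.occluded for cam in cameras):
        return False
    return sum(cam.vertical_fov for cam in cameras) >= required_height

def adjust_joints_until_covered(cameras, required_height, step=0.05, max_iter=100):
    """Raise the joints in small increments until coverage is achieved."""
    for _ in range(max_iter):
        if recognition_area_covers(cameras, required_height):
            return True
        for cam in cameras:
            cam.height += step
            cam.vertical_fov += step  # simplistic model: higher mount, wider view
            cam.occluded = False      # assume re-positioning clears the occlusion
    return False

if __name__ == "__main__":
    cams = [JointCamera(height=0.4, vertical_fov=0.6, occluded=True),
            JointCamera(height=0.8, vertical_fov=0.7)]
    print("covered:", adjust_joints_until_covered(cams, required_height=2.0))
```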
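Claims 4 and 5 describe building the environment image by stitching per-sensor area images and locating obstacles through difference pixels against a preset image. The following sketch shows one plausible reading of that pipeline using NumPy; the horizontal stitching, the grayscale images, and the fixed difference threshold are assumptions made for illustration.

```python
# Illustrative sketch: stitch area images, compare with a preset reference
# image of the empty scene, and mark pixels that differ as obstacle pixels.
import numpy as np

def build_environment_image(area_images):
    """Stitch per-camera area images horizontally into one environment image."""
    return np.hstack(area_images)

def find_obstacle_pixels(environment_image, preset_image, threshold=30):
    """Return a boolean mask of pixels that differ from the reference image."""
    diff = np.abs(environment_image.astype(np.int16) - preset_image.astype(np.int16))
    return diff > threshold

if __name__ == "__main__":
    # Two 4x4 grayscale area images from two cameras (values are made up).
    cam_a = np.full((4, 4), 100, dtype=np.uint8)
    cam_b = np.full((4, 4), 100, dtype=np.uint8)
    cam_b[1:3, 1:3] = 220                           # a bright object appears in camera B
    env = build_environment_image([cam_a, cam_b])
    preset = np.full((4, 8), 100, dtype=np.uint8)   # reference image of the empty scene
    mask = find_obstacle_pixels(env, preset)
    print("obstacle pixel count:", int(mask.sum()))
```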
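Claim 6 derives the obstacle distance from the obstacle's position and contour information. A simple interpretation is to take the shortest distance from the robot to any contour point, as in the sketch below; the 2-D floor-plane coordinates and the nearest-point definition are assumptions, since the claim does not fix a particular metric.

```python
# Illustrative sketch: obstacle distance as the gap between the robot's base
# position and the nearest point on the obstacle's contour.
import math

def obstacle_distance(robot_position, obstacle_contour):
    """Smallest Euclidean distance from the robot to the obstacle contour (metres)."""
    rx, ry = robot_position
    return min(math.hypot(px - rx, py - ry) for px, py in obstacle_contour)

if __name__ == "__main__":
    contour = [(1.0, 0.5), (1.2, 0.5), (1.2, 0.9), (1.0, 0.9)]  # rectangular obstacle
    print(round(obstacle_distance((0.0, 0.0), contour), 3))
```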
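Claims 7 and 8 partition the environment into safety, warning, and danger areas and require the running speed to remain positively correlated with the obstacle distance while the robot slows down. The sketch below illustrates one such policy with placeholder zone radii and a linear speed ramp; none of the numeric values come from the patent.

```python
# Illustrative sketch: map the obstacle distance to a zone, a commanded speed,
# and an alarm, with speed shrinking linearly inside the warning zone.
def control_action(distance, danger_radius=0.5, warning_radius=1.5, max_speed=1.0):
    """Map an obstacle distance (metres) to a (state, speed, alarm) tuple."""
    if distance <= danger_radius:
        return "danger", 0.0, "danger alarm"     # stop and raise the danger alarm
    if distance <= warning_radius:
        # Scale speed linearly between the danger and warning radii.
        ratio = (distance - danger_radius) / (warning_radius - danger_radius)
        return "warning", max_speed * ratio, "warning alarm"
    return "safe", max_speed, None               # normal operation

if __name__ == "__main__":
    for d in (0.3, 0.8, 1.2, 2.0):
        print(d, control_action(d))
```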
CN202010796376.3A 2020-08-10 2020-08-10 Robot control method and device Active CN114074320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010796376.3A CN114074320B (en) 2020-08-10 2020-08-10 Robot control method and device

Publications (2)

Publication Number Publication Date
CN114074320A CN114074320A (en) 2022-02-22
CN114074320B true CN114074320B (en) 2023-04-18

Family ID=80279960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010796376.3A Active CN114074320B (en) 2020-08-10 2020-08-10 Robot control method and device

Country Status (1)

Country Link
CN (1) CN114074320B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006057439A (en) * 2005-03-28 2006-03-02 Enzan Kobo:Kk Boom positioning control method for construction machinery
CN104647390A (en) * 2015-02-11 2015-05-27 清华大学 Multi-camera combined initiative object tracking method for teleoperation of mechanical arm
CN104688351A (en) * 2015-02-28 2015-06-10 华南理工大学 Non-blocking positioning method for surgical instrument based on two binocular vision systems
CN110547875A (en) * 2018-06-01 2019-12-10 上海舍成医疗器械有限公司 method and device for adjusting object posture and application of device in automation equipment
CN109760062A (en) * 2019-03-12 2019-05-17 潍坊学院 A kind of picking robot control system
CN109910011A (en) * 2019-03-29 2019-06-21 齐鲁工业大学 A kind of mechanical arm barrier-avoiding method and mechanical arm based on multisensor
CN110116410A (en) * 2019-05-28 2019-08-13 中国科学院自动化研究所 Mechanical arm target guiding system, the method for view-based access control model servo
CN110228066A (en) * 2019-05-29 2019-09-13 常州中铁科技有限公司 Tunnel detector and its avoidance unit and barrier-avoiding method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a Visual-Axis Planning Algorithm for Fire Reconnaissance Robots in Hazardous Environments; Liu Manlu et al.; Computer Engineering and Applications, No. 23, pp. 223-228 *

Also Published As

Publication number Publication date
CN114074320A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
US10994419B2 (en) Controlling a robot in the presence of a moving object
CN103419944B (en) Air bridge and automatic abutting method therefor
JP4396564B2 (en) Object monitoring method and motion tracker using the same
JP6545279B2 (en) Method and apparatus for monitoring a target trajectory to be followed by a vehicle as to whether a collision does not occur
JP2021500668A (en) Monitoring equipment, industrial equipment, monitoring methods and computer programs
CN111360818A (en) Mechanical arm control system through visual positioning
CN112706158B (en) Industrial man-machine interaction system and method based on vision and inertial navigation positioning
CN112066994B (en) Local autonomous navigation method and system for fire-fighting robot
US20200254610A1 (en) Industrial robot system and method for controlling an industrial robot
Tellaeche et al. Human robot interaction in industrial robotics. Examples from research centers to industry
CN108536142B (en) Industrial robot anti-collision early warning system and method based on digital grating projection
CN109202852A (en) A kind of intelligent inspection robot
CN114074320B (en) Robot control method and device
AU2020222504B2 (en) Situational awareness monitoring
JP2022548009A (en) object movement system
CN109917670B (en) Simultaneous positioning and mapping method for intelligent robot cluster
CN111736596A (en) Vehicle with gesture control function, gesture control method of vehicle, and storage medium
RU2685996C1 (en) Method and system for predictive avoidance of manipulator collision with human being
JP6885909B2 (en) Robot control device
CN113064425A (en) AGV equipment and navigation control method thereof
CN105892454A (en) Mobile intelligent security check robot
Suzuki et al. A vision system with wide field of view and collision alarms for teleoperation of mobile robots
CN115188091B (en) Unmanned aerial vehicle gridding inspection system and method integrating power transmission and transformation equipment
Sukumar et al. Augmented reality-based tele-robotic system architecture for on-site construction
KR101958247B1 (en) 3D Virtual Image Analysis System using Image Separation Image Tracking Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant