CN111687829A - Anti-collision control method, device, medium and terminal based on depth vision


Info

Publication number
CN111687829A
CN111687829A
Authority
CN
China
Prior art keywords
target object
collision
operable
mobile device
collision safety
Prior art date
Legal status
Granted
Application number
CN201910193711.8A
Other languages
Chinese (zh)
Other versions
CN111687829B (en)
Inventor
吴俊伟
何雪萦
梁志远
蔡伟
Current Assignee
Suzhou Chuangshi Intelligent Technology Co ltd
Original Assignee
Suzhou Chuangshi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Chuangshi Intelligent Technology Co ltd
Priority to CN201910193711.8A
Publication of CN111687829A
Application granted
Publication of CN111687829B
Status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666: Avoiding collision or forbidden zones
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The application provides an anti-collision control method, device, medium, and terminal based on depth vision. The method comprises the following steps: acquiring three-dimensional data of a target object and of an operating part of an operable mobile device; determining, from the three-dimensional data, a first anti-collision safety zone of the target object and a second anti-collision safety zone of the operating part; and establishing a boundary around the target object within the overlap area of the two safety zones, so that when the operable mobile device moves toward the target object and reaches the boundary, it adjusts its operating part to a posture facing the target object. By determining safety zones for the target object and the robotic arm, controlling the arm's travel speed, and creating a boundary at which the arm adjusts its gripping posture, the arm can move toward the target object with the palm of its gripper squarely facing it, effectively avoiding collisions with the target object.

Description

Anti-collision control method, device, medium and terminal based on depth vision
Technical Field
The present application relates to the field of control based on a vision system, and in particular, to an anti-collision control method, apparatus, medium, and terminal based on depth vision.
Background
A manipulator is an automatic operating device that can imitate certain motion functions of the human hand and arm to grasp and carry objects or operate tools according to a fixed program. It can complete various expected operations through programming, and in its construction and performance it combines the advantages of both humans and machines.
In the prior art, however, the manipulator often collides with the target object while performing tasks such as grasping and carrying objects or operating tools.
Summary of the application
In view of the above shortcomings of the prior art, the present application aims to provide an anti-collision control method, device, medium, and terminal based on depth vision, so as to solve the problem that existing manipulators often collide with the target object while grasping and carrying objects or operating tools.
To achieve the above and other related objects, a first aspect of the present application provides a depth-vision-based anti-collision control method, comprising: acquiring three-dimensional data of a target object and of an operating part of an operable mobile device; determining, according to the three-dimensional data, a first anti-collision safety zone of the target object and a second anti-collision safety zone of the operating part of the operable mobile device; and establishing a boundary around the target object within the overlap area of the first and second anti-collision safety zones, so that when the operable mobile device moves toward the target object and reaches the boundary, its operating part is adjusted to a posture facing the target object; wherein the overlap area covers positions at which the operable mobile device is close to, but does not touch, the target object.
In some embodiments of the first aspect of the present application, the method comprises: reducing the movement rate of the operable mobile device and/or increasing the depth vision sampling frequency when the operable mobile device, moving toward the target object, reaches the position region where the first and second anti-collision safety zones are tangent.
In some embodiments of the first aspect of the present application, determining the first anti-collision safety zone includes: determining the first anti-collision safety zone centered on the geometric center of the target object.
In some embodiments of the first aspect of the present application, the operable mobile device comprises a mobile robotic arm; the mobile robotic arm includes a manipulator grip for grasping the target object.
In some embodiments of the first aspect of the present application, determining the second anti-collision safety zone includes: determining the second anti-collision safety zone centered on the palm of the manipulator grip.
In some embodiments of the first aspect of the present application, when the mobile robotic arm moves toward the target object and reaches the boundary, the grip of the mobile robotic arm is adjusted to a posture in which its palm faces the target object.
To achieve the above and other related objects, a second aspect of the present application provides an operable mobile device, comprising: a depth vision module for acquiring three-dimensional data of a target object and of an operating part of the operable mobile device; and a control module for determining, based on the three-dimensional data of the target object and of the operable mobile device, a first anti-collision safety zone of the target object and a second anti-collision safety zone of the operating part, and for establishing a boundary around the target object within the overlap area of the first and second anti-collision safety zones, so that the operable mobile device adjusts its operating part to a posture facing the target object when it moves toward the target object and reaches the boundary; wherein the overlap area covers positions at which the operable mobile device is close to, but does not touch, the target object.
In some embodiments of the second aspect of the present application, the depth vision module comprises a depth camera; the depth camera performs object detection based on point features to acquire the three-dimensional data of the target object and of the operating part of the operable mobile device.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the depth-vision-based anti-collision control method.
To achieve the above and other related objects, a fourth aspect of the present application provides an electronic terminal, comprising: a processor and a memory; the memory is used to store a computer program, and the processor is used to execute the computer program stored in the memory so that the terminal performs the depth-vision-based anti-collision control method.
As described above, the anti-collision control method, device, medium, and terminal based on depth vision of the present application have the following beneficial effects: by determining safety zones for the target object and the robotic arm, controlling the arm's travel speed, and creating a boundary at which the arm adjusts its gripping posture, the arm can move toward the target object with the palm of its gripper squarely facing it, effectively avoiding collisions with the target object.
Drawings
Fig. 1a is a schematic diagram illustrating an application scenario of an operable mobile device according to an embodiment of the present application.
Fig. 1b is a schematic diagram illustrating an application scenario of an operable mobile device according to an embodiment of the present application.
Fig. 1c is a schematic diagram illustrating an application scenario of an operable mobile device according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a method for controlling collision avoidance based on depth vision according to an embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a method for controlling collision avoidance based on depth vision according to an embodiment of the present application.
Fig. 4 is a block diagram of an operable mobile device according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic terminal according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application by way of specific examples; other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various respects without departing from the spirit of the present application. Note that, in the absence of conflict, the features of the following embodiments and examples may be combined with each other.
It is noted that the following description refers to the accompanying drawings, which illustrate several embodiments of the present application. It is to be understood that other embodiments may be used, and that mechanical, structural, electrical, and operational changes may be made, without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for describing particular embodiments only and is not intended to limit the application. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and the like, may be used herein to describe one element or feature's relationship to another element or feature as illustrated in the figures.
In this application, unless expressly stated or limited otherwise, the terms "mounted," "connected," "secured," "retained," and the like are to be construed broadly: for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be mechanical or electrical; and it may be direct, indirect through an intervening medium, or internal between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the particular circumstances.
Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition occurs only when a combination of elements, functions, or operations is inherently mutually exclusive in some way.
A manipulator is an automatic operating device that can imitate certain motion functions of the human hand and arm to grasp and carry objects or operate tools according to a fixed program. It can complete various expected operations through programming, and in its construction and performance it combines the advantages of both humans and machines. However, the conventional manipulator often collides with the target object while grasping and carrying objects or operating tools.
In view of the above problems in the prior art, the present application provides an anti-collision control method, device, medium, and terminal based on depth vision: safety zones are determined for the target object and the robotic arm, the travel speed of the arm is controlled, and a boundary is created at which the arm adjusts its gripping posture, so that the arm can move toward the target object with the palm of its gripper squarely facing it, effectively avoiding collisions with the target object.
For the understanding of those skilled in the art, the technical solution of the present application will now be described in detail with reference to figs. 1a to 1c. Fig. 1a shows an application scenario of the present application in one embodiment, in which the target object 11 is a beverage bottle placed on a table, and the operable mobile device 12 comprises a robotic arm 13 and a manipulator grip 14. The grip 14 is attached to the end of the robotic arm 13, and its posture can be adjusted to better suit grasping the target object 11. The operable mobile device 12 is also internally equipped with a depth camera (not shown) for acquiring three-dimensional data, with depth information, of a photographed subject.
In some embodiments of the present application, as shown in fig. 1b, a first anti-collision safety zone 15 is established centered on the geometric center of the target object 11, and a second anti-collision safety zone 16 is established centered on the palm of the manipulator grip 14. As can be seen from figs. 1a and 1b, while the first anti-collision safety zone 15 and the second anti-collision safety zone 16 do not overlap, the operable mobile device 12 may move rapidly toward the target object 11. When the first and second anti-collision safety zones 15, 16 become tangent, the operable mobile device 12 reduces its movement rate and/or raises the sampling frequency of the depth camera.
In some embodiments of the present application, as shown in fig. 1c, a boundary 17 is established around the target object 11. As can be understood from figs. 1a to 1c, when the operable mobile device 12 continues toward the target object 11 and reaches the boundary 17, the grip 14 is adjusted to a posture in which its palm faces the target object 11 and then moves slowly toward the target object 11, so that the target object 11 can be grasped smoothly and without collision.
In the above, the application scenario of the present application in one embodiment is described in detail. Hereinafter, the technical solution of the present application will be further explained in conjunction with a depth vision-based anti-collision control method.
Fig. 2 is a schematic flow chart illustrating a depth vision-based anti-collision control method according to an embodiment of the present application.
In some embodiments, the anti-collision control method may be applied to a controller, for example an ARM controller, an FPGA controller, an SoC controller, a DSP controller, or an MCU controller. In some embodiments, the method is also applicable to a computer including components such as memory, a memory controller, one or more processing units (CPUs), a peripheral interface, RF circuitry, audio circuitry, a speaker, a microphone, an input/output (I/O) subsystem, a display screen, other output or control devices, and external ports; such computers include, but are not limited to, personal computers such as desktop computers, notebook computers, and tablet computers, as well as smartphones, smart televisions, personal digital assistants (PDAs), and the like. In other embodiments, the method may be applied to a server, which may be arranged on one or more physical servers according to factors such as function and load, or may be formed by a distributed or centralized server cluster.
In this embodiment, the depth vision-based anti-collision control method includes steps S21, S22, and S23.
In step S21, three-dimensional data of the target object and the operation portion of the operable mobile device is acquired.
In some embodiments of the present application, the three-dimensional data of the target object and of the operating part of the operable mobile device are acquired using a depth camera. In addition to a planar image of the photographed subject, the depth camera obtains the subject's depth information, that is, three-dimensional position and size information, thereby yielding three-dimensional data of the target object and of the environment surrounding the operable mobile device.
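For illustration only, the following is a minimal sketch of how a depth image might be back-projected into a point cloud under a pinhole camera model; the application does not fix the camera model, and the intrinsics fx, fy, cx, cy are assumed parameters rather than values taken from the application.
```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud
    using a pinhole camera model; fx, fy are focal lengths and cx, cy
    the principal point, all in pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # offset from the principal point, scaled by depth
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth
```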
Optionally, the depth camera performs object detection based on point features in order to acquire the three-dimensional data of the target object and of the operating part of the operable mobile device. Point-feature-based detection uses point features for operations such as image registration and matching, target description and recognition, bundle adjustment, moving-target tracking and recognition, and 3D modeling from stereo images.
Object detection based on point features can typically find hundreds of point features in an image, and because point features are local features, such detection is relatively robust. Point-feature-based detection also offers good discrimination, since points on different objects are easy to tell apart.
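As one plausible realization (the application does not specify the detector), point features can be extracted and matched with OpenCV's ORB detector; the function below, with its hypothetical image arguments and match threshold, is only a sketch of locating a known target by feature matching.
```python
import cv2

def detect_target(template_gray, scene_gray, min_matches=20):
    """Locate a known target in a grayscale scene image by matching ORB
    point features; returns a found flag and the matched scene points."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return False, []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)
    found = len(matches) >= min_matches
    return found, [kp_s[m.trainIdx].pt for m in matches[:min_matches]]
```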
In step S22, a first anti-collision safety region of the target object and a second anti-collision safety region of the operation section of the operable moving device are determined based on the three-dimensional data.
In some embodiments of the present application, the first anti-collision safety zone is determined centered on the geometric center of the target object. It should be noted that the first anti-collision safety zone may be a region whose shape follows the geometry of the target object, or a circular, triangular, square, or irregularly shaped region; this application places no limit on the shape.
In some embodiments of the present application, the second anti-collision safety zone is determined centered on the palm of the manipulator grip. Likewise, the second anti-collision safety zone may be a region whose shape follows the geometry of the grip, or a circular, triangular, square, or irregularly shaped region; this application places no limit on the shape.
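Since the application leaves the zone shape open, one simple assumed realization is a bounding sphere around each point cloud, centered on its geometric center; the safety margin below is an illustrative parameter, not a value from the application.
```python
import numpy as np

def bounding_sphere(points, margin=0.05):
    """Approximate an anti-collision safety zone as a sphere: centered
    on the geometric center of the point cloud, with radius equal to
    the farthest point plus a safety margin (meters)."""
    center = points.mean(axis=0)  # geometric center of the cloud
    radius = np.linalg.norm(points - center, axis=1).max() + margin
    return center, radius

# target_zone  = bounding_sphere(target_points)   # first safety zone
# gripper_zone = bounding_sphere(palm_points)     # second safety zone
```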
In step S23, a boundary is established around the target object within the overlap area of the first and second anti-collision safety zones, so that the operable mobile device, when moving toward the target object and reaching the boundary, adjusts its operating part to a posture facing the target object. The overlap area described in this application covers both the case where the first and second anti-collision safety zones intersect and the case where they are tangent.
Because the operable mobile device reaches the boundary while it is close to, but not yet touching, the target object, it can adjust the posture of its grip in time, so that the palm squarely faces the target object before contact. This ensures that the grip touches the target object in the correct posture and can smoothly carry out operation tasks such as grasping, pressing, or carrying.
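A sketch of the boundary test and the facing posture, under the assumption that the boundary is a sphere of a given radius around the target's center (how that radius is chosen inside the overlap area is left open by the application, so it appears here as a plain parameter):
```python
import numpy as np

def reached_boundary(palm_center, target_center, boundary_radius):
    """True once the palm center has crossed the assumed spherical
    boundary around the target; this is the trigger for the
    palm-facing-target posture adjustment."""
    return np.linalg.norm(palm_center - target_center) <= boundary_radius

def facing_direction(palm_center, target_center):
    """Unit vector the palm should face so it is square to the target."""
    d = target_center - palm_center
    return d / np.linalg.norm(d)
```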
As shown in fig. 3, a flowchart of a depth vision-based anti-collision control method in another embodiment of the present application is shown. In this embodiment, the anti-collision control method includes steps S31, S32, S33, and S34.
In step S31, three-dimensional data of the target object and the operation portion of the operable mobile device is acquired.
In step S32, a first anti-collision safety region of the target object and a second anti-collision safety region of the operation section of the operable moving device are determined based on the three-dimensional data.
Step S31 and step S32 in this embodiment are similar to step S21 and step S22 in the previous embodiment, and therefore are not described again.
In step S33, when the operable mobile device, moving toward the target object, reaches the position region where the first and second anti-collision safety zones are tangent, the movement rate of the operable mobile device is reduced and/or the depth vision sampling frequency is increased.
While the first and second anti-collision safety zones are separate, the operable mobile device typically moves at a higher rate, which improves the efficiency with which it performs its task. However, the closer the operable mobile device gets to the target object, the higher the probability of colliding with it.
This embodiment therefore sets a region in which the movement rate and/or the sampling frequency of the depth camera are adjusted. That is, when the operable mobile device, moving toward the target object, reaches the position region where the first and second anti-collision safety zones are tangent, it may reduce its movement rate so that the robotic arm approaches the target object more slowly, raise the sampling frequency of the depth camera to improve the efficiency of image recognition, or do both; this application places no limit on the choice.
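Under the spherical-zone assumption above, tangency can be detected by comparing the distance between the zone centers with the sum of the radii; the sketch below classifies the relation and picks a movement rate and sampling frequency, with all numeric values purely illustrative.
```python
import numpy as np

def zone_relation(c1, r1, c2, r2, tol=0.01):
    """Classify two spherical safety zones as separate, tangent, or
    intersecting by comparing center distance with the radius sum."""
    d = np.linalg.norm(c1 - c2)
    if d > r1 + r2 + tol:
        return "separate"
    if abs(d - (r1 + r2)) <= tol:
        return "tangent"
    return "intersect"

def adjust_motion(relation, fast=0.5, slow=0.1, base_hz=15, high_hz=30):
    """Pick a movement rate (m/s) and a depth sampling frequency (Hz):
    move fast while separate, slow down and sample faster once the
    zones touch. All values are assumptions for illustration."""
    if relation == "separate":
        return fast, base_hz
    return slow, high_hz
```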
In step S34, a boundary is established around the target object within the overlap area of the first and second anti-collision safety zones, so that the operable mobile device, when moving toward the target object and reaching the boundary, adjusts its operating part to a posture facing the target object; wherein the overlap area covers positions at which the operable mobile device is close to, but does not touch, the target object.
It should be noted that the overlap area described in this application covers both the case where the first and second anti-collision safety zones intersect and the case where they are tangent.
Where the boundary lies in the intersection region of the first and second anti-collision safety zones, the operable mobile device reduces its movement rate and/or raises the sampling frequency of the depth camera when, moving toward the target object, it reaches the position region where the two safety zones are tangent; it then adjusts the posture of the manipulator grip so that the palm faces the target object while continuing to move toward the boundary.
Where the boundary lies in the tangent region of the first and second anti-collision safety zones, the operable mobile device reduces its movement rate and/or raises the sampling frequency of the depth camera when, moving toward the target object, it reaches the position region where the two safety zones are tangent, and at the same time adjusts the posture of the manipulator grip so that the palm faces the target object.
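Putting the pieces together, one hedged control-loop iteration might look as follows; `device` is a hypothetical handle (its `sense`, `set_rate`, `set_sampling`, `face_palm_toward`, and `move_toward` methods are assumptions, not part of the application), and the helper functions are those sketched above.
```python
def control_step(device, boundary_radius):
    """One illustrative iteration of the overall loop, combining the
    helper sketches above; not the application's own implementation."""
    target_pts, palm_pts = device.sense()        # depth vision data (S31)
    c1, r1 = bounding_sphere(target_pts)         # first safety zone (S32)
    c2, r2 = bounding_sphere(palm_pts)           # second safety zone
    relation = zone_relation(c1, r1, c2, r2)
    rate, hz = adjust_motion(relation)           # slow down / sample faster (S33)
    device.set_rate(rate)
    device.set_sampling(hz)
    if relation != "separate" and reached_boundary(c2, c1, boundary_radius):
        device.face_palm_toward(facing_direction(c2, c1))  # posture at boundary (S34)
    device.move_toward(c1)
```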
Those of ordinary skill in the art will understand that all or part of the steps implementing the above method embodiments may be completed by hardware associated with a computer program. The computer program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
Fig. 4 is a block diagram of an operable mobile device according to an embodiment of the present application. The operable mobile device comprises: a depth vision module 41 and a control module 42.
The depth vision module 41 is used to acquire three-dimensional data of a target object and of an operating part of the operable mobile device. The control module 42 is used to determine, from the three-dimensional data of the target object and of the operable mobile device, a first anti-collision safety zone of the target object and a second anti-collision safety zone of the operating part, and to establish a boundary around the target object within the overlap area of the two safety zones, so that the operable mobile device can adjust its operating part to a posture facing the target object when it moves toward the target object and reaches the boundary; the overlap area covers positions at which the operable mobile device is close to, but does not touch, the target object.
It should be noted that the operable mobile device provided in this embodiment is implemented similarly to the depth-vision-based anti-collision control method described above, so the details are not repeated here. Also note that the division of the above apparatus into modules is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity, or physically separated. The modules may all be implemented as software invoked by a processing element, all as hardware, or partly as each: for example, the control module may be a separately arranged processing element, may be integrated into a chip of the apparatus, or may be stored in the memory of the apparatus as program code whose function is called and executed by a processing element. The other modules are implemented similarly. All or some of the modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal-processing capability; in implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit in the processor hardware or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above method, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). As another example, when one of the above modules is implemented as program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented as a system-on-a-chip (SoC).
Fig. 5 is a schematic structural diagram of an electronic terminal according to an embodiment of the present application. The electronic terminal provided in this example comprises: a processor 51, a memory 52, a transceiver 53, a communication interface 54, and a system bus 55. The memory 52 and the communication interface 54 are connected to the processor 51 and the transceiver 53 through the system bus 55 and communicate with one another; the memory 52 is used to store a computer program, the communication interface 54 and the transceiver 53 are used to communicate with other devices, and the processor 51 is used to run the computer program so that the electronic terminal executes the steps of the depth-vision-based anti-collision control method.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface is used to enable communication between the database access device and other equipment (such as a client, a read-write library, and a read-only library). The memory may comprise random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In summary, the present application provides an anti-collision control method, device, medium, and terminal based on depth vision, which determine safety zones for the target object and the robotic arm, control the travel speed of the arm, and create a boundary at which the arm adjusts its gripping posture, so that the arm can move toward the target object with the palm of its gripper squarely facing it, effectively avoiding collisions with the target object. The application thus effectively overcomes various defects in the prior art and has high industrial value.
The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.

Claims (10)

1. An anti-collision control method based on depth vision, characterized by comprising the following steps:
acquiring three-dimensional data of a target object and an operation part of an operable mobile device;
determining a first anti-collision safety zone of the target object and a second anti-collision safety zone of an operating part of the operable mobile device according to the three-dimensional data;
establishing a boundary around the target object within the overlap area of the first anti-collision safety zone and the second anti-collision safety zone, so that when the operable mobile device moves toward the target object and reaches the boundary, its operating part is adjusted to a posture facing the target object; wherein the overlap area covers positions at which the operable mobile device is close to, but does not touch, the target object.
2. The depth vision based collision avoidance control method of claim 1, wherein the method comprises:
reducing the movement rate of the operable mobile device and/or increasing the depth vision sampling frequency when the operable mobile device, moving toward the target object, reaches the position region where the first and second anti-collision safety zones are tangent.
3. The depth vision-based collision avoidance control method according to claim 1, wherein the first collision avoidance safety region is determined by: a first anti-collision safety zone is determined centered on a geometric center of the target object.
4. The depth vision based collision avoidance control method of claim 1, wherein the type of the operable mobile device comprises a mobile robotic arm; the mobile robotic arm includes a manipulator grip for grasping a target object.
5. The depth vision-based collision avoidance control method according to claim 4, wherein the second anti-collision safety zone is determined by: determining the second anti-collision safety zone centered on the palm of the manipulator grip.
6. The depth vision-based anti-collision control method according to claim 4, wherein when the mobile robotic arm moves toward the target object and reaches the boundary, the manipulator grip is adjusted to a posture in which its palm faces the target object.
7. An operable mobile device, comprising:
a depth vision module for acquiring three-dimensional data of a target object and an operation part of an operable mobile device;
a control module, configured to determine a first anti-collision safety zone of the target object and a second anti-collision safety zone of an operating part of the operable moving device according to the three-dimensional data of the target object and the operable moving device, and further configured to establish a boundary around the target object within an overlapping area of the first anti-collision safety zone and the second anti-collision safety zone, so that the operable moving device adjusts an operating part of the operable moving device to a posture facing the target object when moving to the boundary in a direction approaching the target object; wherein the overlapping region comprises an area of the operable mobile device that is close to but does not touch a target object.
8. The operable mobile device of claim 7, wherein the depth vision module comprises a depth camera; the depth camera performs object detection based on the point features to acquire three-dimensional data of a target object and an operation portion of an operable mobile device.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the depth vision-based collision avoidance control method according to any one of claims 1 to 6.
10. An electronic terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the memory-stored computer program to cause the terminal to perform the depth vision-based collision avoidance control method of any one of claims 1 to 6.
CN201910193711.8A 2019-03-14 2019-03-14 Anti-collision control method, device, medium and terminal based on depth vision Active CN111687829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910193711.8A CN111687829B (en) 2019-03-14 2019-03-14 Anti-collision control method, device, medium and terminal based on depth vision


Publications (2)

Publication Number Publication Date
CN111687829A 2020-09-22
CN111687829B 2023-10-20

Family

Family ID: 72474478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910193711.8A Active CN111687829B (en) 2019-03-14 2019-03-14 Anti-collision control method, device, medium and terminal based on depth vision

Country Status (1)

CN: CN111687829B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104870147A (en) * 2012-08-31 2015-08-26 睿信科机器人有限公司 Systems and methods for safe robot operation
CN103559711A (en) * 2013-11-05 2014-02-05 余洪山 Motion estimation method based on image features and three-dimensional information of three-dimensional visual system
US20180141213A1 (en) * 2016-11-24 2018-05-24 Institute For Information Industry Anti-collision system and anti-collision method
CN107803831A (en) * 2017-09-27 2018-03-16 杭州新松机器人自动化有限公司 A kind of AOAAE bounding volume hierarchy (BVH)s collision checking method
CN107932560A (en) * 2017-11-14 2018-04-20 上海交通大学 A kind of man-machine safety guard system and means of defence
CN108247637A (en) * 2018-01-24 2018-07-06 中南大学 A kind of industrial machine human arm vision anticollision control method
CN108858251A (en) * 2018-08-30 2018-11-23 东北大学 A kind of collision avoidance system of high-speed motion manipulator

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117681214A (en) * 2024-02-04 2024-03-12 泓浒(苏州)半导体科技有限公司 Wafer transfer-based multi-mechanical arm collision early warning method and system
CN117681214B (en) * 2024-02-04 2024-04-12 泓浒(苏州)半导体科技有限公司 Wafer transfer-based multi-mechanical arm collision early warning method and system

Also Published As

Publication number Publication date
CN111687829B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
Fujita et al. What are the important technologies for bin picking? Technology analysis of robots in competitions based on a set of performance metrics
CN112060087B (en) Point cloud collision detection method for robot to grab scene
CN111144426B (en) Sorting method, sorting device, sorting equipment and storage medium
Wang et al. Robot manipulator self-identification for surrounding obstacle detection
CN107138432B (en) Method and apparatus for sorting non-rigid objects
Sayour et al. Autonomous robotic manipulation: Real-time, deep-learning approach for grasping of unknown objects
Kaipa et al. Addressing perception uncertainty induced failure modes in robotic bin-picking
Sanz et al. Vision-guided grasping of unknown objects for service robots
US11833692B2 (en) Method and device for controlling arm of robot
CN108801255B (en) Method, device and system for avoiding robot collision
JP2014161965A (en) Article takeout device
TWI748409B (en) Data processing method, processor, electronic device and computer readable medium
CN112828892B (en) Workpiece grabbing method and device, computer equipment and storage medium
JP2021066011A5 (en)
Prezas et al. AI-enhanced vision system for dispensing process monitoring and quality control in manufacturing of large parts
CN111687829B (en) Anti-collision control method, device, medium and terminal based on depth vision
CN113172636B (en) Automatic hand-eye calibration method and device and storage medium
WO2022173468A1 (en) Extensible underconstrained robotic motion planning
Teke et al. Real-time and robust collaborative robot motion control with Microsoft Kinect® v2
Sahu et al. Shape features for image-based servo-control using image moments
CN115284279A (en) Mechanical arm grabbing method and device based on aliasing workpiece and readable medium
Bhuyan et al. Structure‐aware multiple salient region detection and localization for autonomous robotic manipulation
CN117226854B (en) Method and device for executing clamping task, storage medium and electronic equipment
CN117961912A (en) Mechanical arm control method and device, electronic equipment and medium
Kozyr et al. Algorithm for Determining Target Point of Manipulator for Grasping an Object Using Combined Sensing Means

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant