CN111687829B - Anti-collision control method, device, medium and terminal based on depth vision - Google Patents

Anti-collision control method, device, medium and terminal based on depth vision

Info

Publication number
CN111687829B
CN111687829B (application CN201910193711.8A)
Authority
CN
China
Prior art keywords
target object
operable
collision
collision safety
safety area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910193711.8A
Other languages
Chinese (zh)
Other versions
CN111687829A (en)
Inventor
吴俊伟
何雪萦
梁志远
蔡伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Chuangshi Intelligent Technology Co ltd
Original Assignee
Suzhou Chuangshi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Chuangshi Intelligent Technology Co ltd filed Critical Suzhou Chuangshi Intelligent Technology Co ltd
Priority to CN201910193711.8A priority Critical patent/CN111687829B/en
Publication of CN111687829A publication Critical patent/CN111687829A/en
Application granted granted Critical
Publication of CN111687829B publication Critical patent/CN111687829B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Abstract

The application provides an anti-collision control method, device, medium and terminal based on depth vision. The method comprises the following steps: acquiring three-dimensional data of a target object and of an operation part of an operable mobile device; determining, from the three-dimensional data, a first anti-collision safety area for the target object and a second anti-collision safety area for the operation part of the operable mobile device; and establishing a boundary line surrounding the target object in the overlapping area of the first and second anti-collision safety areas, so that when the operable moving device moves toward the target object and reaches the boundary line, its operating part is adjusted to a posture facing the target object. By determining safety areas for the target object and the mechanical arm, controlling the travelling speed of the mechanical arm, and creating a boundary line at which the mechanical arm adjusts its grasping posture, the application lets the mechanical arm approach the target object with its palm facing the object, effectively solving the problem of collision with the target object.

Description

Anti-collision control method, device, medium and terminal based on depth vision
Technical Field
The application relates to the field of control based on a visual system, in particular to an anti-collision control method, device, medium and terminal based on depth vision.
Background
A manipulator is an automatic operating device that mimics certain motion functions of the human hand and arm to grasp and carry objects or operate tools according to a fixed program. It can complete a variety of expected operations through programming and combines, in its construction and performance, the advantages of both the human operator and a machine.
However, in the prior art, the manipulator often collides with the target object while grabbing or carrying objects or operating tools.
Summary of the application
In view of the above-mentioned drawbacks of the prior art, an object of the present application is to provide a depth vision-based anti-collision control method, device, medium, and terminal for solving the problem that the existing manipulator frequently collides with the target object during the task of grabbing, carrying the object, or operating the tool.
To achieve the above and other related objects, a first aspect of the present application provides a depth vision-based collision avoidance control method, comprising: acquiring three-dimensional data of a target object and an operation part of an operable mobile device; determining a first anti-collision safety area of the target object and a second anti-collision safety area of an operation part of the operable mobile device according to the three-dimensional data; establishing a boundary line surrounding the target object in an overlapping area of the first anti-collision safety area and the second anti-collision safety area, so that when the operable moving device moves to the boundary line towards a direction approaching the target object, the operating part of the operable moving device is adjusted to a posture opposite to the target object; wherein the overlapping region includes an area where the operable mobile device is near but not touching the target object.
In some embodiments of the first aspect of the present application, the method comprises: when the operable moving device, moving in a direction approaching the target object, reaches the position area where the first anti-collision safety area and the second anti-collision safety area are tangent, reducing the moving speed of the operable moving device and/or increasing the depth vision sampling frequency.
In some embodiments of the first aspect of the present application, the determining manner of the first anti-collision safety area includes: and determining a first anti-collision safety area by taking the geometric center of the target object as the center.
In some embodiments of the first aspect of the application, the type of operable mobile device comprises a mobile robotic arm; the mobile robot arm includes a robot gripper for gripping a target object.
In some embodiments of the first aspect of the present application, the determining manner of the second anti-collision safety area includes: and determining a second anti-collision safety area by taking the palm of the manipulator as the center.
In some embodiments of the first aspect of the present application, when the mobile mechanical arm moves to the boundary line in a direction approaching the target object, the manipulator is adjusted to a posture in which its palm faces the target object.
To achieve the above and other related objects, a second aspect of the present application provides an operable mobile apparatus comprising: a depth vision module for acquiring three-dimensional data of the target object and an operation part of the operable mobile device; a control module for determining a first anti-collision safety area of a target object and a second anti-collision safety area of an operation part of the operable mobile device based on three-dimensional data of the target object and the operable mobile device, and for establishing a boundary line around the target object in an overlapping area of the first anti-collision safety area and the second anti-collision safety area so that the operable mobile device adjusts the operation part thereof to a posture facing the target object when moving to the boundary line in a direction approaching the target object; wherein the overlapping region includes an area where the operable mobile device is near but not touching the target object.
In some implementations of the second aspect of the application, the depth vision module includes a depth camera; the depth camera performs object detection based on the point features to acquire three-dimensional data of a target object and an operation portion of the operable mobile device.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the depth vision-based collision avoidance control method.
To achieve the above and other related objects, the present application provides an electronic terminal comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory so as to enable the terminal to execute the anti-collision control method based on the depth vision.
As described above, the anti-collision control method, device, medium and terminal based on depth vision have the following beneficial effects: the application determines safety areas for the target object and the mechanical arm, controls the travelling speed of the mechanical arm, and creates a boundary line at which the mechanical arm adjusts its grasping posture, so that the mechanical arm moves toward the target object with its palm facing the object, effectively solving the problem of collision with the target object.
Drawings
Fig. 1a is a schematic diagram of an application scenario of an operable mobile device according to an embodiment of the application.
Fig. 1b is a schematic diagram of an application scenario of an operable mobile device according to an embodiment of the application.
Fig. 1c is a schematic diagram of an application scenario of an operable mobile device according to an embodiment of the application.
Fig. 2 is a schematic flow chart of an anti-collision control method based on depth vision according to an embodiment of the application.
Fig. 3 is a schematic flow chart of an anti-collision control method based on depth vision according to an embodiment of the application.
Fig. 4 is a schematic block diagram of an operable mobile apparatus according to an embodiment of the application.
Fig. 5 is a schematic structural diagram of an electronic terminal according to an embodiment of the application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the following disclosure, which describes embodiments of the present application with reference to specific examples. The application may also be practiced or carried out in other embodiments, and the details of the present description may be modified or varied without departing from the spirit and scope of the present application. It should be noted that the following embodiments, and the features within them, may be combined with each other when no conflict arises.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," "lower," "upper," and the like, may be used herein to facilitate a description of one element or feature as illustrated in the figures as being related to another element or feature.
In the present application, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," "held," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition will occur only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
A manipulator is an automatic operating device that mimics certain motion functions of the human hand and arm to grasp and carry objects or operate tools according to a fixed program. It can complete a variety of expected operations through programming and combines, in its construction and performance, the advantages of both the human operator and a machine. However, a conventional manipulator often collides with the target object while grabbing or carrying objects or operating tools.
In view of the above-mentioned problems in the prior art, the main idea of the present application is to provide a depth vision-based anti-collision control method, device, medium, and terminal that determine safety areas for the target object and the robot arm, control the travelling speed of the robot arm, and create a boundary line at which the robot arm adjusts its gripping posture, so that the robot arm can move toward the target object with its palm facing the object, thereby effectively solving the problem of collision with the target object.
For the convenience of those skilled in the art, the technical scheme of the present application will be described in detail with reference to figs. 1a to 1c. Fig. 1a shows an application scenario of the present application in an embodiment, in which the target object 11 is a beverage bottle placed on a table, and the operable moving device 12 includes a robot arm 13 and a robot gripper 14. The robot gripper 14 is attached to the end of the robot arm 13, and its posture can be adjusted to better suit gripping the target object 11. The operable moving device 12 is also internally provided with a depth camera (not shown) for acquiring three-dimensional data, with depth information, of the scene.
In some embodiments of the present application, as shown in fig. 1b, a first anti-collision safety zone 15 is established centered on the geometric center of the target object 11, and a second anti-collision safety zone 16 is established centered on the palm of the robot gripper 14. As can be seen from figs. 1a and 1b, the operable moving device 12 can move rapidly in a direction approaching the target object 11 while the first and second anti-collision safety zones 15, 16 do not overlap. When the first and second anti-collision safety zones 15, 16 become tangent, the operable moving device 12 reduces its rate of movement and/or increases the sampling frequency of the depth camera.
In some embodiments of the present application, as shown in fig. 1c, a boundary line 17 is established around the target object 11. As can be seen from figs. 1a to 1c, when the operable moving device 12 continues to move toward the target object 11 and reaches the boundary line 17, the robot gripper 14 is adjusted to a posture in which its palm faces the target object 11 and travels slowly toward the target object 11, so that the target object 11 can be grasped smoothly without collision.
The application scenario of the present application in an embodiment is described in detail above. Hereinafter, the technical scheme of the present application will be further explained in connection with a depth vision-based anti-collision control method.
As shown in fig. 2, a flow chart of an anti-collision control method based on depth vision in an embodiment of the application is shown.
In some embodiments, the anti-collision control method may be applied to a controller, for example an ARM controller, FPGA controller, SoC controller, DSP controller, or MCU controller. In some embodiments, the anti-collision control method may also be applied to a computer including components such as memory, a memory controller, one or more processing units (CPUs), peripheral interfaces, RF circuitry, audio circuitry, speakers, microphones, input/output (I/O) subsystems, display screens, other output or control devices, and external ports; such computers include, but are not limited to, personal computers such as desktop computers, notebook computers, tablet computers, smart phones, smart televisions, personal digital assistants (Personal Digital Assistant, PDA for short), and the like. In other embodiments, the anti-collision control method may be applied to servers, which may be disposed on one or more physical servers according to factors such as function and load, or may be formed by a distributed or centralized server cluster.
In this embodiment, the depth vision based anti-collision control method includes step S21, step S22, and step S23.
In step S21, three-dimensional data of the target object and the operation section of the operable mobile apparatus are acquired.
In some embodiments of the present application, three-dimensional data of a target object and an operation portion of an operable mobile device are acquired using a depth camera. The depth camera can acquire depth information of the photographing object, that is, three-dimensional position and size information, in addition to a planar image of the photographing object, thereby acquiring three-dimensional data of the target object and the surrounding environment of the operable mobile device.
Optionally, the depth camera performs object detection based on point features to acquire three-dimensional data of the target object and the operation part of the operable mobile device. Point features are widely used for image registration and matching, object description and recognition, bundle adjustment, moving-object tracking and recognition, 3D modeling from stereoscopic images, and the like.
When object detection is performed based on point features, hundreds of point features can typically be detected in an image. Because point features are local features, point-feature-based object detection is robust; it is also highly discriminative, making it easy to distinguish points on different objects.
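As a concrete illustration of how a depth image yields such three-dimensional data, the sketch below back-projects every pixel through an ideal pinhole camera model. The function name and the intrinsic parameters (fx, fy, cx, cy) are assumptions for illustration only and are not taken from the patent.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to an N x 3 point cloud.

    Assumes an ideal pinhole model; fx, fy, cx, cy are hypothetical
    camera intrinsics, not values specified in the application.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```

In practice the point features detected in the color image would be used to segment which of these 3D points belong to the target object and which to the gripper.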
In step S22, a first collision avoidance area of the target object and a second collision avoidance area of the operating section of the operable mobile device are determined from the three-dimensional data.
In some embodiments of the application, the first collision avoidance zone is defined centered on the geometric center of the target object. The first anti-collision safety region may be a region having a shape similar to the geometric shape of the target object, or may be a circular region, a triangular region, a square region, or an irregularly shaped region, which is not limited in the present application.
In some embodiments of the application, the second anti-collision safety zone is defined centered on the palm of the hand of the manipulator. The second anti-collision safety area may be an area with a shape similar to the geometric shape of the manipulator, or may be a circular area, a triangular area, a square area, or an irregularly shaped area, which is not limited in the present application.
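The zone construction described above can be sketched as follows, here modeling each anti-collision safety area as a sphere around the geometric center of a point cloud. The spherical shape and the margin value are illustrative assumptions; as noted, the application allows circular, triangular, square, or irregularly shaped areas as well.

```python
import numpy as np

def safety_zone(points, margin=0.05):
    """Return (center, radius) of a spherical anti-collision safety zone.

    The zone is centered on the geometric center of the point cloud and
    padded by `margin` (meters). Both the spherical shape and the margin
    value are illustrative, not values from the application.
    """
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).max() + margin
    return center, radius
```

The same helper would be applied twice: once to the target object's points (first zone) and once to the gripper's points around the palm (second zone).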
In step S23, a boundary line surrounding the target object is established in an overlapping region of the first anti-collision safety region and the second anti-collision safety region, so that the operating portion of the operable moving device is adjusted to a posture facing the target object when the operable moving device moves to the boundary line in a direction approaching the target object. The overlapping area in the application comprises the condition that the first anti-collision safety area and the second anti-collision safety area are intersected, and also comprises the condition that the first anti-collision safety area and the second anti-collision safety area are tangent.
It should be noted that as the operable moving device moves in the direction approaching the target object, it first comes near the target object without touching it. Before the operable moving device touches the target object, the posture of the manipulator is therefore adjusted in time to face the target object, ensuring that the manipulator contacts the target object in the correct posture and thus smoothly executes operation tasks such as grabbing, pressing, or carrying.
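A minimal sketch of the boundary-line decision in step S23 might look like the following, assuming for illustration a spherical boundary of radius `boundary_radius` around the target; all names are hypothetical.

```python
import numpy as np

def approach_command(palm_pos, target_center, boundary_radius):
    """Decide gripper behavior near the boundary line around the target.

    Once the palm crosses the (hypothetical, spherical) boundary line,
    the commanded palm normal is the unit vector from palm to target, so
    the palm faces the object before contact. The spherical boundary is
    an illustrative choice; the application only requires a line that
    encloses the target inside the zones' overlap.
    """
    offset = np.asarray(target_center, float) - np.asarray(palm_pos, float)
    dist = np.linalg.norm(offset)
    if dist > boundary_radius:
        return {"phase": "approach", "face_target": False}
    return {"phase": "grasp", "face_target": True,
            "palm_normal": offset / dist}  # point the palm at the object
```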
As shown in fig. 3, a flow chart of an anti-collision control method based on depth vision in another embodiment of the application is shown. In this embodiment, the anti-collision control method includes step S31, step S32, step S33, and step S34.
In step S31, three-dimensional data of the target object and the operation section of the operable mobile apparatus are acquired.
In step S32, a first collision avoidance area of the target object and a second collision avoidance area of the operating section of the operable mobile device are determined from the three-dimensional data.
Step S31 and step S32 in this embodiment are similar to the implementation of step S21 and step S22 in the previous embodiment, and thus will not be described again.
In step S33, when the operable moving device, moving in a direction approaching the target object, reaches the position area where the first anti-collision safety area and the second anti-collision safety area are tangent, the moving rate of the operable moving device is reduced and/or the depth vision sampling frequency is increased.
While the first and second anti-collision safety areas are still separate, the operable mobile device typically moves at a higher rate, which increases the efficiency with which it performs tasks. However, the closer the operable mobile device gets to the target object, the higher the probability of colliding with it.
Thus, the present embodiment sets a region in which the movement rate and/or the sampling frequency of the depth camera are adjusted. That is, when the operable moving device reaches the position area where the two safety areas are tangent, it may reduce its moving speed so that the mechanical arm approaches the target object more slowly, increase the sampling frequency of the depth camera to improve image recognition efficiency, or do both; the present application is not limited in this respect.
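The rate and sampling adjustment of step S33 can be sketched as below, with the two safety areas again modeled as spheres whose centers are `d_centers` apart; the speed and sampling-rate values are illustrative placeholders, not values from the application.

```python
def motion_params(d_centers, r_obj, r_grip, eps=0.01):
    """Pick travel speed and depth-camera sampling rate from zone geometry.

    The zones are modeled as spheres of radius r_obj and r_grip whose
    centers are d_centers apart (all in meters). The speeds (m/s) and
    sampling rates (Hz) are illustrative placeholders.
    """
    if d_centers > r_obj + r_grip + eps:   # zones separate: move fast
        return {"speed": 0.5, "sample_hz": 10}
    # zones tangent or overlapping: slow down and sample more often
    return {"speed": 0.1, "sample_hz": 30}
```

A controller would call this on every perception cycle and feed the result to the arm's velocity command and the camera's capture loop.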
In step S34, establishing a boundary line around the target object in an overlapping area of the first anti-collision safety area and the second anti-collision safety area, so that the operating part of the operable moving device is adjusted to a posture opposite to the target object when the operable moving device moves to the boundary line in a direction approaching the target object; wherein the overlapping region includes an area where the operable mobile device is near but not touching the target object.
The overlapping region in the present application includes a case where the first anti-collision safety region and the second anti-collision safety region intersect, and also includes a case where the first anti-collision safety region and the second anti-collision safety region are tangential.
When the boundary line is located in the intersection area of the first and second anti-collision safety areas, the operable moving device reduces its moving speed and/or increases the sampling frequency of the depth camera upon moving, in a direction approaching the target object, to the position where the two safety areas are tangent, and then adjusts the posture of the manipulator so that the palm faces the target object as it continues moving to the boundary line.
When the boundary line is located at the tangent position of the first and second anti-collision safety areas, the operable moving device, upon moving in a direction approaching the target object to the position where the two safety areas are tangent, reduces its moving speed and/or increases the sampling frequency of the depth camera and adjusts the posture of the manipulator so that the palm faces the target object.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by computer program related hardware. The aforementioned computer program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
As shown in fig. 4, a block diagram of an operable mobile device in an embodiment of the application is shown. The operable mobile device includes: a depth vision module 41 and a control module 42.
The depth vision module 41 is used to acquire three-dimensional data of the target object and the operation part of the operable mobile device. The control module 42 is configured to determine a first anti-collision safety area of the target object and a second anti-collision safety area of an operation part of the operable mobile device according to three-dimensional data of the target object and the operable mobile device, and is further configured to establish a boundary line around the target object in an overlapping area of the first anti-collision safety area and the second anti-collision safety area, so that when the operable mobile device moves to the boundary line in a direction approaching the target object, the operation part of the operable mobile device is adjusted to a posture facing the target object; wherein the overlapping region includes an area where the operable mobile device is near but not touching the target object.
It should be noted that the operable mobile device provided in this embodiment is implemented similarly to the depth vision-based anti-collision control method provided above, and is therefore not described again. It should further be understood that the division of the above apparatus into modules is merely a division of logical functions; in practice the modules may be fully or partially integrated into one physical entity, or physically separated. These modules may all be implemented as software invoked by a processing element, all be implemented in hardware, or some may be implemented as software invoked by a processing element while others are implemented in hardware. For example, the control module may be a separately arranged processing element, may be integrated in a chip of the above apparatus, or may be stored in the memory of the above apparatus in the form of program code that a processing element of the apparatus calls to execute the functions of the control module. The implementation of the other modules is similar. In addition, all or part of these modules can be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the modules above may be one or more integrated circuits configured to implement the methods above, such as: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more microprocessors (digital signal processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), or the like. For another example, when a module above is implemented in the form of a processing element scheduler code, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or other processor that may invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 5 is a schematic structural diagram of another electronic terminal according to an embodiment of the present application. The electronic terminal provided in this example includes: a processor 51, a memory 52, a transceiver 53, a communication interface 54, and a system bus 55; the memory 52 and the communication interface 54 are connected to the processor 51 and the transceiver 53 through the system bus 55 and perform communication with each other, the memory 52 is used for storing a computer program, the communication interface 54 and the transceiver 53 are used for communicating with other devices, and the processor 51 is used for running the computer program to enable the electronic terminal to execute the respective steps of the anti-collision control method based on depth vision as above.
The system bus mentioned above may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is represented in the figure by a single bold line, but this does not mean that there is only one bus or only one type of bus. The communication interface is used to enable communication between the database access apparatus and other devices (e.g., clients, read-write libraries, and read-only libraries). The memory may include random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In summary, the present application provides a depth vision-based anti-collision control method, apparatus, medium, and terminal, which can determine anti-collision safety areas for a target object and a mechanical arm, control the travelling speed of the mechanical arm, and create a boundary line at which the grasping posture of the mechanical arm is adjusted, so that the mechanical arm approaches the target object with the manipulator palm facing the target object, thereby effectively avoiding collision with the target object. The application thus effectively overcomes various defects in the prior art and has high industrial utilization value.
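As an illustrative sketch only, and not part of the patent text: if, following claims 2 and 4, the first anti-collision safety area is modeled as a sphere centered on the target object's geometric center and the second as a sphere centered on the manipulator palm, the slow-down behavior at the tangency point could look roughly as follows. All function names, radii, and adjustment factors here are hypothetical choices for illustration.

```python
import math

def spheres_tangent_or_overlapping(c1, r1, c2, r2, tol=1e-3):
    """True once the two anti-collision safety spheres touch or overlap."""
    return math.dist(c1, c2) <= r1 + r2 + tol

def control_step(palm_center, r_palm, obj_center, r_obj, speed, sample_hz,
                 slow_factor=0.5, hz_boost=2.0):
    """One control-loop step: when the first and second safety areas
    become tangent, reduce the moving speed and/or raise the
    depth-vision sampling frequency (hypothetical factors)."""
    if spheres_tangent_or_overlapping(palm_center, r_palm, obj_center, r_obj):
        speed *= slow_factor      # reduce the travelling speed of the arm
        sample_hz *= hz_boost     # sample depth frames more often
    return speed, sample_hz
```

For example, with hypothetical radii of 0.1 m each, a palm 0.15 m from the object would trigger the slow-down, while a palm 1 m away would leave speed and sampling frequency unchanged.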
The above embodiments merely illustrate the principles of the present application and its effectiveness, and are not intended to limit the application. Anyone skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and variations completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.

Claims (9)

1. A depth vision-based anti-collision control method, comprising:
acquiring three-dimensional data of a target object and an operation part of an operable mobile device;
determining a first anti-collision safety area of the target object and a second anti-collision safety area of an operation part of the operable mobile device according to the three-dimensional data;
establishing a boundary line surrounding the target object in an overlapping area of the first anti-collision safety area and the second anti-collision safety area, so that when the operable mobile device moves towards the target object and reaches the boundary line, the operation part of the operable mobile device is adjusted to a posture facing the target object; wherein the overlapping area includes an area where the operable mobile device is near, but not touching, the target object;
when the operable mobile device moves towards the target object and reaches the position area where the first anti-collision safety area and the second anti-collision safety area are tangent, reducing the moving speed of the operable mobile device and/or increasing the depth vision sampling frequency, so as to grasp the target object smoothly without colliding with it.
2. The depth vision-based anti-collision control method according to claim 1, wherein the determining of the first anti-collision safety area includes: determining the first anti-collision safety area with the geometric center of the target object as its center.
3. The depth vision-based anti-collision control method according to claim 1, wherein the type of the operable mobile device includes a mobile robotic arm; the mobile robotic arm includes a manipulator for gripping the target object.
4. The depth vision-based anti-collision control method according to claim 3, wherein the determining of the second anti-collision safety area includes: determining the second anti-collision safety area with the palm of the manipulator as its center.
5. The depth vision-based anti-collision control method according to claim 3, wherein, when the mobile robotic arm moves towards the target object and reaches the boundary line, the manipulator is adjusted to a posture in which its palm faces the target object.
6. An operable mobile device, comprising:
a depth vision module for acquiring three-dimensional data of the target object and an operation part of the operable mobile device;
the control module is used for determining, according to the three-dimensional data of the target object and of the operable mobile device, a first anti-collision safety area of the target object and a second anti-collision safety area of the operation part of the operable mobile device; the control module is further used for establishing a boundary line surrounding the target object in an overlapping area of the first anti-collision safety area and the second anti-collision safety area, so that when the operable mobile device moves towards the target object and reaches the boundary line, the operation part of the operable mobile device is adjusted to a posture facing the target object, wherein the overlapping area includes an area where the operable mobile device is near, but not touching, the target object; and when the operable mobile device moves towards the target object and reaches the position area where the first anti-collision safety area and the second anti-collision safety area are tangent, the moving speed of the operable mobile device is reduced and/or the depth vision sampling frequency is increased, so as to grasp the target object smoothly without colliding with it.
7. The operable mobile device of claim 6, wherein the depth vision module comprises a depth camera; the depth camera performs object detection based on point features to acquire the three-dimensional data of the target object and of the operation part of the operable mobile device.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the depth vision-based anti-collision control method according to any one of claims 1 to 5.
9. An electronic terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so as to cause the terminal to execute the depth vision-based anti-collision control method according to any one of claims 1 to 5.
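Purely as a hypothetical illustration of the posture adjustment in claims 1 and 5, and not part of the claims themselves: the boundary line can be modeled as a sphere around the target object, and "palm facing the target object" as aligning the gripper's approach axis with the unit vector from the palm to the object's geometric center. All names and the spherical model below are assumptions made for the sketch.

```python
import math

def crossed_boundary(palm_pos, object_center, boundary_radius):
    """Has the palm reached the boundary line? The boundary line is
    modeled here as a sphere of boundary_radius around the object,
    lying inside the overlap of the two anti-collision safety areas."""
    return math.dist(palm_pos, object_center) <= boundary_radius

def palm_facing_direction(palm_pos, object_center):
    """Unit vector from the palm to the object's geometric center; the
    gripper's approach axis would be aligned with it so that the palm
    faces the target object while crossing the boundary line."""
    v = [o - p for p, o in zip(palm_pos, object_center)]
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        raise ValueError("palm coincides with the object center")
    return [x / norm for x in v]
```

In such a sketch, the controller would call `crossed_boundary` each depth-vision sample and, once it returns true, reorient the manipulator along `palm_facing_direction` before continuing the approach.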
CN201910193711.8A 2019-03-14 2019-03-14 Anti-collision control method, device, medium and terminal based on depth vision Active CN111687829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910193711.8A CN111687829B (en) 2019-03-14 2019-03-14 Anti-collision control method, device, medium and terminal based on depth vision


Publications (2)

Publication Number Publication Date
CN111687829A CN111687829A (en) 2020-09-22
CN111687829B (en) 2023-10-20

Family

ID=72474478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910193711.8A Active CN111687829B (en) 2019-03-14 2019-03-14 Anti-collision control method, device, medium and terminal based on depth vision

Country Status (1)

Country Link
CN (1) CN111687829B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117681214B (en) * 2024-02-04 2024-04-12 泓浒(苏州)半导体科技有限公司 Wafer transfer-based multi-mechanical arm collision early warning method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559711A (en) * 2013-11-05 2014-02-05 余洪山 Motion estimation method based on image features and three-dimensional information of three-dimensional visual system
CN104870147A (en) * 2012-08-31 2015-08-26 睿信科机器人有限公司 Systems and methods for safe robot operation
CN107803831A (en) * 2017-09-27 2018-03-16 杭州新松机器人自动化有限公司 A kind of AOAAE bounding volume hierarchy (BVH)s collision checking method
CN107932560A (en) * 2017-11-14 2018-04-20 上海交通大学 A kind of man-machine safety guard system and means of defence
CN108247637A (en) * 2018-01-24 2018-07-06 中南大学 A kind of industrial machine human arm vision anticollision control method
CN108858251A (en) * 2018-08-30 2018-11-23 东北大学 A kind of collision avoidance system of high-speed motion manipulator

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI615691B (en) * 2016-11-24 2018-02-21 財團法人資訊工業策進會 Anti-collision system and anti-collision method


Also Published As

Publication number Publication date
CN111687829A (en) 2020-09-22

Similar Documents

Publication Publication Date Title
CN108044627B (en) Method and device for detecting grabbing position and mechanical arm
CN107138432B (en) Method and apparatus for sorting non-rigid objects
CN109773776B (en) Gripping method, gripping system, and storage medium
US11833692B2 (en) Method and device for controlling arm of robot
CN112837371A (en) Object grabbing method and device based on 3D matching and computing equipment
CN110599544A (en) Workpiece positioning method and device based on machine vision
CN111428731A (en) Multi-class target identification and positioning method, device and equipment based on machine vision
CN115781673A (en) Part grabbing method, device, equipment and medium
CN112775967A (en) Mechanical arm grabbing method, device and equipment based on machine vision
CN111687829B (en) Anti-collision control method, device, medium and terminal based on depth vision
Prezas et al. AI-enhanced vision system for dispensing process monitoring and quality control in manufacturing of large parts
CN112464410B (en) Method and device for determining workpiece grabbing sequence, computer equipment and medium
Teke et al. Real-time and robust collaborative robot motion control with Microsoft Kinect® v2
CN109848968B (en) Movement control method, device, equipment and system of grabbing device
CN113538459B (en) Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection
Bhuyan et al. Structure‐aware multiple salient region detection and localization for autonomous robotic manipulation
CN111702761A (en) Control method and device of palletizing robot, processor and sorting system
Fontana et al. Flexible vision based control for micro-factories
US11823414B2 (en) Information processing device, information processing method, and information processing non-transitory computer readable medium
JP2018116397A (en) Image processing device, image processing system, image processing program, and image processing method
TWI770726B (en) Method and system for controlling a handling machine and non-volatile computer readable recording medium
WO2018077250A1 (en) Method and apparatus for determining motion parameters
Kang et al. Multiple concurrent operations and flexible robotic picking for manufacturing process environments
CN117226854B (en) Method and device for executing clamping task, storage medium and electronic equipment
Rautiainen Design and Implementation of a Multimodal System for Human-Robot Interactions in Bin-Picking Operations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant