CN111993448B - Robotic multi-gripper assembly and method for gripping and holding objects

Robotic multi-gripper assembly and method for gripping and holding objects

Info

Publication number
CN111993448B
Authority
CN
China
Prior art keywords
vacuum
gripper assembly
objects
sensor device
end effector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010769547.3A
Other languages
Chinese (zh)
Other versions
CN111993448A (en)
Inventor
沟口弘悟
鲁仙·出杏光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mujin Technology
Original Assignee
Mujin Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/855,751 external-priority patent/US11345029B2/en
Application filed by Mujin Technology filed Critical Mujin Technology
Priority claimed from CN202010727610.7A external-priority patent/CN112405570A/en
Publication of CN111993448A publication Critical patent/CN111993448A/en
Application granted granted Critical
Publication of CN111993448B publication Critical patent/CN111993448B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00 Gripping heads and other end effectors
    • B25J15/0052 Gripping heads and other end effectors with multiple gripper units or multiple end effectors
    • B25J15/06 Gripping heads and other end effectors with vacuum or magnetic holding means
    • B25J15/0616 Gripping heads and other end effectors with vacuum or magnetic holding means, with vacuum
    • B25J15/0625 Gripping heads and other end effectors with vacuum holding means provided with a valve
    • B25J15/0633 Air-flow-actuated valves
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Abstract

A method for operating a transport robot includes receiving image data representing a set of objects. One or more target objects in the set are identified based on the received image data. An addressable vacuum region is selected based on the identified one or more target objects. The transport robot is commanded to cause the selected addressable vacuum region to hold and transport the identified one or more target objects. The transport robot includes a multi-gripper assembly having an array of addressable vacuum regions, each addressable vacuum region configured to independently provide a vacuum. A vision sensor device may capture the image data representing the target object adjacent to or held by the multi-gripper assembly.

Description

Robotic multi-gripper assembly and method for gripping and holding objects
The present application is a divisional application of Chinese application CN202010727610.7, filed on July 24, 2020, and entitled "Robotic multi-gripper assembly and method for gripping and holding objects".
Cross Reference to Related Applications
This application claims the benefit of U.S. provisional patent application No. 62/889,562, filed on 8/21/2019, which is incorporated herein by reference in its entirety.
Technical Field
The present technology is directed, in general, to robotic systems and, more specifically, to robotic multi-gripper assemblies configured to selectively grip and hold objects.
Background
Robots (e.g., machines configured to automatically/autonomously perform physical actions) are now widely used in many fields. For example, robots may be used to perform various tasks (e.g., manipulating or transferring objects) in connection with manufacturing, packaging, transporting, and/or shipping, among other things. In performing tasks, the robot may replicate human actions, thereby replacing or reducing the human involvement otherwise required to perform dangerous or repetitive tasks. However, robots often lack the sophistication necessary to replicate the human sensitivity and/or adaptability needed to perform more complex tasks. For example, it is often difficult for a robot to selectively grip one or more objects from a group of closely spaced objects, irregularly shaped/sized objects, and the like. Accordingly, there remains a need for improved robotic systems and techniques for controlling and managing various aspects of a robot.
Drawings
FIG. 1 is an illustration of an exemplary environment in which a robotic system transports objects in accordance with one or more embodiments of the present technique.
FIG. 2 is a block diagram illustrating a robotic system in accordance with one or more embodiments of the present technique.
Fig. 3 illustrates a multi-component transfer assembly in accordance with one or more embodiments of the present technique.
Fig. 4 is a front view of an end effector coupled to a robotic arm of a transport robot in accordance with one or more embodiments of the present technique.
Fig. 5 is a bottom view of the end effector of fig. 4.
Fig. 6 is a functional block diagram of a robotic transfer assembly in accordance with one or more embodiments of the present technique.
Fig. 7 is a front isometric top view of an end effector with a multi-gripper assembly in accordance with one or more embodiments of the present technique.
Fig. 8 is a front isometric bottom view of the end effector of fig. 7.
Fig. 9 is an exploded isometric front view of components of a vacuum gripper assembly in accordance with one or more embodiments of the present technique.
Fig. 10 is an isometric view of an assembly of vacuum grippers in accordance with one or more embodiments of the present technology.
FIG. 11 is a top plan view of the assembly of FIG. 10.
fig. 12 is an isometric view of an assembly of vacuum grippers in accordance with one or more embodiments of the present technique.
Fig. 13 is an isometric view of a multi-gripper assembly in accordance with another embodiment of the present technique.
Fig. 14 is an exploded isometric view of the multi-gripper assembly of fig. 13.
FIG. 15 is a partial cross-sectional view of a portion of a multi-gripper assembly in accordance with one or more embodiments of the present technique.
Fig. 16 is a flow diagram for operating a robotic system, in accordance with some embodiments of the present technique.
FIG. 17 is another flow diagram for operating a robotic system in accordance with one or more embodiments of the present technique.
Figs. 18-21 illustrate various stages of robotically gripping and transporting an object in accordance with one or more embodiments of the present technique.
Detailed Description
Systems and methods for gripping a selected object are described herein. The system may include a transport robot having a multi-gripper assembly configured to operate independently or in conjunction to grip/release a single object or multiple objects. For example, the system may pick up multiple objects simultaneously or sequentially. The system may select the objects to be carried based on, for example, the carrying capacity of the multi-gripper assembly, the transport plan, or a combination thereof. The multi-gripper assembly can reliably grip objects within a group of objects, irregular objects, objects having various shapes/sizes, and the like. For example, the multi-gripper assembly may include addressable vacuum regions or rows, each configured to draw in air such that a selected object is held via vacuum gripping force alone. The multi-gripper assembly may be robotically moved to transport the held object to a desired location, where the object may then be released. The system may also release the gripped objects simultaneously or sequentially. This process may be repeated to transport any number of objects between different locations.
At least some embodiments are directed to a method for operating a transport robot having a multi-gripper assembly with addressable pick-up areas. Each pick-up area may be configured to independently provide vacuum gripping. One or more target objects are identified based on the captured image data. The pick-up areas may draw in air to grip the one or more identified target objects. In some embodiments, the transport robot is configured to robotically move the multi-gripper assembly while the multi-gripper assembly carries the identified target objects.
In some embodiments, a robotic transport system includes a robotic device, a target object detector, and a vacuum gripper apparatus. The vacuum gripper apparatus includes a plurality of addressable regions and a manifold assembly. The manifold assembly can be fluidly coupled to each of the addressable regions and the at least one vacuum line such that each addressable region is capable of independently providing a negative pressure via the array of suction elements. The negative pressure may be sufficient to hold the at least one target object on the vacuum gripper arrangement when the robotic device moves the vacuum gripper arrangement between the different positions.
A method for operating a transport robot includes receiving image data representing a set of objects (e.g., a pile or batch of objects). One or more target objects in the set are identified based on the received image data. An addressable vacuum region is selected based on the identified one or more target objects. The transport robot is commanded to cause the selected vacuum region to hold and transport the identified one or more target objects. The transport robot includes a multi-gripper assembly having an array of vacuum regions, each vacuum region configured to independently provide vacuum gripping. The vision sensor device may capture image data representing a target object adjacent to or held by the vacuum gripper device.
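As a non-authoritative illustration of the region-selection step described above, the short Python sketch below assumes that identified target objects and the gripper's addressable vacuum regions are both approximated as axis-aligned rectangles in the gripper frame; the function names and rectangle convention are assumptions made for this example only, not part of this disclosure.

    def overlaps(a, b):
        # Axis-aligned rectangles as (x_min, y_min, x_max, y_max), in metres.
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def select_vacuum_regions(target_footprints, region_footprints):
        # Return the indices of addressable vacuum regions that overlap any
        # identified target object; only these regions would be commanded to
        # draw in air when the gripper is placed over the objects.
        return [i for i, region in enumerate(region_footprints)
                if any(overlaps(region, target) for target in target_footprints)]

    # Example: three side-by-side regions and one small target under the middle one.
    regions = [(0.0, 0.0, 0.2, 0.4), (0.2, 0.0, 0.4, 0.4), (0.4, 0.0, 0.6, 0.4)]
    targets = [(0.25, 0.05, 0.35, 0.30)]
    print(select_vacuum_regions(targets, regions))   # -> [1]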
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the presently disclosed technology. In other embodiments, the techniques described herein may be practiced without these specific details. In other instances, well-known features, such as specific functions or routines, are not described in detail to avoid unnecessarily obscuring the present disclosure. Reference in the specification to "one embodiment," "an embodiment," or the like, means that a particular feature, structure, material, or characteristic described is included in at least one embodiment of the disclosure. Thus, appearances of such phrases in this specification do not necessarily all refer to the same embodiment. On the other hand, such references are not necessarily mutually exclusive. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. It is to be understood that the various embodiments shown in the figures are merely illustrative representations and are not necessarily drawn to scale.
For the sake of clarity, several details describing structures or processes that are well known and commonly associated with robotic systems and subsystems but which may unnecessarily obscure some important aspects of the disclosed technology are not set forth in the following description. Furthermore, while the following disclosure sets forth several embodiments of different aspects of the technology, several other embodiments may have different configurations or components than those described in this section. Accordingly, the disclosed technology may have other embodiments with or without several elements described below.
Many embodiments or aspects of the disclosure described below may take the form of computer or controller executable instructions, including routines executed by a programmable computer or controller. One skilled in the relevant art will appreciate that the disclosed techniques can be practiced on computers or controller systems other than those shown and described below. The techniques described herein may be embodied in a special purpose computer or data processor that is specifically programmed, configured, or structured to execute one or more of the computer-executable instructions described below. Thus, the terms "computer" and "controller" as generally used herein refer to any data processor and may include internet appliances and handheld devices (including palm-top computers, wearable computers, cellular or mobile phones, multi-processor systems, processor-based or programmable consumer electronics, network computers, minicomputers, and the like). The information handled by these computers and controllers may be presented on any suitable display medium, including a Liquid Crystal Display (LCD). Instructions for performing computer or controller-executable tasks may be stored in or on any suitable computer-readable medium including hardware, firmware, or a combination of hardware and firmware. The instructions may be embodied in any suitable memory device, including, for example, a flash drive, a USB device, and/or other suitable media, including tangible, non-transitory computer-readable media.
The terms "coupled" and "connected," along with their derivatives, may be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct contact with each other. Unless otherwise apparent from the context, the term "coupled" may be used to indicate that two or more elements are in direct or indirect contact with each other (with other intervening elements between them), or that two or more elements cooperate or interact with each other (e.g., interact in a causal relationship, such as for signal transmission/reception or for function calls), or both.
Applicable environment
Fig. 1 is an illustration of an exemplary environment in which a robotic system 100 transports objects. The robotic system 100 may include an unloading unit 102, a transfer unit or assembly 104 ("transfer assembly 104"), a transport unit 106, a loading unit 108, or a combination thereof, in a warehouse or distribution/shipping center. Each of the units of the robotic system 100 may be configured to perform one or more tasks. The tasks may be combined in sequence to perform an operation that achieves the goal, such as unloading objects from a truck or van to be stored in a warehouse, or unloading objects from a storage location and loading them onto a truck or van for shipment. In another example, the task may include moving an object from one container to another container. Each of the units may be configured to perform a series of actions (e.g., operate one or more components thereof) to perform a task.
In some embodiments, the task may include manipulation (e.g., movement and/or reorientation) of the target object or package 112 (e.g., box, case, cage, tray, etc.) from a start position 114 to a task position 116. For example, the unloading unit 102 (e.g., an unpacking robot) may be configured to transfer the targeted package 112 from a location in a vehicle (e.g., a truck) to a location on a conveyor belt. The transfer assembly 104 (e.g., a palletizing robot assembly) may be configured to load the packages 112 onto the transport unit 106 or the conveyor 120. In another example, the transfer assembly 104 can be configured to transfer one or more targeted packages 112 from one container to another. The transfer assembly 104 may include a robotic end effector 140 ("end effector 140") with vacuum grippers (or vacuum zones), each operating individually to pick up and carry one or more objects 112. When the end effector 140 is placed in proximity to an object, air may be drawn into one or more grippers adjacent to the target package 112, thereby creating a pressure differential sufficient to hold the target object. The target object can be picked up and transported without damaging or destroying the surface of the object. The number of packages 112 carried at one time may be selected based on the stacking arrangement of objects at the pick-up location, the available space at the drop-off location, the transport path between the pick-up location and the drop-off location, optimization routines (e.g., routines for optimizing unit usage, robot usage, etc.), combinations thereof, and the like. The end effector 140 may have one or more sensors configured to output readings indicative of information about the held objects (e.g., the number and configuration of held objects), the relative position between any held objects, and the like.
The imaging system 160 may provide image data for monitoring component operation, identifying a target object, tracking an object, or otherwise performing a task. The image data may be analyzed to evaluate, for example, a package stacking arrangement (e.g., stacked packages such as cartons, packing containers, etc.), location information of the object, available transport paths (e.g., transport paths between a pick-up area and a drop-off area), location information about the gripping assembly, or a combination thereof. The controller 109 may be in communication with the imaging system 160 and other components of the robotic system 100. The controller 109 may generate a transportation plan that includes a sequence for picking and dropping objects (e.g., shown as stable containers), positioning information, sequential information for picking objects, sequential information for dropping objects, a stacking plan (e.g., a plan for stacking objects at a drop zone), a restacking plan (e.g., a plan for restacking at least some containers at a pick zone), or a combination thereof. The information and instructions provided by the transportation plan may be selected based on the arrangement of the containers, the contents of the containers, or a combination thereof. In some embodiments, the controller 109 may include electronic/electrical devices, such as one or more processing units, processors, storage devices (e.g., external or internal storage devices, memory, etc.), communication devices (e.g., communication devices for wireless or wired connections), and input-output devices (e.g., screens, touch screen displays, keyboards, keypads, etc.). Exemplary electrical/electronic devices and controller components are discussed in connection with fig. 2 and 6.
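One plausible way to derive the pick-sequence portion of such a transportation plan is sketched below; the ordering rule (topmost objects first, ties broken by distance to the drop-off location) and all numeric values are assumptions chosen for illustration, not the controller 109's actual planning logic.

    import math

    objects = [
        {"id": "A", "x": 0.2, "y": 0.5, "top_z": 0.90},
        {"id": "B", "x": 1.1, "y": 0.4, "top_z": 1.20},
        {"id": "C", "x": 0.7, "y": 0.9, "top_z": 1.20},
    ]
    drop_off = (2.0, 0.0)

    def pick_order(objs, drop_xy):
        # Pick taller (topmost) objects first to keep the remaining stack stable,
        # breaking ties by proximity to the drop-off location.
        def key(o):
            dist = math.hypot(o["x"] - drop_xy[0], o["y"] - drop_xy[1])
            return (-o["top_z"], dist)
        return [o["id"] for o in sorted(objs, key=key)]

    print(pick_order(objects, drop_off))   # -> ['B', 'C', 'A']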
The transport unit 106 may transfer the target package 112 (or multiple target packages 112) from an area associated with the transfer assembly 104 to an area associated with the loading unit 108, and the loading unit 108 may transfer the target package 112 (by, for example, moving a tray carrying the target package 112) to a storage location. In some embodiments, the controller 109 may coordinate the operation of the transfer assembly 104 and the transport unit 106 to efficiently load objects onto the storage racks.
The robotic system 100 may include other units not shown in fig. 1, such as manipulators, service robots, modular robots, and the like. For example, in some embodiments, the robotic system 100 may include: an unstacking unit for transferring objects from a cage car or pallet onto a conveyor or other pallet; a container switching unit for transferring an object from one container to another container; a packaging unit for packaging an object; a sorting unit for grouping the objects according to one or more characteristics of the objects; a picking unit for manipulating (e.g., sorting, grouping, and/or transferring) objects differently depending on one or more characteristics of the objects; or a combination thereof. The components and subsystems of the system 100 may include different types of end effectors. For example, the unload unit 102, transport unit 106, load unit 108, and other components of the robotic system 100 may also include a robotic multi-gripper assembly. The configuration of the robotic gripper assembly may be selected based on the desired carrying capacity. For illustrative purposes, the robotic system 100 is described in the context of a shipping center; however, it should be understood that the robotic system 100 may be configured to perform tasks in other environments/for other purposes (such as for manufacturing, assembly, packaging, healthcare, and/or other types of automation). Details regarding the tasks and associated actions are described below.
Robot system
Fig. 2 is a block diagram illustrating components of a robotic system 100 in accordance with one or more embodiments of the present technique. In some embodiments, for example, the robotic system 100 (e.g., at one or more of the aforementioned units or components and/or robots) may include electronic/electrical devices, such as one or more processors 202, one or more storage devices 204, one or more communication devices 206, one or more input-output devices 208, one or more actuation devices 212, one or more transport motors 214, one or more sensors 216, or a combination thereof. The various devices may be coupled to each other via wired and/or wireless connections. For example, the robotic system 100 may include a bus, such as a system bus, a Peripheral Component Interconnect (PCI) bus, or a PCI-Express bus, a HyperTransport or Industry Standard Architecture (ISA) bus, a Small Computer System Interface (SCSI) bus, a Universal Serial Bus (USB), an IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as a "firewire"). Also, for example, the robotic system 100 may include a bridge, adapter, controller, or other signal-related device for providing a wired connection between devices. The wireless connection may be based on, for example, a cellular communication protocol (e.g., 3G, 4G, LTE, 5G, etc.), a wireless Local Area Network (LAN) protocol (e.g., wireless fidelity (WIFI)), a peer-to-peer or device-to-device communication protocol (e.g., bluetooth, Near Field Communication (NFC), etc.), an internet of things (IoT) protocol (e.g., NB-IoT, Zigbee, Z-wave, LTE-M, etc.), and/or other wireless communication protocols.
The processor 202 may include a data processor (e.g., a Central Processing Unit (CPU), a special purpose computer, and/or an onboard server) configured to execute instructions (e.g., software instructions) stored on a storage device 204 (e.g., computer memory). The processor 202 may implement program instructions to control/interface with other devices, thereby causing the robotic system 100 to perform actions, tasks, and/or operations.
The storage 204 may include a non-transitory computer-readable medium having stored thereon program instructions (e.g., software). Some examples of storage 204 may include volatile memory (e.g., cache memory and/or Random Access Memory (RAM)) and/or non-volatile memory (e.g., flash memory and/or a disk drive). Other examples of storage 204 may include a portable memory drive and/or a cloud storage.
In some embodiments, the storage 204 may be used to further store and provide access to master data, processing results, and/or predetermined data/thresholds. For example, the storage device 204 may store master data that includes a description of objects (e.g., boxes, cases, containers, and/or products) that may be manipulated by the robotic system 100. In one or more embodiments, the master data may include the size, shape (e.g., templates for potential poses and/or computer-generated models for recognizing objects in different poses), quality/weight information, color schemes, images, identification information (e.g., bar codes, Quick Response (QR) codes, logos, etc., and/or their expected locations), expected quality or weight, or a combination thereof, of the objects expected to be manipulated by the robotic system 100. In some embodiments, the master data may include maneuver related information about the objects, such as a centroid location of each of the objects, expected sensor measurements (e.g., force, torque, pressure, and/or contact measurements) corresponding to one or more actions/maneuvers, or a combination thereof. The robotic system may look for pressure levels (e.g., vacuum levels, suction levels, etc.), gripping/picking areas (e.g., vacuum gripper areas or rows to activate), and other stored master data for controlling the transfer robot. The storage device 204 may also store object tracking data. In some embodiments, the object tracking data may include a log of the objects being scanned or manipulated. In some embodiments, the object tracking data may include image data (e.g., pictures, point clouds, real-time video feeds, etc.) of the object at one or more locations (e.g., designated pick or drop locations and/or conveyor belts). In some embodiments, the object tracking data may include a position and/or orientation of the object at one or more locations.
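For illustration only, a master-data record of the kind described above might be represented as follows; the field names, types, and values are assumptions, not the actual schema stored in the storage device 204.

    from dataclasses import dataclass, field

    @dataclass
    class MasterDataRecord:
        object_id: str                    # e.g., a barcode or QR-code value
        dimensions_m: tuple               # (length, width, height)
        expected_weight_kg: float
        center_of_mass_m: tuple           # CoM relative to the object origin
        vacuum_level_pct: float           # pressure level used when gripping this object
        grip_regions: list = field(default_factory=list)   # vacuum regions/rows to activate

    record = MasterDataRecord(
        object_id="BOX-12345",
        dimensions_m=(0.40, 0.30, 0.25),
        expected_weight_kg=4.2,
        center_of_mass_m=(0.20, 0.15, 0.12),
        vacuum_level_pct=50.0,
        grip_regions=[0, 1],
    )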
The communication device 206 may include circuitry configured to communicate with an external or remote device via a network. For example, the communication device 206 may include a receiver, transmitter, modulator/demodulator (modem), signal detector, signal encoder/decoder, connector port, network card, and the like. The communication device 206 may be configured to transmit, receive, and/or process electrical signals according to one or more communication protocols (e.g., Internet Protocol (IP), wireless communication protocols, etc.). In some embodiments, the robotic system 100 may use the communication device 206 to exchange information between elements of the robotic system 100 and/or with systems or devices external to the robotic system 100 (e.g., for reporting, data collection, analysis, and/or troubleshooting purposes).
The input-output devices 208 may include user interface devices configured to communicate information to and/or receive information from an operator. For example, input-output devices 208 may include a display 210 and/or other output devices (e.g., speakers, haptic circuits, or haptic feedback devices, etc.) for communicating information to an operator. Also, the input-output devices 208 may include control or receiving devices, such as a keyboard, mouse, touch screen, microphone, User Interface (UI) sensors (e.g., a camera for receiving motion commands), wearable input devices, and so forth. In some embodiments, the robotic system 100 may use the input-output devices 208 to interact with an operator when performing an action, task, operation, or a combination thereof.
In some embodiments, a controller (e.g., controller 109 of fig. 1) may include a processor 202, a storage device 204, a communication device 206, and/or an input-output device 208. The controller may be a separate component or part of a unit/assembly. For example, each of the unloading units, transfer assemblies, transport units, and loading units of the system 100 may include one or more controllers. In some embodiments, a single controller may control multiple units or independent components.
The robotic system 100 may include physical or structural members (e.g., manipulator robotic arms) connected at joints for movement (e.g., rotational and/or translational displacement). The structural members and joints may form a kinematic chain configured to manipulate an end effector (e.g., gripper) configured to perform one or more tasks (e.g., gripping, spinning, welding, etc.) in accordance with the use/operation of the robotic system 100. The robotic system 100 may include an actuation device 212 (e.g., a motor, an actuator, a wire, an artificial muscle, an electroactive polymer, etc.) configured to drive or manipulate (e.g., displace and/or reorient) the structural member around or at the corresponding joint. In some embodiments, the robotic system 100 may include a transport motor 214 configured to transport a corresponding unit/chassis from one place to another. For example, the actuation devices 212 and transport motors 214 may be connected to, or be part of, a robotic arm, a linear slide, or other robotic components.
The sensors 216 may be configured to obtain information for performing tasks, such as information for manipulating structural members and/or for transporting the robotic unit. The sensors 216 may include devices configured to detect or measure one or more physical properties of the robotic system 100 (e.g., the state, condition, and/or location of one or more structural members/joints thereof) and/or one or more physical properties of the surrounding environment. Some examples of sensors 216 may include contact sensors, proximity sensors, accelerometers, gyroscopes, force sensors, strain gauges, torque sensors, position encoders, pressure sensors, vacuum sensors, and the like.
In some embodiments, for example, the sensor 216 may include one or more imaging devices 222 (e.g., 2-and/or 3-dimensional imaging devices) configured to detect the surrounding environment. The imaging device may include a camera (including a visual and/or infrared camera), a lidar device, a radar device, and/or other ranging or detection devices. The imaging device 222 may generate a representation, such as a digital image and/or a point cloud, of the detected environment for implementing machine/computer vision (e.g., for automated inspection, robotic guidance, or other robotic applications).
Referring now to fig. 1 and 2, the robotic system 100 (e.g., via the processor 202) may process the image data and/or the point cloud to identify the target package 112 of fig. 1, the start location 114 of fig. 1, the task location 116 of fig. 1, the pose of the target package 112 of fig. 1, or a combination thereof. The robotic system 100 may use the image data to determine how to access and pick up the object. The image of the object may be analyzed to determine a pick-up plan for positioning the vacuum gripper assembly to grip the target object, even though adjacent objects may be in close proximity to the gripper assembly. The imaging output from the on-board sensor 216 (e.g., a lidar device) and the image data from a remote device (e.g., the imaging system 160 of fig. 1) may be utilized separately or in combination. The robotic system 100 (e.g., via the various units) may capture and analyze images of a designated area (e.g., the interior of a truck, the interior of a container, or the pick-up location of an object on a conveyor belt) to identify the target package 112 and its starting location 114. Similarly, the robotic system 100 may capture and analyze an image of another designated area (e.g., a drop location for placing objects on a conveyor belt, a location for placing objects inside a container, or a location on a pallet for stacking purposes) to identify the task location 116.
Also, for example, the sensors 216 of fig. 2 may include the position sensors 224 of fig. 2 (e.g., position encoders, potentiometers, etc.) configured to detect the position of structural members (e.g., robotic arms and/or end effectors) and/or corresponding joints of the robotic system 100. The robotic system 100 may use the position sensors 224 to track the position and/or orientation of the structural members and/or joints during task performance. The unloading units, transfer units, transport units/assemblies, and loading units disclosed herein may include sensors 216.
In some embodiments, the sensors 216 may include contact sensors 226 (e.g., force sensors, strain gauges, piezoresistive/piezoelectric sensors, capacitive sensors, elastic resistance sensors, and/or other tactile sensors) configured to measure characteristics associated with direct contact between multiple physical structures or surfaces. Contact sensor 226 may measure a characteristic corresponding to the gripping of target package 112 by an end effector (e.g., gripper). Accordingly, contact sensor 226 may output contact measurements representing quantitative measurements (e.g., measured force, torque, position, etc.) corresponding to physical contact, the degree of contact or attachment between the gripper and target package 112, or other contact characteristics. For example, the contact measurements may include one or more force, pressure, or torque readings associated with the force associated with gripping the target package 112 by the end effector. In some embodiments, the contact measurements may include (1) pressure readings associated with vacuum clamping and (2) force readings (e.g., torque readings) associated with the payload. Details regarding the contact measurement results are described below.
As described in further detail below, the robotic system 100 (e.g., via the processor 202) may implement different actions to complete a task based on contact measurements, image data, a combination thereof, and so forth. For example, if the initial contact measurement is below a threshold, such as the vacuum gripping force is low (e.g., the suction level is below the vacuum threshold), or a combination thereof, the robotic system 100 may re-grip the target package 112. Moreover, robotic system 100 may intentionally deliver targeted parcel 112, adjust task position 116, adjust speed or acceleration of an action, or a combination thereof, based on one or more transport rules (e.g., if contact metric or suction level falls below a threshold during task performance), as well as contact measurements, image data, and/or other readings or data.
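The decision logic implied by this paragraph could look roughly like the following sketch; the threshold values and the measured/commanded vacuum ratio are assumptions chosen only to make the example concrete.

    VACUUM_THRESHOLD = 0.60        # minimum acceptable ratio at initial grip (assumed)
    IN_TRANSIT_THRESHOLD = 0.45    # minimum acceptable ratio during transfer (assumed)

    def grip_action(measured_vacuum, commanded_vacuum, in_transit):
        # Compare the measured suction level against the commanded level and
        # apply the re-grip / transport rules described above.
        ratio = measured_vacuum / commanded_vacuum
        if not in_transit and ratio < VACUUM_THRESHOLD:
            return "re-grip"                  # initial contact measurement too low
        if in_transit and ratio < IN_TRANSIT_THRESHOLD:
            return "slow-down-or-release"     # apply transport rules mid-transfer
        return "ok"

    print(grip_action(0.30, 1.00, in_transit=False))   # -> re-grip
    print(grip_action(0.50, 1.00, in_transit=True))    # -> ok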
Robot transfer assembly
FIG. 3 illustrates the transfer assembly 104 in accordance with one or more embodiments of the present technique. The transfer assembly 104 may include an imaging system 160 and a robotic arm system 132. The imaging system 160 may provide image data captured from the target environment at the destacking platform 110. The robotic arm system 132 may include a robotic arm assembly 139 and an end effector 140 that includes a vision sensor device 143 and a multi-gripper assembly 141 ("gripper assembly 141"). The robotic arm assembly 139 may position the end effector 140 over a group of objects in the stack 165 located at the picking environment 163. The vision sensor device 143 can detect nearby objects without contacting, moving, or removing objects in the stack 165.
The target object may be fixed on the bottom of the end effector 140. In some embodiments, gripper assembly 141 may have addressable areas, each of which is selectively capable of drawing air to provide a vacuum gripping force. In some modes of operation, air is drawn only in the addressable area proximate to the one or more target objects to provide a pressure differential directly between the vacuum gripper apparatus and the one or more target objects. This allows only the selected parcel (i.e., the target parcel) to be pulled against or otherwise secured to the gripper assembly 141 even if other gripping portions of the gripper assembly 141 are adjacent to or in contact with other parcels.
Fig. 3 shows the gripper assembly 141 carrying a single object or package 112 ("package 112") positioned above the conveyor 120. The gripper assembly 141 may release the package 112 onto the conveyor belt 120, and the robotic arm system 132 may then pick up the packages 112a, 112b by positioning the unloaded gripper assembly 141 directly above the two packages 112a, 112b. The gripper assembly 141 may then hold both packages 112a, 112b via vacuum gripping force, and the robotic arm system 132 may carry the held packages 112a, 112b to a location directly above the conveyor 120. The gripper assembly 141 may then release (e.g., simultaneously or sequentially release) the packages 112a, 112b onto the conveyor 120. This process may be repeated any number of times to carry objects from the stack 165 to the conveyor 120.
The vision sensor device 143 may include one or more optical sensors configured to detect packages held under the gripper assembly 141. The vision sensor device 143 may be positioned on a side of the gripper assembly 141 to avoid interfering with package pick-up/drop-off. In some embodiments, the vision sensor device 143 is movably coupled to the end effector 140 or the robotic arm 139 such that the vision sensor device 143 can be moved to different sides of the gripper assembly 141, thereby avoiding impacts with objects while detecting the presence of one or more objects (if any) held by the gripper assembly 141. The location, number, and configuration of the vision sensor devices 143 can be selected based on the configuration of the gripper assembly 141.
With continued reference to fig. 3, depalletizing platform 110 may include any platform, surface, and/or structure upon which a plurality of objects or packages 112 (referred to simply as "packages 112") may be stacked and prepared for transport. The imaging system 160 may include one or more imaging devices 161 configured to capture image data of the packages 112 on the destacking platform 110. The imaging device 161 may capture range data, position data, video, still images, lidar data, radar data, and/or motion at the pickup environment or region 163. It should be noted that although the terms "object" and "package" are used herein, the terms include any other article capable of being gripped, lifted, transported, and delivered, such as, but not limited to, "box," "carton," or any combination thereof. Further, although a polygonal box (e.g., a rectangular box) is shown in the figures disclosed herein, the shape of the box is not limited to such a shape, but includes any regular or irregular shape that can be gripped, lifted, transported, and delivered as discussed in detail below.
Like the depalletizing platform 110, the receiving conveyor 120 may include any platform, surface, and/or structure designated to receive the packages 112 for further tasks/operations. In some embodiments, the receiving conveyor 120 may include a conveyor system for transporting the packages 112 from one location (e.g., a release point) to another location for further operations (e.g., sorting and/or storage).
Fig. 4 is a front view of an end effector 140 coupled to a robotic arm 139, in accordance with some embodiments of the present technique. Fig. 5 is a bottom view of the end effector 140 of fig. 4. The vision sensor device 143 may include one or more sensors 145 configured to detect a package, and a calibration plate 147 for calibrating the position of the gripper assembly 141, for example, relative to the vision sensor device 143. In some embodiments, the calibration plate 147 may be a marker having a pattern or design for calibrating or defining the position of the end effector 140 or gripper assembly 141 in the operating environment, the position of the robotic arm 139, or a combination thereof. The gripper assembly 141 may include addressable vacuum zones or regions 117a, 117b, 117c (collectively "vacuum regions 117") that define a gripping region 125. Unless otherwise indicated, the description of one vacuum region 117 applies to the other vacuum regions 117. In some embodiments, each vacuum region 117 may be a row of suction channels that include components connected to a vacuum source external to the end effector 140. Each vacuum region 117 may include a gripping interface 121 (one identified in fig. 4) on which an object may be held.
Referring now to fig. 4, vacuum zone 117a may draw in air to hold the package 112 and may reduce or stop drawing in air to release the package 112. The vacuum regions 117b, 117c (shown as not holding a parcel) may independently draw in air (indicated by arrows) to hold a parcel at a corresponding location 113a, 113b (shown in dashed lines in fig. 4). Referring now to fig. 5, vacuum region 117 may include a set or row of suction elements 151 (one identified in fig. 5) through which air is drawn. The suction elements 151 may be evenly/uniformly or unevenly spaced apart from one another, and may be arranged in a desired pattern (e.g., an irregular or regular pattern). The vacuum regions 117 may have the same or different number, configuration, and/or pattern of suction elements 151. In order to carry a package that matches the geometry of vacuum region 117, air may be sucked through each suction element 151 of vacuum region 117. To carry smaller parcels, air may be drawn through a subset of suction elements 151 (e.g., suction elements 151 located within the boundary or perimeter of a parcel) that matches the parcel's geometry. For example, air may be drawn through a subset of suction elements for one of the vacuum regions 117 (such as suction elements 151 that are only in close proximity to or cover the target surface to be clamped). As shown in fig. 5, for example, the suction elements 151 within the boundary 119 (shown in phantom) may be used to grip a corresponding rounded surface of a parcel.
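A minimal sketch of selecting such a subset of suction elements is shown below; the element pitch, grid size, and package footprint are assumed values used only to illustrate the idea of drawing air through the elements that lie within the package boundary.

    def elements_inside(element_centers, boundary):
        # boundary = (x_min, y_min, x_max, y_max) of the package footprint, metres.
        x0, y0, x1, y1 = boundary
        return [i for i, (x, y) in enumerate(element_centers)
                if x0 <= x <= x1 and y0 <= y <= y1]

    # A 4 x 3 grid of suction elements on a 50 mm pitch (illustrative only).
    pitch = 0.05
    centers = [(c * pitch, r * pitch) for r in range(3) for c in range(4)]
    package_boundary = (0.00, 0.00, 0.12, 0.06)   # a small package under one corner

    print(elements_inside(centers, package_boundary))   # -> [0, 1, 2, 4, 5, 6]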
When all vacuum regions 117 are active, end effector 140 may provide a substantially uniform gripping force along each of the gripping interfaces 121 or the entire bottom surface 223. In some embodiments, the bottom surface 223 is a substantially continuous and substantially uninterrupted surface, and the distance or spacing between suction elements 151 of adjacent vacuum regions 117 may be less than, equal to, or greater than (e.g., 2 times, 3 times, 4 times, etc.) the spacing between suction elements 151 of the same vacuum region 117. End effector 140 may be configured to hold or secure one or more objects via an attractive force, such as by creating and maintaining a vacuum between vacuum region 117 and the object. For example, end effector 140 may include one or more vacuum regions 117 configured to contact a surface of a target object and create/maintain a vacuum state in a space between vacuum regions 117 and the surface. A vacuum state may be created when end effector 140 is lowered via robotic arm 139, thereby pressing vacuum region 117 against a surface of a target object and squeezing or otherwise removing gas between the opposing surfaces. As the robotic arm 139 lifts the end effector 140, the pressure differential between the space inside the vacuum region 117 and the ambient environment may keep the target object attached to the vacuum region 117. In some embodiments, the airflow rate through vacuum region 117 of end effector 140 may be dynamically adjusted, for example based on the contact area between the target object and the contact or gripping surface of vacuum region 117, to ensure that a gripping force sufficient to securely grip the target object is achieved. Similarly, the airflow rate through vacuum region 117 may be dynamically adjusted to accommodate the weight of the target object, such as increasing the airflow for heavier objects to ensure that a gripping force sufficient to securely grip the target object is achieved. An exemplary suction element is discussed in connection with fig. 15.
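As a rough sketch of the weight- and contact-area-based adjustment described above (all numbers are assumptions, not values from this disclosure), the minimum pressure differential needed to support a payload scales with its weight and inversely with the sealed contact area:

    G = 9.81  # gravitational acceleration, m/s^2

    def required_pressure_differential(mass_kg, contact_area_m2, safety_factor=2.0):
        # Minimum pressure differential (Pa) to support the payload with margin.
        return safety_factor * mass_kg * G / contact_area_m2

    # A 5 kg package sealed over 40 cm^2 (0.004 m^2) of suction area:
    print(required_pressure_differential(5.0, 0.004))   # -> 24525.0 Pa (~245 mbar)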
FIG. 6 is a functional block diagram of the transfer assembly 104 in accordance with one or more embodiments of the present technology. The processing unit 150 ("PU150") may control the movement and/or other actions of the robotic arm system 132. PU150 may receive image data from a sensor (e.g., sensor 161 of imaging system 160 of fig. 3), the sensor 145 of the vision sensor device 143, or other sensors or detectors capable of collecting image data (including video and still images), lidar data, radar data, or a combination thereof. In some embodiments, the image data may indicate or represent a Surface Image (SI) of the package 112.
PU150 may include any electronic data processing unit that executes software or computer instruction code, which may be stored permanently or temporarily in memory 152, a digital memory storage device, or a non-transitory computer readable medium including, but not limited to, Random Access Memory (RAM), an optical disk drive, magnetic storage, Read Only Memory (ROM), a Compact Disc (CD), solid state memory, a secure digital card, and/or a compact flash card. PU150 may be driven by execution of software or computer instruction code containing algorithms developed for the specific functionality embodied herein. In some embodiments, PU150 may be an Application Specific Integrated Circuit (ASIC) customized for the embodiments disclosed herein. In some embodiments, PU150 may include one or more of a microprocessor, a Digital Signal Processor (DSP), a Programmable Logic Device (PLD), a Programmable Gate Array (PGA), and a signal generator; however, for embodiments herein, the term "processor" is not limited to such exemplary processing units, and its meaning is not intended to be construed narrowly. For example, PU150 may also include more than one electronic data processing unit. In some embodiments, PU150 may be a processor used by or in conjunction with any other system of robotic system 100, including but not limited to robotic arm system 130, end effector 140, and/or imaging system 160. The PU150 of fig. 6 and the processor 202 of fig. 2 may be the same component or different components.
PU150 may be electronically coupled (e.g., via wires, buses, and/or wireless connections) to systems and/or sources to facilitate receipt of input data. In some embodiments, the operable coupling may be considered interchangeable with the electronic coupling. Direct connection is not needed; rather, such receipt of input data and provision of output data may be provided over a bus, over a wireless network, or as signals received and/or transmitted by PU150 via a physical or virtual computer port. PU150 may be programmed or configured to perform the methods discussed herein. In some embodiments, PU150 may be programmed or configured to receive data from various systems and/or units including, but not limited to, imaging system 160, end effector 140, and the like. In some embodiments, PU150 may be programmed or configured to provide output data to various systems and/or units.
The imaging system 160 may include one or more sensors 161 configured to capture image data representative of a package (e.g., the package 112 located on the destacking platform 110 of fig. 3). In some embodiments, the image data may represent visual designs and/or indicia appearing on one or more surfaces from which the registration status of the package may be determined. In some embodiments, the sensor 161 is a camera configured to operate within a target (e.g., visible and/or infrared) electromagnetic spectrum bandwidth and to detect light/energy within the corresponding spectrum. In some camera embodiments, the image data is a set of data points forming a point cloud, a depth map, or a combination thereof captured from one or more three-dimensional (3D) cameras and/or one or more two-dimensional (2D) cameras. From these cameras, a distance or depth between the imaging system 160 and one or more exposed (e.g., line of sight with respect to the imaging system 160) surfaces of the package 112 may be determined. In some embodiments, the distance or depth may be determined by using one or more image recognition algorithms, such as one or more contextual image classification algorithms and/or one or more edge detection algorithms. Once determined, the parcel may be manipulated via the robotic arm system using the distance/depth value. For example, the PU150 and/or robotic arm system may use the distance/depth values to calculate a location from which the package may be lifted and/or gripped. It should be noted that data described herein (such as image data) may include any analog or digital signal, discrete or continuous, that may contain information or indicate information.
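A toy numpy example of this depth-based computation is given below; the depth values are synthetic and the segmentation rule (take the topmost surface and use its centroid as a candidate grip location) is a simplification for illustration, not the image-recognition algorithms referenced above.

    import numpy as np

    depth = np.full((6, 8), 1.50)      # distance from the camera to the floor, metres
    depth[1:4, 2:6] = 1.10             # a box whose top is 0.40 m above the floor

    top = depth.min()
    mask = depth < top + 0.01          # pixels belonging to the topmost surface
    rows, cols = np.nonzero(mask)
    grasp_px = (float(rows.mean()), float(cols.mean()))   # centroid in pixel coordinates
    print(grasp_px, "at depth", float(top))               # -> (2.0, 3.5) at depth 1.1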
The imaging system 160 may include at least one display unit 164 configured to present operational information (e.g., status information, settings, etc.), images of one or more packages 112 captured by the sensors 162, or other information/output that may be viewed by one or more operators of the robotic system 100, as discussed in detail below. Additionally, the display unit 164 may be configured to present other information, such as, but not limited to, symbols representing targeted packages, non-targeted packages, registered packages, and/or unregistered package instances.
The vision sensor device 143 may communicate with PU150 via a wired and/or wireless connection. The vision sensor 145 may be a video sensor, a CCD sensor, a lidar sensor, a radar sensor, a ranging or detection device, or the like. The output from the vision sensor device 143 may be used to generate a representation, such as a digital image and/or a point cloud, of one or more packages for implementing machine/computer vision (e.g., for automated inspection, robotic guidance, or other robotic applications). The field of view (e.g., horizontal and/or vertical FOVs of 30 degrees, 90 degrees, 120 degrees, 150 degrees, 180 degrees, 210 degrees, 270 degrees) and the ranging capabilities of the vision sensor device 143 may be selected based on the configuration of the gripper assembly 141. (FIG. 4 shows an exemplary horizontal FOV of about 90 degrees.) In some embodiments, the vision sensor 145 is a lidar sensor having one or more light sources (e.g., lasers, infrared lasers, etc.) and an optical detector. The optical detector may detect light emitted by the light source and light reflected by the surface of the package. The presence and/or distance of the package may be determined based on the detected light. In some embodiments, the sensor 145 may scan an area, such as substantially all of the vacuum gripping region (e.g., the gripping region 125 of fig. 4). For example, the sensor 145 may include one or more deflectors that move to deflect the emitted light through the detection zone. In some embodiments, the sensor 145 is a scanning laser-based lidar sensor capable of vertical and/or horizontal scanning (such as 10° lidar scanning, 30° lidar scanning, 50° lidar scanning, etc.). The configuration, FOV, sensitivity, and output of the sensor 145 may be selected based on the desired detection capabilities. In some embodiments, the sensor 145 may include a presence/distance detector (e.g., a radar sensor, a lidar sensor, etc.) and one or more cameras, such as three-dimensional or two-dimensional cameras. The distance or depth between the sensor and one or more surfaces of the package may be determined using, for example, one or more image recognition algorithms. The display unit 147 may be used to view image data, view sensor status, perform calibration routines, and view logs and/or reports or other information or data, such as, but not limited to, symbols representing targeted, non-targeted, registered, and/or unregistered instances of the package 112.
To control the robotic system 100, the PU150 may use output from one or both of the sensor 145 and the sensor 161. In some embodiments, the images output from the sensor 161 are used to determine an overall transfer plan, including the order for transporting the objects. The outputs from the sensor 145, as well as from the sensor 205 (e.g., a force detector assembly), can be used to position the multi-gripper assembly relative to the object, confirm object pick-up, and monitor the transport steps.
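A simplified, assumption-laden sketch of such a pick-up confirmation check is shown below; the range and weight thresholds are illustrative, and the readings are stand-ins for the outputs of the sensor 145 and the force detector assembly 205.

    def pickup_confirmed(range_m, payload_kg, expected_kg,
                         max_range_m=0.30, weight_tolerance=0.25):
        # The downward-facing sensor should see the held package at short range,
        # and the measured payload should be close to the expected object weight.
        range_ok = range_m < max_range_m
        weight_ok = abs(payload_kg - expected_kg) <= weight_tolerance * expected_kg
        return range_ok and weight_ok

    print(pickup_confirmed(range_m=0.12, payload_kg=4.0, expected_kg=4.2))   # -> True
    print(pickup_confirmed(range_m=0.80, payload_kg=0.1, expected_kg=4.2))   # -> False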
With continued reference to fig. 6, RDS 170 may include any database and/or memory storage device (e.g., non-transitory computer-readable medium) configured to store registration records 172 for a plurality of packages 112 and vacuum gripper data 173. For example, RDS 170 may include Read Only Memory (ROM), Compact Disk (CD), solid state memory, secure digital cards, compact flash cards, and/or a data storage server or remote storage device.
In some embodiments, registration records 172 may each include physical characteristics or attributes of the corresponding parcel 112. For example, each registration record 172 may include, but is not limited to, one or more templates SI, visual data (e.g., reference radar data, reference lidar data, etc.), 2D or 3D dimensional measurements, weight, and/or center of mass (CoM) information. The template SI may represent known or previously determined visible characteristics of the package including the design, indicia, appearance, external shape/contour of the package, or a combination thereof. The 2D or 3D dimensional measurements may include the length, width, height, or a combination thereof, of a known/expected parcel.
In some embodiments, RDS 170 may be configured to receive a new instance of registration record 172 (e.g., for a previously unknown parcel and/or a previously unknown aspect of a parcel) created in accordance with embodiments disclosed below. Thus, the robotic system 100 may automate the process of registering a package 112 by expanding the number of registration records 172 stored in the RDS 170, thereby making destacking operations more efficient and fewer unregistered instances of the package 112. By dynamically (e.g., during operation/deployment) updating registration record 172 in RDS 170 using real-time/operational data, robotic system 100 may effectively implement a computer learning process that may account for previously unknown or unexpected conditions (e.g., lighting conditions, unknown orientation, and/or stack inconsistencies) and/or recently encountered packages. Thus, the robotic system 100 may reduce failures due to "unknown" conditions/packages, associated operator intervention, and/or associated task failures (e.g., package loss and/or collision).
RDS 170 may include vacuum gripper data 173 including, but not limited to, characteristics or attributes such as the number of addressable vacuum regions, the carrying capacity of the vacuum gripper device (e.g., multi-gripper assembly), vacuum protocols (e.g., vacuum level, gas flow rate, etc.), or other data for controlling robot arm system 130 and/or end effector 140. The operator may enter information about the vacuum grippers mounted in the robotic arm system 130. RDS 170 then identifies vacuum gripper data 173 corresponding to the vacuum gripper device for operation. In some embodiments, robotic arm 139 automatically detects a vacuum gripper device (e.g., gripper assembly 141 of fig. 3), and RDS 170 is used to identify information about the detected vacuum gripper device. The identified information can be used to determine the settings of the vacuum gripper apparatus. Thus, different vacuum gripper apparatuses or multi-gripper assemblies can be installed and used with the robotic arm system 130.
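Purely as an illustration of the lookup described above (the model identifier, field names, and values are assumptions), the vacuum gripper data 173 could be consulted as follows once a gripper device has been detected or entered by an operator:

    VACUUM_GRIPPER_DATA = {
        "multi-gripper-3x1": {
            "addressable_regions": 3,
            "payload_capacity_kg": 15.0,
            "vacuum_level_mbar": 500,
        },
    }

    def configure_gripper(detected_model):
        # Return the stored attributes used to set the vacuum protocol and
        # carrying-capacity limits for the detected gripper device.
        data = VACUUM_GRIPPER_DATA.get(detected_model)
        if data is None:
            raise KeyError(f"no registered data for gripper {detected_model!r}")
        return data

    print(configure_gripper("multi-gripper-3x1"))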
End effector
Fig. 7 is a front isometric top view of a portion of an end effector 140 in accordance with one or more embodiments of the present technique. Fig. 8 is a front isometric bottom view of the end effector 140 of fig. 7. Referring now to fig. 7, end effector 140 may include a mounting interface or bracket 209 ("mounting bracket 209") and a force detector assembly 205 coupled to the bracket 209 and the gripper assembly 141. Fluid line 207 may fluidly couple a pressurizing device, such as vacuum source 221 (not shown in fig. 8), to the gripper assembly 141.
The FOV (variable or fixed) of the vision sensor device 143 is directed generally below the gripper assembly 141 to provide detection of any object carried below the gripper assembly 141. The vision sensor device 143 may be positioned along the perimeter of the end effector 140 such that the vision sensor device 143 is below a substantially horizontal plane of one or more of the vacuum regions 117 (one identified) and, more specifically, the gripping surfaces of the gripping interfaces 121 (one identified). The term "substantially horizontal" generally refers to an angle within about +/-2 degrees from horizontal (e.g., within about +/-1 degree from horizontal, such as within about +/-0.7 degrees from horizontal). Typically, the end effector 140 includes multiple vacuum regions 117 that enable the robotic system 100 to grip target objects that otherwise could not be gripped by a single instance of the vacuum regions 117. However, due to the larger size of the end effector 140 relative to an end effector 140 having a single instance of the vacuum region 117, a larger area is occluded from detection sensors. As one advantage, positioning the vision sensor device 143 below the horizontal plane of the gripping interface 121 may provide the vision sensor device 143 with a FOV that includes the gripping interface 121 during initial contact with objects, including target objects, a view that is typically occluded for other instances of the vision sensor device 143 (e.g., instances that are not attached to the end effector 140 or that are positioned at different locations in the operating environment of the robotic system 100). Thus, during the gripping operation, the unobstructed FOV may provide real-time imaging sensor information for the robotic system, which may enable real-time or immediate adjustment of the position and motion of the end effector 140. As another advantage, the proximity between the vision sensor device 143 positioned below the horizontal plane of the gripping interface 121 and the objects (e.g., the non-target objects 112a, 112b of fig. 3) improves precision and accuracy during the gripping operation, which may prevent the end effector 140 from damaging the target object 112 and the non-target objects 112a, 112b adjacent to the target object, e.g., by crushing the objects.
For illustrative purposes, the vision sensor device 143 is shown positioned at a corner of the end effector 140 along its width; however, it should be understood that the vision sensor device 143 may be positioned differently. For example, the vision sensor device 143 may be centered along the width or length of the end effector 140. As another example, the vision sensor device 143 may be positioned at another corner or at another location along the length of the end effector.
Vacuum source 221 (fig. 7) may include, but is not limited to, one or more pressurization devices, pumps, valves, or other types of devices capable of providing negative pressure, drawing a vacuum (including a partial vacuum), or creating a pressure differential. In some embodiments, the air pressure may be controlled with one or more regulators, such as a regulator between the vacuum source 221 and the gripper assembly 141 or a regulator in the gripper assembly 141. When the vacuum source 221 performs vacuum suction, air may be drawn (indicated by arrows in fig. 8) into the bottom 224 of the gripper assembly 141. The pressure level may be selected based on the size and weight of the object to be carried. If the vacuum level is too low, the gripper assembly 141 may not be able to pick up one or more target objects. If the vacuum level is too high, the outside of the package may be damaged (e.g., a package contained in an outer plastic bag may be torn by the high vacuum level). According to some embodiments, the vacuum source 221 may provide a vacuum level of approximately 100 mbar, 500 mbar, 1,000 mbar, 2,000 mbar, 4,000 mbar, 6,000 mbar, 8,000 mbar, etc. In alternative embodiments, higher or lower vacuum levels are provided. In some embodiments, the vacuum level may be selected based on the desired clamping force. At a given vacuum level (e.g., 25%, 50%, or 75% of the maximum vacuum level of the vacuum source 221), the vacuum clamping force of each region 117 may be equal to or greater than about 50 N, 100 N, 150 N, 200 N, or 300 N. These clamping forces can be achieved when picking up cartons, plastic bags, or other packages suitable for transport. Different vacuum levels may be used, including when transporting the same or different objects. For example, a relatively high vacuum may be provided to first clamp the object. Once the package has been gripped, the gripping force (and hence the vacuum level) required to continue holding the object may be reduced, and a lower vacuum level may therefore be provided. While performing certain tasks, the clamping vacuum may be increased to maintain a secure grip.
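The relation between vacuum level and clamping force can be pictured with a short sketch based on the ideal relation F = delta-P x A. This is not taken from the patent; the areas, safety factor, and function names are assumptions used only to illustrate how a vacuum level could be matched to a package weight.

def clamping_force_newtons(vacuum_level_mbar: float, sealed_area_m2: float) -> float:
    """Ideal holding force of one vacuum region: F = delta_P * A.

    1 mbar = 100 Pa; leakage through porous seals and package surfaces
    reduces the real force, so a safety factor is applied when selecting
    the vacuum level below.
    """
    return vacuum_level_mbar * 100.0 * sealed_area_m2

def required_vacuum_mbar(package_mass_kg: float, sealed_area_m2: float,
                         safety_factor: float = 2.0, g: float = 9.81) -> float:
    """Smallest vacuum level whose ideal force exceeds the package weight times a margin."""
    weight_n = package_mass_kg * g
    return (weight_n * safety_factor) / (100.0 * sealed_area_m2)

# Example: a 5 kg carton held by one region sealing roughly 0.01 m^2 (100 cm^2).
print(round(clamping_force_newtons(500, 0.01)))    # ~500 N at 500 mbar
print(round(required_vacuum_mbar(5.0, 0.01)))      # ~98 mbar needed with a 2x margin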
Force detector assembly 205 may include one or more sensors 203 (one shown) configured to detect a force indicative of a load carried by end effector 140. The detected measurements may include linear force measurements, moment measurements, pressure measurements, or a combination thereof along one or more axes of a coordinate system. In some embodiments, the sensor 203 may be an F-T sensor, such as a component having a six-axis force sensor configured to detect up to three-axis forces (e.g., forces detected along the x, y, and z axes of a Cartesian coordinate system) and/or three-axis moments (e.g., moments detected about the x, y, and z axes of a Cartesian coordinate system). In some embodiments, the sensor 203 may include built-in amplifiers and microcomputers for signal processing, the ability to make static and dynamic measurements, and/or the ability to detect instantaneous changes based on sampling intervals. In some embodiments, with reference to a Cartesian coordinate system, one or more force measurements along one or more axes (i.e., F(x-axis), F(y-axis), and/or F(z-axis)) and one or more moment measurements about one or more axes (i.e., M(x-axis), M(y-axis), and/or M(z-axis)) may be captured via the sensors 203. By applying a CoM calculation algorithm, the weight of the package, the location of the package, and/or the number of packages may be determined. For example, the weight of the package may be calculated from the one or more force measurements, and the CoM of the package may be calculated from the one or more force measurements and the one or more moment measurements. In some embodiments, the weight of the package is calculated from one or more force measurements, package location information from the vision sensor device 143, and/or grip information (e.g., the locations sealed by the one or more packages). In some embodiments, the sensors 203 may be communicatively coupled with a processing unit (e.g., PU 150 of fig. 6) via wired and/or wireless communication.
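The weight and CoM calculation described above can be illustrated with a short static-equilibrium sketch. It is not the patent's algorithm; it assumes gravity along -z, a single gripped package, readings expressed in the F-T sensor frame, and no arm acceleration.

def weight_and_com_offset(force_n, moment_nm):
    """Estimate package weight and its (x, y) offset from the F-T sensor frame.

    Static equilibrium with gravity along -z:
        F_z = -W                      -> W = -F_z
        M   = r x F, with F = (0, 0, -W)
        M_x = -W * r_y, M_y = W * r_x -> r_x = M_y / W, r_y = -M_x / W
    Dynamic effects (acceleration of the arm) are ignored in this sketch.
    """
    fx, fy, fz = force_n
    mx, my, mz = moment_nm
    weight = -fz
    if weight <= 0:
        raise ValueError("No load detected along -z")
    r_x = my / weight
    r_y = -mx / weight
    return weight, (r_x, r_y)

# A 3 kg package (~29.4 N) offset 5 cm in +x and 2 cm in -y from the sensor axis.
W, (rx, ry) = weight_and_com_offset((0.0, 0.0, -29.4), (0.588, 1.47, 0.0))
print(round(W, 1), round(rx, 3), round(ry, 3))   # 29.4 0.05 -0.02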
In some embodiments, output readings from both the force detector assembly 205 and the vision sensor device 143 may be used. For example, the relative position of the object may be determined based on the output from the vision sensor device 143. The output of the force detector assembly 205 can then be used to determine information about each object, such as the weight/mass of each object. The force detector assembly 205 may include a contact sensor, pressure sensor, force sensor, strain gauge, piezoresistive/piezoelectric sensor, capacitive sensor, elastoresistive sensor, torque sensor, linear force sensor, or other tactile sensor, each configured to measure a characteristic associated with direct contact between multiple physical structures or surfaces. For example, the force detector assembly 205 can measure a characteristic corresponding to the clamping force of the end effector on the target object, or measure the weight of the target object. Accordingly, the force detector assembly 205 can output a contact metric representing a quantified measure, such as a measured force or torque corresponding to the degree of contact or attachment between the gripper and the target object. For example, the contact metric may include one or more force or torque readings associated with the force applied by the end effector on the target object. Such output may also be provided by other detectors integrated with or attached to the end effector 140. For example, sensor information from the contact sensor (such as the weight or weight distribution of the target object based on the force-torque sensor information), in combination with imaging sensor information (such as the size of the target object), may be used by the robotic system to determine the identity of the target object, such as by automatic registration or an automated object registration system.
Fig. 9 is an exploded isometric view of a gripper assembly 141 in accordance with one or more embodiments of the present technique. The gripper assembly 141 includes a housing 260 and an inner assembly 263. The housing 260 may surround and protect the internal components and may define an opening 270 configured to receive at least a portion of the force detector assembly 205. The inner assembly 263 can include a gripper support assembly 261 ("support assembly 261"), a manifold assembly 262, and a plurality of grippers 264a, 264b, 264c (collectively, "grippers 264"). The support assembly 261 can hold each of the vacuum grippers 264, which can be fluidically coupled in series or in parallel with a fluid line (e.g., fluid line 207 of fig. 7) via the manifold assembly 262, as discussed in connection with fig. 10 and 11. In some embodiments, the support assembly 261 includes an elongate support 269 and brackets 267 (one identified) that couple the grippers 264 to the elongate support 269. The gripper assembly 141 may include suction elements, sealing members (e.g., sealing panels), and other components discussed in connection with fig. 13-15.
Fig. 10 and 11 are rear isometric and plan views, respectively, of components of a gripper assembly in accordance with one or more embodiments of the present technique. Manifold assembly 262 may include gripper manifolds 274a, 274b, 274c (collectively "manifolds 274") coupled to respective grippers 264a, 264b, 264c. For example, manifold 274a controls the gas flow associated with gripper 264a. In some embodiments, the manifolds 274 may be connected in parallel or in series to a pressurized source, such as the vacuum source 221 of fig. 7. In other embodiments, each manifold 274 may be fluidly coupled to a separate pressurization device.
The manifolds 274 may be operated to distribute vacuum to one, some, or all of the grippers 264. For example, manifold 274a may be in an open state to allow air to flow through the bottom of gripper 264a. Air flows through manifold 274a and exits the vacuum gripper assembly via a line (such as line 207 of fig. 7). The other manifolds 274b, 274c may be in a closed state to prevent suction through the manifolds 274b, 274c. Each manifold 274 may include, but is not limited to, one or more lines connected with each of the suction elements. In other embodiments, the suction elements of the gripper 264a are connected to an internal vacuum chamber. The gripper manifolds 274 may include, but are not limited to, one or more lines or channels, valves (e.g., check valves, shut-off valves, three-way valves, etc.), cylinders, regulators, orifices, sensors, and/or other components capable of controlling fluid flow. Each manifold 274 may be used to distribute suction forces evenly or unevenly to a suction element or groups of suction elements to produce a uniform or non-uniform vacuum clamping force. Electronic circuitry may communicatively couple the manifolds 274 to the controller to provide power to and control the manifolds and their components. In one embodiment, individual manifolds 274 may include a common interface and plug, which may enable quick and easy addition and removal of manifolds 274 and components, thereby facilitating system reconfiguration, maintenance, and/or repair.
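As an illustration of how addressable vacuum regions might be driven, the following sketch opens only the manifolds corresponding to the regions selected for a grip. It is not the patent's control logic; the class names, valve interface, and region identifiers are hypothetical.

class ManifoldValve:
    """Hypothetical stand-in for a single gripper manifold's shut-off valve."""
    def __init__(self, region_id: str):
        self.region_id = region_id
        self.open = False

    def set_open(self, state: bool) -> None:
        self.open = state
        print(f"region {self.region_id}: {'suction ON' if state else 'suction OFF'}")

class MultiGripperController:
    """Applies vacuum only at the addressable regions chosen for the current grip."""
    def __init__(self, region_ids):
        self.valves = {rid: ManifoldValve(rid) for rid in region_ids}

    def apply_suction(self, selected_regions):
        for rid, valve in self.valves.items():
            valve.set_open(rid in selected_regions)

# Three regions as in a 1 x 3 arrangement; only the region over the target is enabled.
controller = MultiGripperController(["117a", "117b", "117c"])
controller.apply_suction({"117a"})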
The number, arrangement, and configuration of the grippers can be selected based on the desired number of addressable vacuum regions. Fig. 12 is an isometric view of internal components of a vacuum gripper assembly 300 (housing not shown) suitable for use with the environment of fig. 1-2 and the gripper assembly 141 of fig. 3-6, in accordance with one or more embodiments of the present technique. The vacuum gripper assembly 300 may include six vacuum grippers 302 (one identified) in a generally rectangular arrangement. In other embodiments, the grippers may be in a circular arrangement, a square arrangement, or another suitable arrangement, and may have similar or different configurations. The grippers may have other shapes, including, but not limited to, oval, non-polygonal, etc. Each gripper may include a suction element (e.g., a suction tube, suction cup, sealing member, etc.), a sealing member, a valve plate, a gripper mechanism, and other fluidic components for providing gripping capability.
One or more of the sensors, visual sensor devices, and other components discussed in connection with fig. 1-11 may be incorporated into or used with the vacuum gripper assembly 300. The suction element, sealing member and other components are discussed in connection with fig. 13-15.
The vacuum grippers may be arranged in series. For example, the vacuum grippers may be arranged one after the other in a 1 x 3 configuration, which provides two lateral gripping positions and one central gripping position. However, it should be understood that the end effector may include different numbers of vacuum grippers, rows of suction channels, or vacuum regions in configurations different from one another. For example, the end effector may include four vacuum grippers or rows of suction channels arranged in a 2 x 2 configuration. The vacuum region may have a width dimension that is the same as or similar to the length dimension so as to have a symmetrical square shape. As another example, the end effector may include a different number of vacuum regions, such as two vacuum regions or more than three vacuum regions having the same or different length and/or width dimensions from one another. In yet another example, the vacuum grippers may be arranged in various configurations, such as a 2 x 2 configuration with four vacuum regions, a 1:2 configuration including five vacuum grippers, or other geometric arrangements and/or configurations.
Fig. 13 illustrates a multi-gripper assembly 400 ("gripper assembly 400") suitable for use with a robotic system (e.g., robotic system 100 of fig. 1-2) in accordance with some embodiments of the present technique. Fig. 14 is an exploded view of the gripper assembly 400 of fig. 13. Gripper assembly 400 may be any gripper or gripper assembly configured to grip a parcel from a fixed location (e.g., a fixed location on an unstacking platform, such as platform 110 of fig. 3). The gripper assembly 400 may include a gripper mechanism 410 and a contact or sealing member 412 ("sealing member 412"). The gripper mechanism 410 includes a body 414 and a plurality of suction elements 416 (one identified in fig. 14), each configured to pass through a corresponding opening 418 (one identified in fig. 14) of the sealing member 412. When assembled, each of the suction elements 416 may extend partially or completely through the corresponding opening 418. For example, the suction elements 416 may extend through the first side 419 toward the second side 421 of the sealing member 412.
Fig. 15 is a partial cross-sectional view of the sealing member 412 and the suction element 416. The suction element 416 may be in fluid communication with a line (e.g., line 422 of fig. 14) via a vacuum chamber and/or an internal conduit 430. A valve 437 (e.g., check valve, relief valve, etc.) may be positioned along the air flow path 436. The sensor 434 may be positioned to detect the vacuum level and may be in communication with a controller (e.g., the controller 109 of fig. 1) or a processing unit (e.g., the processing unit 150 of fig. 6) via a wired or wireless connection. The lower end 440 of the suction element 416 may include, but is not limited to, a suction cup or another suitable feature for forming a desired seal (e.g., a substantially airtight seal or other suitable seal) with a surface of an object. When the lower end 440 is near or in contact with an object, the object may be pulled against the sealing member 412 as air is drawn into the port/inlet 432 ("inlet 432") of the suction element 416 (as indicated by the arrows). The air flows up along flow path 426 and through passage 433 of suction element 416. Air may flow through the valve 437 and into the conduit 430. In some embodiments, the conduit 430 may be connected to a vacuum chamber 439. For example, some or all of the pumping elements 416 may be connected to the vacuum chamber 439. In other embodiments, different sets of pumping elements 416 may be in fluid communication with different vacuum chambers. As shown, the suction element 416 may have a corrugated or rippled configuration to allow axial compression without restricting the airflow passage 433 therein. The configuration, height, and size of the suction element 416 may be selected based on the desired amount of compressibility.
The sealing member 412 may be made, in whole or in part, of a compressible material configured to deform to accommodate surfaces having different geometries, including high profile surfaces. The sealing member 412 may be made, in whole or in part, of foam, including closed cell foam (e.g., foam rubber). The material of the sealing member 412 may be porous to allow a small amount of air flow (i.e., air leakage) to avoid applying a high negative pressure that may, for example, damage a package such as a plastic bag.
Operation process
Fig. 16 is a flow diagram of a method 490 for operating a robotic system according to one or more embodiments of the present disclosure. Generally, a transport robot may receive image data representing at least a portion of a pickup environment. The robotic system may identify the target object based on the received image data. The robotic system may grip and secure the identified target object using a vacuum gripper assembly. The different units, components, and subcomponents of the robotic system 100 of fig. 1 may perform method 490, as discussed in detail below.
At block 500, the robotic system 100 may receive image data representing at least a portion of an environment. For example, the received image data may represent at least a portion of the stack 165 in the pick-up environment 163 of FIG. 3. The image data may include, but is not limited to, video, still images, lidar data, radar data, barcode data, or combinations thereof. In some embodiments, for example, the sensor 161 of fig. 3 may capture video or still images that are transmitted (e.g., via a wired or wireless connection) to a computer or controller, such as the controller 109 of fig. 1 and 6.
At block 502, the controller 109 (fig. 1) may analyze the image data to identify a target object in a group of objects, a pile of objects, or the like. For example, the controller 109 may identify individual objects based on the received image data and the surface images/data stored by RDS 170 (fig. 6). In some embodiments, information from the drop location is used to select the target object. For example, the target object may be selected based on the amount of space available at the drop location, a preferred stacking arrangement, and the like. The user may enter selection criteria to determine the order of object pick-up. In some embodiments, a map of the pickup environment (e.g., pickup environment 163 of fig. 3) may be generated based on the received image data. In some mapping protocols, edge detection algorithms are used to identify edges, surfaces, etc. of the objects. The map may be analyzed to determine which objects in the pickup area can be transported together. In some embodiments, a group of objects that can be simultaneously lifted and carried by the vacuum gripper is identified as the target objects.
The robotic system 100 of fig. 1 may select a target parcel or object 112 from the source objects as the target of the task to be performed. For example, the robotic system 100 may select the target object to pick according to a predetermined sequence, a set of rules, a template of object contours, or a combination thereof. As a particular example, the robotic system 100 may select as the target parcel an instance of the source parcel that is accessible to the end effector 140, such as the instance of the source parcel 112 located on top of the stack of source parcels, according to a point cloud/depth map representing distances and locations relative to the known locations of the imaging devices. In another particular example, the robotic system 100 may select as the target object an instance of the source package 112 located at a corner or edge and having two or more surfaces that are exposed or accessible to the end effector 140. In another particular example, the robotic system 100 may select the target object according to a predetermined pattern, such as from left to right or from closest to farthest relative to a reference location, with no or minimal disturbance or movement of other instances of the source parcel.
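One simple way to picture these selection rules is to rank detected objects by height and edge exposure derived from the depth map. This sketch is illustrative only; the detection pipeline, field names, and tie-breaking order are assumptions rather than the patent's method.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: str
    top_height_m: float      # height of the object's top surface from the depth map
    exposed_edges: int       # number of perimeter edges not bordered by neighbors
    x_m: float               # position used for left-to-right tie-breaking

def select_target(candidates):
    """Prefer the topmost object, then objects at corners/edges, then left-to-right."""
    return max(
        candidates,
        key=lambda o: (round(o.top_height_m, 2), o.exposed_edges, -o.x_m),
    )

stack = [
    DetectedObject("A", 1.20, 1, 0.2),
    DetectedObject("B", 1.20, 2, 0.5),   # same layer, but a corner object
    DetectedObject("C", 0.90, 3, 0.1),
]
print(select_target(stack).object_id)    # "B"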
At block 504, the controller 109 may select a vacuum gripper or region for gripping the target object. For example, the controller 109 (fig. 1) may select the vacuum region 117a (fig. 4) to hold the parcel 112, as shown in fig. 3, because the entire parcel 112 (i.e., the target object) is substantially directly below the vacuum region 117 a. Vacuum suction will be performed through substantially all of the suction elements 151 (e.g., at least 90%, 95%, 98% of the suction elements 151) of the vacuum region 117a of fig. 4.
At block 506, the controller 109 generates one or more commands for controlling the robotic system 100. In some modes of operation, the commands may cause the robotic system to draw air at the identified or selected addressable vacuum regions. For example, the controller 109 may generate one or more pick-up commands to cause a vacuum source (e.g., vacuum source 221 of fig. 7) to provide a vacuum at a selected vacuum level. The vacuum level may be selected based on the weight or mass of the one or more target objects, the task to be performed, and the like. Commands may be sent to the gripper assembly 141 to cause the manifold assembly 262 to operate so as to provide suction at the selected regions or grippers. Feedback from the vision sensor device 143 (fig. 7) can be used to monitor the pick and transfer process.
At block 508, the vision sensor device 143 may be used to verify the position of the end effector 140 relative to an object, including a source or target object, such as the package 112 of fig. 1. Visual sensor device 143 may be used to continuously or periodically monitor the relative position of end effector 140 with respect to an object before and during object pick-up, during object transport, and/or during and after object launch. The output from the vision sensor device 143 may also be used to count objects (e.g., count the number of target or source objects) or otherwise analyze objects, including analyzing stacks of objects. The vision sensor device 143 may also be used to obtain environmental information for navigating the robotic system 100.
At block 510, the controller 109 generates commands to cause the actuation devices (e.g., the actuation devices 212), motors, servos, and other components of the robotic arm 139 to move the gripper assembly 141. The transfer commands may be generated by the robotic system to cause the robotic transport arm to move the gripper assembly 141 carrying the object between locations. A transport command may be generated based on a transport plan that includes a transport path to deliver the object to the drop location without causing the object to impact another object. The vision sensor device 143 (fig. 7) may be used to avoid collisions.
The method 490 may be performed to clamp multiple target objects. The end effector 140 may be configured to grip multiple instances of a target parcel or object of the source parcel or object. For example, robotic system 100 may generate instructions for end effector 140 to engage multiple instances of vacuum region 117 to perform a gripping operation to simultaneously grip multiple instances of a target object. As a particular example, end effector 140 may be used to execute instructions for gripping operations for gripping multiple instances of a target object individually and sequentially one after another. For example, the instructions may include performing a gripping operation using one of the rows of suction channels 117 to grip a first instance of the target object 112 in one pose or one orientation, and then, if desired, repositioning the end effector 140 to engage a second or different instance of the vacuum region 117 to grip a second instance of the target object. In another particular example, end effector 140 may be used to execute instructions for simultaneously gripping separate instances of a target object. For example, end effector 140 may be positioned to contact two or more instances of the target object simultaneously and engage each of the corresponding instances of vacuum region 117 to perform a gripping operation on each of the multiple instances of the target object. In the above embodiment, each of the suction channel rows 117 may be independently operated as needed to perform different clamping operations.
Fig. 17 is a flow diagram of a method 700 for operating the robotic system 100 of fig. 1 according to a base plan, in accordance with one or more embodiments of the present technique. The method 700 includes steps that may be incorporated into the method 490 of fig. 16 and may be implemented based on execution of instructions stored on one or more of the storage devices 204 of fig. 2 with one or more of the processors 202 of fig. 2 or the controller 109 of fig. 6. The data captured by the vision sensor device, as well as the sensor output, may be used at various steps of method 700, as described in detail below.
At block 702, the robotic system 100 may interrogate (e.g., scan) one or more designated areas, such as a pick-up area and/or a drop area (e.g., a source drop area, a destination drop area, and/or a transport drop area). In some embodiments, the robotic system 100 may use (e.g., via commands/prompts sent by the processor 202 of fig. 2) one or more of the imaging device 222 of fig. 2, the sensors 161 and/or 145 of fig. 6, or other sensors to generate imaging results for one or more specified regions. The imaging results may include, but are not limited to, captured digital images and/or point clouds, object location data, and the like.
At block 704, the robotic system 100 may identify the target package 112 of fig. 1 and an associated location (e.g., the start location 114 of fig. 1 and/or the task location 116 of fig. 1). In some embodiments, for example, the robotic system 100 (e.g., via the processor 202) may analyze the imaging results according to a pattern recognition mechanism and/or a set of rules to identify an object contour (e.g., a perimeter edge or surface). The robotic system 100 may further identify the groupings of object contours as corresponding to each unique instance of the object (e.g., according to predetermined rules and/or a pose template). For example, the robotic system 100 may identify groupings of object contours that correspond to patterns in color, brightness, depth/position, or a combination thereof (e.g., the same value or varying at a known rate/pattern) in a contour line (object line). Also, for example, the robotic system 100 may recognize groupings of object contours from predetermined shape/pose templates defined in the master data.
The robotic system 100 may select an object from the recognized objects in the pickup location (e.g., according to a predetermined sequence or set of rules and/or templates of object contours) as the targeted parcel 112. For example, the robotic system 100 may select one or more target packages 112 as one or more objects located on top, such as from a point cloud representing distances/locations relative to known locations of sensors. Also, for example, the robotic system 100 may select the target package 112 as one or more objects located at corners/edges and having two or more surfaces exposed/shown in the imaging results. Available vacuum grippers and/or zones may also be used to select the target package. Further, the robotic system 100 may select the target package 112 according to a predetermined pattern (e.g., from left to right, from closest to farthest, etc. relative to a reference location).
In some embodiments, the end effector 140 may be configured to grasp multiple instances of the target package 112 from the source package. For example, robotic system 100 may generate instructions for end effector 140 to engage multiple instances of gripper region 117 to perform a gripping operation to simultaneously grip multiple instances of target package 112. As a particular example, the end effector 140 may be used to execute instructions for gripping operations for gripping multiple instances of the target package 112 individually and sequentially one after another. For example, the instructions may include performing a gripping operation using one of the gripper regions 117 to grip a first instance of the target package 112 in one pose or one orientation, and then, if necessary, repositioning the end effector 140 to engage a second or different instance of the gripper region 117 to grip a second instance of the target package 112. In another particular example, end effector 140 may be used to execute instructions for simultaneously gripping separate instances of target package 112. For example, end effector 140 may be positioned to contact two or more instances of target package 112 simultaneously and engage each of the corresponding instances of gripper region 117 to perform a gripping operation on each of the multiple instances of target package 112. In the above embodiments, each of the gripper regions 117 may be independently operated as needed to perform different gripping operations.
For the selected target package 112, the robotic system 100 may further process the imaging results to determine a starting position 114 and/or an initial pose. For example, the robotic system 100 may determine the initial pose of the target package 112 based on selecting one of a plurality of predetermined pose templates (e.g., different potential arrangements of object contours according to corresponding orientations of the objects) that corresponds to the lowest difference metric when compared to the grouping of object contours. Moreover, the robotic system 100 may determine the starting location 114 by translating the location of the target package 112 in the imaging results (e.g., a predetermined reference point for determining the pose) to a location in a grid used by the robotic system 100. The robotic system 100 may translate the position according to a predetermined calibration map.
In some embodiments, the robotic system 100 may process the imaging results of the drop zone to determine the open space between the objects. The robotic system 100 may determine the open space based on mapping the outline lines according to a predetermined calibration map that converts the image positions to real positions and/or coordinates used by the system. The robotic system 100 may determine the open space as the space between the outline lines (and thus the object surfaces) belonging to different groupings/objects. In some embodiments, the robotic system 100 may determine an open space suitable for the target package 112 based on measuring one or more dimensions of the open space and comparing the measured dimensions to one or more dimensions of the target package 112 (e.g., stored in the master data). The robotic system 100 may select one of the appropriate/open spaces as the task location 116 according to a predetermined pattern (e.g., left-to-right, closest-to-farthest, bottom-to-top, etc. with respect to a reference location).
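A dimension-based fit check of this kind can be sketched as follows. The clearance value, dimensions, and names are assumptions added for illustration, not values from the patent.

def fits(open_space_dims, package_dims, clearance_m=0.02):
    """Return True if the package footprint (L, W) fits in the open space with some clearance.

    Both orientations of the package footprint are tried, since the end effector
    can be rotated about the vertical axis before placing.
    """
    sl, sw = open_space_dims
    pl, pw = package_dims
    for a, b in ((pl, pw), (pw, pl)):
        if a + clearance_m <= sl and b + clearance_m <= sw:
            return True
    return False

open_spaces = {"slot_1": (0.35, 0.25), "slot_2": (0.60, 0.40)}
package = (0.38, 0.28)
suitable = [name for name, dims in open_spaces.items() if fits(dims, package)]
print(suitable)   # ['slot_2']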
In some embodiments, the robotic system 100 may determine the task location 116 without or in addition to processing the imaging results. For example, the robotic system 100 may place an object in a placement area according to a predetermined sequence of actions and positions without imaging the area. In addition, a sensor (e.g., vision sensor device 143) attached to the vacuum gripper assembly 141 may output image data for periodically imaging the area. The imaging result may be updated based on the additional image data. Also, for example, the robotic system 100 may process the imaging results to perform multiple tasks (e.g., transfer multiple objects, such as objects located on a common layer/stack of the stack).
At block 706, the robotic system 100 may calculate a base plan for the target package 112. For example, the robotic system 100 may calculate a base motion plan based on calculating a sequence of commands, settings, or a combination thereof for the actuation device 212 of fig. 2 that will operate the robotic arm system 130 and/or the end effector (e.g., the end effector 140 of fig. 3-5). For some tasks, the robotic system 100 may calculate the sequences and settings that will manipulate the robotic arm system 130 and/or the end effector 140 to transfer the target package 112 from the start location 114 to the task location 116. The robotic system 100 may implement a motion planning mechanism (e.g., a process, a function, an equation, an algorithm, a computer-generated/readable model, or a combination thereof) configured to compute a path in space according to one or more constraints, objectives, and/or rules. For example, the robotic system 100 may use a predetermined algorithm and/or another grid-based search to calculate a path through space for moving the target package 112 from the start location 114 to the task location 116. The motion planning mechanism may use additional processes, functions, or equations and/or translation tables to translate the path into a sequence of commands, settings, or a combination thereof for the actuation device 212. When using the motion planning mechanism, the robotic system 100 may calculate a sequence that will operate the robotic arm 139 (fig. 3) and/or the end effector 140 (fig. 3) and cause the target package 112 to follow the calculated path. The vision sensor device 143 can be used to identify any obstacles, recalculate the path, and refine the base plan.
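The patent does not name a particular search algorithm. As one concrete example of a grid-based search, a breadth-first search over a 2D occupancy grid could look like the following sketch; a real planner would work in the robot's configuration space and add reach, orientation, and collision-margin constraints, but the idea of searching a discretized space for a collision-free path is the same.

from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if blocked.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(grid_path(grid, (0, 0), (2, 3)))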
At block 708, the robotic system 100 may begin executing the base plan. The robotic system 100 may begin executing the base motion plan based on operating the actuation device 212 according to a command or set sequence, or a combination thereof. The robotic system 100 may perform a first set of actions in the base motion plan. For example, the robotic system 100 may operate the actuation device 212 to place the end effector 140 at the calculated position and/or orientation about the starting position 114 to grasp the target package 112, as shown in block 752.
At block 754, the robotic system 100 may analyze the target package 112 using sensor information obtained before and/or during the gripping operation (e.g., information from the vision sensor device 143, the sensor 216, and the force detector assembly 205), such as the weight of the target package 112, the centroid of the target package 112, the relative position of the target package 112 with respect to the vacuum regions, or a combination thereof. The robotic system 100 may operate the actuation device 212 and the vacuum source 221 (fig. 7) to cause the end effector 140 to engage and grip the target package 112. Image data from the vision sensor device 143 and/or data from the force detector assembly 205 may be used to analyze the location and quantity of the target packages 112. At block 755, the vision sensor device 143 may be used to verify the position of the end effector 140 relative to the target package 112 or other objects. In some embodiments, as shown at block 756, the robotic system 100 may perform an initial lift by moving the end effector upward by a predetermined distance. In some embodiments, the robotic system 100 may reset or initialize an iteration counter "i" for tracking the number of gripping attempts.
At block 710, the robotic system 100 may measure the established clamping force. The robotic system 100 may measure the established clamping force based on readings from the force detector assembly 205 of fig. 7, the vision sensor device 143, or other sensors, such as the pressure sensor 434 (fig. 15). For example, the robotic system 100 may determine the gripping characteristics by measuring forces, torques, pressures, or combinations thereof at one or more locations on the robotic arm 139, one or more locations on the end effector 140, or combinations thereof using one or more of the force detector assemblies 205 of fig. 3. In some embodiments, such as for the clamping force established by the gripper assembly 141, the contact or force measurements may correspond to the quantity, the locations, or a combination thereof of the suction elements (e.g., suction elements 416 of fig. 14) that contact the surface of the target package 112 and maintain a vacuum therein. Additionally or alternatively, the clamping characteristics may be determined based on output from the vision sensor device 143. For example, image data from the vision sensor device 143 may be used to determine whether an object is moving relative to the end effector 140 during transport.
At decision block 712, the robotic system 100 may compare the measured grip force to a threshold (e.g., an initial grip force threshold). For example, the robotic system 100 may compare the contact or force measurements to a predetermined threshold. The robotic system 100 may also compare image data from the detector 143 to reference image data (e.g., image data captured at the time of initial object pickup) to determine whether the gripped objects have moved, for example, relative to each other or relative to the gripper assembly 141. Thus, the robotic system 100 may determine whether the contact/grip force is sufficient to continue manipulating (e.g., lifting, transferring, and/or reorienting) one or more target packages 112.
As shown at decision block 714, when the measured gripping force fails to meet the threshold, the robotic system 100 may evaluate whether an iteration count for re-gripping one or more target packages 112 has reached an iteration threshold. When the iteration count is less than the iteration threshold and the contact or force measurements fail to meet (e.g., are below) the threshold, the robotic system 100 may deviate from the base motion plan. Accordingly, at block 720, the robotic system 100 may operate the robotic arm 139 and/or the end effector 140 to perform re-gripping actions not included in the base motion plan. For example, the re-gripping action may include a predetermined sequence of commands, settings, or a combination thereof for the actuation device 212 that will cause the robotic arm 139 to lower the end effector 140 (e.g., by reversing the initial lift) and/or cause the end effector 140 to release one or more target packages 112 and re-grip them. In some embodiments, the predetermined sequence may also manipulate the robotic arm 139 to adjust the position of the gripper after releasing the target object and before re-gripping the target object, or to change the area where vacuum suction is applied. While performing the re-gripping action, the robotic system 100 may pause the execution of the base motion plan. After performing the re-grip action, the robotic system 100 may increment the iteration count.
After re-gripping the object, the robotic system 100 may measure the established gripping force as described above for block 710 and evaluate the established gripping force as described above for block 712. The robotic system 100 may attempt to re-clamp the target package 112 as described above until the iteration count reaches an iteration threshold. When the iteration count reaches the iteration threshold, the robotic system 100 may stop executing the base motion plan, as shown at block 716. In some embodiments, the robotic system 100 may request operator input, as shown at block 718. For example, the robotic system 100 may generate an operator notification (e.g., a predetermined message) via the communication device 206 of fig. 2 and/or the input/output device 208 of fig. 2. In some embodiments, the robotic system 100 may cancel or delete the base motion plan, record a predetermined status (e.g., error code) for the corresponding task, or perform a combination thereof. In some embodiments, as described above, the robotic system 100 may reinitiate the process by imaging the pick/task area (block 702) and/or identifying another item in the pick area as a target object (block 704).
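The re-grip loop of blocks 710-720 can be summarized with a small control sketch. The threshold value, iteration limit, and callable names are placeholders; the sensor and actuation calls are hypothetical hooks, not the patent's API.

INITIAL_GRIP_THRESHOLD_N = 100.0   # example value; real thresholds would come from master data
MAX_REGRIP_ITERATIONS = 3

def establish_grip(measure_grip_force, regrip, notify_operator):
    """Measure the grip, re-grip up to an iteration limit, and escalate on failure.

    measure_grip_force(): returns the current grip metric (e.g., force in newtons)
    regrip():             releases, repositions, and re-grips the target package
    notify_operator():    the error handling described at blocks 716/718
    """
    for iteration in range(MAX_REGRIP_ITERATIONS + 1):
        force = measure_grip_force()
        if force >= INITIAL_GRIP_THRESHOLD_N:
            return True               # continue with the base motion plan (block 722)
        if iteration == MAX_REGRIP_ITERATIONS:
            break
        regrip()                       # deviation from the base plan (block 720)
    notify_operator()                  # stop the base plan and request input
    return False

# Toy usage: a grip that succeeds on the second attempt.
readings = iter([60.0, 140.0])
establish_grip(lambda: next(readings), lambda: print("re-gripping"),
               lambda: print("operator notified"))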
When the measured gripping force (e.g., the measured gripping force of each held package) satisfies the threshold, the robotic system 100 may continue to perform the remainder/actions of the base motion plan, as shown at block 722. Similarly, when the contact metric satisfies the threshold after re-gripping the target package 112, the robotic system 100 may resume execution of the paused base motion plan. Thus, the robotic system 100 may continue to perform sequential actions (i.e., actions after gripping and/or initial lifting) in the basic motion plan by operating the actuation device 212 and/or transport motor 214 of fig. 2 according to the remaining command and/or setup sequence. For example, the robotic system 100 may transfer and/or reorient the target package 112 (e.g., vertically and/or horizontally) according to a base movement plan.
In executing the base motion plan, the robotic system 100 may track the current location and/or current orientation of the target package 112. The robotic system 100 may track the current position to position one or more portions of the robot arm and/or end effector based on the output from the position sensor 224 of fig. 2. In some embodiments, the robotic system 100 may track the current position by processing the output of the position sensor 224 with a computer-generated model, procedure, equation, position map, or a combination thereof. Thus, the robotic system 100 may combine the positions or orientations of the joints and structural members and further map the positions to a grid to calculate and track the current position 424. In some embodiments, the robotic system 100 may include multiple beacon sources. The robotic system 100 may measure beacon signals at one or more locations in the robot arm and/or end effector and use the measurements (e.g., signal strength, time stamp or propagation delay and/or phase shift) to calculate the separation distance between the signal source and the measured location. The robotic system 100 may map the separation distances to known locations of the signal sources and calculate the current location of the signal receiving location as the location where the mapped separation distances overlap.
At decision block 724, the robotic system 100 may determine whether the base plan has been completely executed to the end. For example, the robotic system 100 may determine whether all actions (e.g., commands and/or settings) in the base motion plan 422 have been completed. Also, when the current position matches the task position 116, the robotic system 100 may determine that the base motion plan is complete. When the robotic system 100 has completed executing the base plan, the robotic system 100 may reinitiate the process by imaging the pick/task area (block 702) and/or identifying another item in the pick area as a target object (block 704), as described above.
Otherwise, at block 726, the robotic system 100 may measure the gripping force during transfer of the target package 112 (i.e., by determining the contact/force measurements). In other words, the robotic system 100 may determine contact/force measurements while executing the base motion plan. In some embodiments, the robotic system 100 may determine the contact/force measurements based on a sampling frequency or at a predetermined time. In some embodiments, the robotic system 100 may determine the contact/force measurements before and/or after performing a predetermined number of commands or settings with the actuation device 212. For example, the robotic system 100 may sample the contact sensor 226 after or during a particular class of manipulation (such as for lifting or rotation). Also, for example, the robotic system 100 may sample the contact sensor 226 when the direction and/or magnitude of the accelerometer output matches or exceeds a predetermined threshold indicative of sudden or rapid movement. The robotic system 100 may determine contact/force measurements using one or more of the processes described above (e.g., for block 710).
In some embodiments, robotic system 100 may determine the orientation of the gripper and/or target package 112 and adjust the contact metric accordingly. The robotic system 100 may adjust the contact metric based on the orientation to account for a directional relationship between a sensing direction of the contact sensor and a gravitational force applied to the target object according to the orientation. For example, the robotic system 100 may calculate an angle between the sensing direction and a reference direction (e.g., a "downward" or gravitational direction) as a function of the orientation. The robotic system 100 may scale or multiply the contact/force measurements according to factors and/or signs corresponding to the calculated angles.
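For example, the adjustment might be a simple cosine projection between the sensing direction and gravity, as in the sketch below. The patent leaves the exact factor open, so this is only an assumed form of the scaling.

import math

def adjusted_contact_metric(raw_measurement, sensing_dir, gravity_dir=(0.0, 0.0, -1.0)):
    """Scale a contact/force reading by the angle between the sensing axis and gravity.

    When the end effector is tilted, only a component of the package weight acts
    along the sensor's axis; dividing by cos(angle) recovers a comparable metric.
    """
    dot = sum(s * g for s, g in zip(sensing_dir, gravity_dir))
    norm = math.sqrt(sum(s * s for s in sensing_dir)) * math.sqrt(sum(g * g for g in gravity_dir))
    cos_angle = dot / norm
    if abs(cos_angle) < 1e-6:
        raise ValueError("Sensing axis is perpendicular to gravity; reading is uninformative")
    return raw_measurement / cos_angle

# Sensor axis tilted 30 degrees away from straight down.
tilted = (0.0, math.sin(math.radians(30)), -math.cos(math.radians(30)))
print(round(adjusted_contact_metric(85.0, tilted), 1))   # ~98.1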
At decision block 728, the robotic system 100 may compare the measured grip force to a threshold (e.g., a transfer grip force threshold). In some embodiments, the transfer grip force threshold may be less than or equal to the initial grip force threshold associated with evaluating the initial (e.g., prior to transfer) grip on the target package 112. In other words, the robotic system 100 may apply a stricter rule when evaluating the gripping force before initiating transfer of the target package 112: the threshold requirement is initially higher so that a grip sufficient to pick up the target package 112 is also likely to remain sufficient while transferring the target package 112.
When the measured gripping force satisfies (e.g., is not less than) the threshold and the correct package is gripped (e.g., determined based on image data from the vision sensor device 143), the robotic system 100 may proceed to execute the base plan as shown at block 722 and described above. When the measured gripping force fails to meet (e.g., is less than) the threshold or the correct package is not gripped, the robotic system 100 may deviate from the base motion plan and perform one or more responsive actions, as shown at block 530. When the measured clamping force is insufficient based on the threshold, the robotic system 100 may operate the robotic arm 139, the end effector, or a combination thereof, based on commands and/or settings not included in the base motion plan. In some embodiments, the robotic system 100 may perform different commands and/or settings based on the current location.
For purposes of illustration, responsive actions will be described using controlled delivery. However, it should be understood that other actions may be performed by the robotic system 100, such as by stopping execution of the base motion plan as indicated at block 716 and/or by requesting operator input as indicated at block 718.
Controlled delivery includes one or more actions for placing the target package 112 in one of the delivery areas (e.g., rather than the task location 116) in a controlled manner (i.e., by lowering and/or releasing the target package 112, rather than as a result of a complete grip failure). In performing a controlled delivery, the robotic system 100 may dynamically (i.e., in real time and/or while executing the base motion plan) calculate different positions, maneuvers or paths, and/or actuator commands or settings as a function of the current position. In some embodiments, end effector 140 may be configured for grip-release operations for multiple instances of the target package 112. For example, in some embodiments, end effector 140 may be configured to perform a grip-release operation simultaneously or sequentially by selectively disengaging the vacuum regions 117 as needed to release each instance of the target package 112 accordingly. The robotic system 100 may select whether to release the objects simultaneously or sequentially based on the locations of the objects being held, the arrangement of the objects in the drop zone, and the like.
At block 762, the robotic system 100 may calculate an adjusted drop position and/or associated pose for placing the target package 112. In calculating the adjusted drop position, the robotic system 100 may identify a drop zone (e.g., a source drop zone, a destination drop zone, or a transport drop zone) that is closest to and/or ahead of the current position (e.g., between the current position and the task position). Likewise, when the current position is between the drop zones (i.e., not in the drop zone), the robotic system 100 may calculate a distance from the drop zone (e.g., a distance from a representative reference position of the drop zone). Thus, the robotic system 100 may identify the drop zone closest to and/or in front of the current location. Based on the identified drop zone, the robotic system 100 may calculate a position therein as an adjusted drop position. In some embodiments, the robotic system 100 may calculate an adjusted drop position based on selecting positions according to a predetermined order (e.g., left-to-right, bottom-to-top, and/or front-to-back with respect to a reference position).
In some embodiments, the robotic system 100 may calculate a distance from the current location to an open space within the drop zone (e.g., as identified in block 704 and/or tracked according to the ongoing object placement). The robotic system 100 may select an open space in front of and/or closest to the current location 424 as the adjusted drop location.
In some embodiments, prior to selecting a drop zone and/or open space, the robotic system 100 may use a predetermined process and/or equation to convert the amount of contact/force to a maximum transfer distance. For example, the predetermined process and/or equation may estimate the corresponding maximum transfer distance and/or duration before complete failure of the clamp based on various values of the contact metric. Thus, the robotic system 100 may filter out available drop zones and/or open spaces that are further than the maximum transfer distance from the current location. In some embodiments, when the robotic system 100 fails to identify an available drop zone and/or open space (e.g., when the available drop zone is full), the robotic system 100 may stop executing the base movement plan, as shown at block 716, and/or request operator input, as shown at block 718.
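A sketch of that filtering step follows. The mapping from contact metric to maximum transfer distance would in practice be calibrated from the predetermined process/equation mentioned above; the linear toy model, distances, and names here are assumptions for illustration.

def max_transfer_distance_m(contact_metric_n, full_grip_n=200.0, max_reach_m=3.0):
    """Toy model: a weaker grip is trusted for a proportionally shorter transfer."""
    return max_reach_m * min(contact_metric_n / full_grip_n, 1.0)

def reachable_drop_zones(current_pos, drop_zones, contact_metric_n):
    """Keep only drop zones closer than the estimated maximum transfer distance."""
    limit = max_transfer_distance_m(contact_metric_n)
    cx, cy = current_pos
    reachable = []
    for name, (zx, zy) in drop_zones.items():
        if ((zx - cx) ** 2 + (zy - cy) ** 2) ** 0.5 <= limit:
            reachable.append(name)
    return reachable

zones = {"source_drop": (0.5, 0.2), "transport_drop": (1.2, 0.8), "destination": (2.8, 1.5)}
print(reachable_drop_zones((0.0, 0.0), zones, contact_metric_n=80.0))   # ['source_drop']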
At block 766, the robotic system 100 may calculate an adjusted movement plan to transfer the target package 112 from the current location to the adjusted drop off location. The robotic system 100 may calculate an adjusted motion plan in a manner similar to that described above for block 506.
At block 768, the robotic system 100 may execute the adjusted motion plan in addition to and/or in place of the base motion plan. For example, the robotic system 100 may operate the actuation device 212 according to a command or set sequence, or a combination thereof, thereby manipulating the robotic arm 139 and/or end effector to move the target package 112 according to the path.
In some embodiments, the robotic system 100 may pause execution of the base motion plan and execute the adjusted motion plan. In some embodiments, once target package 112 is placed at the adjusted drop location based on executing the adjusted movement plan (i.e., completing the execution of the controlled drop), robotic system 100 may attempt to re-grip target package 112 as described above for block 720 and then measure the established gripping force as described above for block 710. In some embodiments, the robotic system 100 may attempt to re-clamp the target package 112 up to the iteration limit as described above. If the contact metric satisfies the initial clamp force threshold, the robotic system 100 may reverse the adjusted motion plan (e.g., return to the pause point/position) and continue executing the remainder of the paused base motion plan. In some embodiments, the robotic system 100 may update and recalculate the adjusted motion plan from the current position 424 (after re-clamping) to the task position 116 and execute the adjusted motion plan to complete the execution task.
In some embodiments, the robotic system 100 may update an area log (e.g., a record of open space and/or placed objects) of the utilized delivery area to reflect the placed target package 112. For example, the robotic system 100 may regenerate the imaging results for the corresponding drop zone. In some embodiments, the robotic system 100 may cancel the remaining actions of the base motion plan after performing the controlled delivery and placing the target package 112 at the adjusted delivery location. In one or more embodiments, the transport drop area may include a tray or basket placed on top of one of the transport units 106 of fig. 1. At a specified time (e.g., when a tray/basket is full and/or when an incoming tray/basket is delayed), the corresponding transport unit may be transferred from the drop area to the pick-up area. Thus, the robotic system 100 may re-implement the method 700, thereby re-identifying the delivered items as target packages 112 and transferring them to the corresponding task locations 116.
Once the target package 112 has been placed at the adjusted drop position, the robotic system 100 may repeat the method 700 for a new target object. For example, the robotic system 100 may determine the next object in the pick-up area as the target package 112, calculate a new base motion plan to transfer the new target object, and so on.
In some embodiments, the robotic system 100 may include a feedback mechanism that updates the path computation mechanism based on the contact metrics 312. For example, as the robotic system 100 implements the action to re-grip the target package 112 having the adjusted position (e.g., as described above for block 720), the robotic system 100 may store the position of the end effector that produced the contact/force measurement that satisfies the threshold (e.g., as described above for block 712). The robotic system 100 may store the location in association with the targeted package 112. When the number of gripping failures and/or successful re-gripping actions reaches a threshold, the robotic system 100 may analyze the stored location for gripping the target package 112 (e.g., using a running window to analyze the most recent set of actions). When a predetermined number of re-gripping actions occur for a particular object, the robotic system 100 may update the motion planning mechanism to place the gripper at a new position (e.g., a position corresponding to the maximum number of successes) relative to the target package 112.
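That feedback loop might be sketched as follows. It is illustrative only; how the motion planning mechanism consumes the updated grip pose, and the threshold used, are assumptions rather than details given in the patent.

from collections import defaultdict, Counter

REGRIP_UPDATE_THRESHOLD = 5

successful_grip_poses = defaultdict(Counter)   # object type -> counts of successful offsets

def record_successful_regrip(object_type: str, grip_offset_mm: tuple):
    """Remember which gripper offset finally produced a passing contact metric."""
    successful_grip_poses[object_type][grip_offset_mm] += 1

def preferred_grip_offset(object_type: str):
    """Once enough re-grips were needed, bias future plans toward the most successful offset."""
    counts = successful_grip_poses[object_type]
    if sum(counts.values()) < REGRIP_UPDATE_THRESHOLD:
        return None                    # keep using the default grip pose
    return counts.most_common(1)[0][0]

for _ in range(3):
    record_successful_regrip("carton_A", (20, 0))
for _ in range(2):
    record_successful_regrip("carton_A", (0, 10))
print(preferred_grip_offset("carton_A"))   # (20, 0) once the threshold of 5 is reached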
Based on the operations represented in block 710 and/or block 726, the robotic system 100 (e.g., via the processor 202) may track progress in executing the base motion plan. In some embodiments, the robotic system 100 may track progress based on horizontal transfer of one or more target packages 112. The robotic system 100 may track progress based on measuring the established clamping force before initiating the horizontal transfer (block 710) and based on measuring the clamping force during the transfer after initiating the horizontal transfer (block 726). Thus, the robotic system 100 may selectively generate a new set of actuator commands, actuator settings, or a combination thereof (i.e., as opposed to a base motion plan) based on the progress as described above.
In other embodiments, the robotic system 100 may track progress based on, for example, the commands, settings, or a combination thereof that have been communicated to and/or implemented by the actuation devices 212. Based on the progress, the robotic system 100 may selectively generate a new set of actuator commands, actuator settings, or a combination thereof to perform a re-grip response action and/or a controlled-release response action. For example, when the progress is before any horizontal transfer of the target package 112, the robotic system 100 may select the initial grip force threshold and perform the operation represented in block 712 (e.g., via a function call or jump instruction) and proceed from there. Likewise, when the progress is after the horizontal transfer of the target package 112 has begun, the robotic system 100 may select the transfer grip force threshold and perform the operation represented in block 728 (e.g., via a function call or jump instruction) and proceed from there.
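A minimal sketch of the progress-dependent threshold selection described above; the numeric values are arbitrary placeholders, not values taken from the disclosure.

def select_grip_threshold(horizontal_transfer_started,
                          initial_threshold=0.8, transfer_threshold=0.6):
    # Before any horizontal transfer, apply the initial grip force threshold
    # (block 712); once transfer has begun, apply the transfer threshold (block 728).
    if not horizontal_transfer_started:
        return initial_threshold
    return transfer_threshold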
Implementing granular control/manipulation of the target package 112 (i.e., selectively implementing or deviating from the base motion plan) based on contact/force measurements and on vision-based monitoring via imaging data from the vision sensor device 143 improves the efficiency, speed, and accuracy of transferring objects. For example, re-gripping the target package 112 when the contact metric is below the initial grip force threshold, or when the package 112 is improperly positioned, reduces the likelihood of a grip failure during transfer, which in turn reduces the number of objects lost or inadvertently dropped mid-transfer. The vacuum region and vacuum level may also be adjusted to maintain the desired grip force, further enhancing handling of the package 112. In addition, each lost object requires manual intervention to address the consequences (e.g., removing the lost object from the motion path of subsequent tasks, checking the lost object for damage, and/or completing the task for the lost object). Thus, reducing the number of lost objects reduces the labor required to perform the tasks and/or the overall operation.
Figs. 18-21 illustrate stages of robotically gripping and transporting objects according to the method 490 of Fig. 16 or the method 700 of Fig. 17, in accordance with one or more embodiments of the present disclosure. Fig. 18 shows the gripper assembly 141 positioned over a stack of objects. The robotic arm 139 may position the gripper assembly 141 directly over the target objects. The controller may analyze the image data from the vision sensor device 143 to identify, for example, target objects 812a, 812b, as discussed at block 704 of Fig. 17. A plan (e.g., a pick-up plan or a base plan) may be generated based on the collected image data. The plan may be generated based on (a) the carrying capacity of the gripper assembly 141 and/or (b) the configuration of the target objects.
Fig. 19 shows the lower surface of the gripper assembly 141 covering the target objects 812a, 812b as well as a large non-target object 818. The output from the vision sensor device 143 can be analyzed to confirm the position of the gripper assembly 141 relative to the target objects. Based on the positions of the objects 812a, 812b, the vacuum regions 117a, 117b are identified for vacuum suction. In some embodiments, readings from the force sensor 203 are used to confirm that the gripper assembly 141 has contacted the upper surface of the stack 814 before and/or after gripping the target objects 812a, 812b.
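One way to pick the regions to energize is an overlap test between each region's footprint and the detected object boxes. The following Python sketch is illustrative only, and the axis-aligned bounding-box representation is an assumption rather than anything specified in the disclosure.

def select_vacuum_regions(region_footprints, target_boxes, nontarget_boxes):
    # Boxes and footprints are (x0, y0, x1, y1) rectangles in the grip-interface plane.
    def overlaps(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    selected = []
    for region_id, footprint in region_footprints.items():
        hits_target = any(overlaps(footprint, box) for box in target_boxes)
        hits_nontarget = any(overlaps(footprint, box) for box in nontarget_boxes)
        # Energize only regions that cover a target and avoid the non-target object.
        if hits_target and not hits_nontarget:
            selected.append(region_id)
    return selected

With the geometry described for Fig. 19, such a test would select regions such as 117a and 117b while leaving a region over the non-target object unselected.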
Fig. 20 shows air being drawn into the vacuum regions 117a, 117b, as indicated by the arrows, to hold the target objects 812a, 812b against the gripper assembly 141, with no vacuum (or no substantial vacuum) drawn at the other vacuum region 117c. The vacuum level may be increased or decreased to increase or decrease compression of one or more flexible panels 412 (one identified). The vacuum grip force may be evaluated as discussed in connection with block 710 of Fig. 17.
Fig. 21 shows the raised gripper assembly 141 securely holding the target objects 812a, 812b. The vision sensor device 143 may be used to monitor the position of the target objects 812a, 812b. Additionally or alternatively, the force detector assembly 205 may be used to determine information about the load, such as the position and weight of the target objects 812a, 812b. The vacuum regions 117a, 117b may continue to draw in air to securely hold the target objects 812a, 812b. As discussed at block 726 of Fig. 17, the vacuum grip force may be monitored during the transfer. The applied vacuum may be stopped or reduced to release the objects 812a, 812b. This process may be repeated to transfer each of the objects in the stack.
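The overall staged sequence of Figs. 18-21 can be condensed into the following illustrative Python sketch; every callable is an assumed placeholder for the corresponding hardware or planning operation and is not an API defined by the disclosure.

def pick_and_transfer(position_over_targets, activate_regions, measure_grip,
                      move_to, release_regions, selected_regions,
                      grip_threshold, destination):
    position_over_targets()              # Fig. 18: gripper placed over the targets
    activate_regions(selected_regions)   # Fig. 20: draw air only at selected regions
    if measure_grip() < grip_threshold:  # block 710: verify the established grip
        release_regions(selected_regions)
        return False
    move_to(destination)                 # Fig. 21: regions keep suction during transfer
    # The grip may be re-checked during the move (block 726) before releasing.
    release_regions(selected_regions)    # stop or reduce the vacuum to release
    return True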
Conclusion
The above detailed description of examples of the disclosed technology is not intended to be exhaustive or to limit the disclosed technology to the precise form disclosed above. While specific examples of the disclosed technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the disclosed technology, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative embodiments may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or subcombinations. Each of these processes or blocks may be implemented in a number of different ways. Also, while processes or blocks are sometimes shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Moreover, any specific numbers mentioned herein are merely examples; alternate embodiments may use different values or ranges.
These and other changes can be made to the disclosed technology in light of the above detailed description. While the detailed description describes certain examples of the disclosed technology, as well as the best mode contemplated, no matter how detailed the above appears in text, the disclosed technology can be practiced in many ways. The details of the system may vary widely in its specific embodiments while still being encompassed by the technology disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the disclosed technology should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the disclosed technology with which that terminology is associated. Accordingly, the invention is not limited except as by the appended claims. In general, unless terms used in the following claims are explicitly defined in the detailed description section above, such terms should not be construed to limit the disclosed technology to the specific examples disclosed in the specification.
While certain aspects of the invention are presented below in certain claim forms, the applicants contemplate the various aspects of the invention in any number of claim forms. Accordingly, the applicants reserve the right to add additional claims after filing the application in order to pursue such additional claim forms.

Claims (20)

1. A method for operating a transport robot including a gripper assembly, the method comprising:
calibrating the gripper assembly based on a calibration plate having a pattern for calibrating a position of the gripper assembly in an operating environment relative to a vision sensor device attached to the gripper assembly; wherein the calibration plate is attached to the vision sensor device; wherein the vision sensor device is positioned below a horizontal plane of a gripping interface of the gripper assembly to provide the vision sensor device with a field of view of the gripping interface including the gripper assembly at least during initial contact with an object;
receiving an image captured by the vision sensor device, wherein the captured image represents a group of objects within a gripping region of the gripper assembly;
identifying one or more target objects in the group based on the received images;
selecting at least one of a plurality of independently selectable vacuum regions of the gripper assembly based on the one or more target objects; wherein the plurality of independently selectable vacuum regions form a bottom of the gripper assembly and are configured to draw air to hold the one or more target objects with the gripper assembly;
generating commands and/or settings to
grip the one or more target objects using the selected at least one of the plurality of independently selectable vacuum regions; and
robotically move the gripper assembly to transport the gripped one or more target objects away from other objects in the group.
2. The method of claim 1, wherein the received image comprises lidar data, radar data, video, still images, or a combination thereof.
3. The method of claim 1, further comprising vacuum suctioning by a suction element of the plurality of independently selectable vacuum regions, the suction element positioned to hold the one or more target objects via a vacuum gripping force.
4. The method of claim 1, wherein identifying the one or more target objects comprises
mapping at least a portion of a pick-up area using the received image, wherein the group is at the pick-up area; and
analyzing the mapping to determine which objects at the pick-up area are capable of being transported together by the gripper assembly.
5. The method of claim 1, further comprising:
determining a set of objects in the group that can be simultaneously lifted and carried by the gripper assembly; and
wherein identifying the one or more target objects comprises selecting one or more objects from the determined set as the one or more target objects.
6. The method of claim 1, further comprising:
determining whether at least one object in the group is near or at the gripping region, and
in response to determining that the at least one object in the group is near or at the gripping region,
causing the vision sensor device to capture image data of the at least one object in the group, and
determining a position of the at least one object in the group.
7. The method of claim 1, further comprising:
generating a pickup plan based on the received image;
transporting the objects in the group based on the pickup plan using the gripper assembly; and
monitoring the transport of the objects in the group using the vision sensor device.
8. The method of claim 1, further comprising:
generating a first command to cause the gripper assembly to reach a pickup position based on the identified position of the one or more target objects, and
generating a second command to cause vacuum to be drawn via a selected at least one of the plurality of independently selectable vacuum regions overlying the identified one or more target objects without vacuum being drawn through others of the independently selectable vacuum regions overlying a possible non-target object in the group.
9. A robotic transport system, comprising:
a robotic device;
an end effector coupled to the robotic device and comprising
a gripper assembly comprising
a plurality of independently selectable vacuum regions, and
a manifold assembly configured to fluidly couple each of the independently selectable vacuum regions to at least one vacuum line such that each independently selectable vacuum region is capable of independently providing suction to hold a target object while the robotic device moves the gripper assembly carrying the target object; and
a vision sensor device attached to the gripper assembly and positioned to capture image data representative of the target object carried by the gripper assembly, wherein the vision sensor device comprises a calibration plate for calibrating a position of the vision sensor device relative to the gripper assembly; wherein the vision sensor device is positioned below a horizontal plane of a gripping interface of the gripper assembly to provide the vision sensor device with a field of view of the gripping interface including the gripper assembly at least during initial contact with an object; and
a controller programmed to cause selected ones of the independently selectable vacuum regions to hold the target object based on the captured image data, wherein each of the independently selectable vacuum regions comprises a plurality of suction elements configured for vacuum gripping.
10. An end effector, comprising:
a gripper assembly comprising
A plurality of independently selectable vacuum zones defining a vacuum clamping zone, an
A manifold assembly configured to fluidly couple each of the independently selectable vacuum regions to at least one vacuum line such that each independently selectable vacuum region is capable of independently providing suction to hold a target object as the gripper assembly carrying the target object is moved by a robotic device; and
a vision sensor device carried by the gripper assembly and configured to capture image data representative of at least a portion of the vacuum gripping zone, the vision sensor device comprising a calibration plate for calibrating a position of the vision sensor device relative to the gripper assembly; wherein the vision sensor device is positioned below a horizontal plane of a gripping interface of the gripper assembly to provide the vision sensor device with a field of view of the gripping interface including the gripper assembly at least during initial contact with an object.
11. The end effector as set forth in claim 10 wherein said gripper assembly includes
a plurality of suction elements fluidly coupled to the manifold assembly, and
a faceplate through which the plurality of suction elements extend such that, when air is drawn into the suction elements, the target object is drawn against the faceplate to increase the vacuum gripping force provided by the gripper assembly.
12. The end effector of claim 11, wherein the captured image data represents the target object carried by the gripper assembly, the vision sensor device is positioned to detect the presence of one or more objects held against the faceplate by the gripper assembly, and the captured image data includes lidar data, radar data, video, still images, or a combination thereof.
13. The end effector of claim 10, wherein the vision sensor device is positioned to a side of the vacuum gripping zone such that there is no obstruction below the vacuum gripping zone when the gripping interface of the gripper assembly is in a substantially horizontal orientation.
14. The end effector as set forth in claim 10 wherein
the vision sensor device is configured to output the captured image data to identify one or more target objects in a group, and
the gripper assembly is configured to grip and carry the identified one or more target objects away from other objects in the group.
15. The end effector of claim 10, wherein the captured image data includes data enabling identification of multiple objects spaced apart from one another.
16. The end effector of claim 10, wherein the gripper assembly is configured to hold the target object using selected ones of the independently selectable vacuum regions based on the captured image data, wherein each of the independently selectable vacuum regions comprises a plurality of suction elements configured for vacuum gripping.
17. The end effector of claim 10, wherein the end effector is configured to create a pressure differential at an area of the vacuum gripping zone corresponding to the target object to selectively grip the target object.
18. The end effector of claim 10, wherein the vision sensor device has a field of view extending across the vacuum gripping zone to capture image data representative of the vacuum gripping zone.
19. The end effector of claim 10, wherein the vision sensor device is configured to capture image data representing a gap between objects located directly below the gripper assembly.
20. The end effector of claim 10, wherein the end effector is configured to be fluidly coupled to an external vacuum source such that each of the independently selectable vacuum regions is fluidly coupled to the external vacuum source via the at least one vacuum line.
CN202010769547.3A 2019-08-21 2020-07-24 Robotic multi-gripper assembly and method for gripping and holding objects Active CN111993448B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201962889562P 2019-08-21 2019-08-21
US62/889,562 2019-08-21
US16/855,751 US11345029B2 (en) 2019-08-21 2020-04-22 Robotic multi-gripper assemblies and methods for gripping and holding objects
US16/855,751 2020-04-22
CN202010727610.7A CN112405570A (en) 2019-08-21 2020-07-24 Robotic multi-gripper assembly and method for gripping and holding objects

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010727610.7A Division CN112405570A (en) 2019-08-21 2020-07-24 Robotic multi-gripper assembly and method for gripping and holding objects

Publications (2)

Publication Number Publication Date
CN111993448A CN111993448A (en) 2020-11-27
CN111993448B true CN111993448B (en) 2022-02-08

Family

ID=73466351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010769547.3A Active CN111993448B (en) 2019-08-21 2020-07-24 Robotic multi-gripper assembly and method for gripping and holding objects

Country Status (1)

Country Link
CN (1) CN111993448B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112871682B (en) * 2020-12-08 2022-10-04 梅卡曼德(上海)机器人科技有限公司 Express delivery package supply system, method, equipment and storage medium
US11911801B2 (en) 2020-12-11 2024-02-27 Intelligrated Headquarters, Llc Methods, apparatuses, and systems for automatically performing sorting operations
CN112657860A (en) * 2021-01-15 2021-04-16 佛山科学技术学院 Automatic queuing system and queuing method
CN112799085A (en) * 2021-02-02 2021-05-14 苏州威达智电子科技有限公司 Ceramic disc transfer mechanism with disc edge profile measurement function
CN113387132B (en) * 2021-05-12 2023-09-12 合肥欣奕华智能机器股份有限公司 Substrate operation platform and control method thereof
CN114535143A (en) * 2022-01-27 2022-05-27 阿丘机器人科技(苏州)有限公司 Logistics goods sorting method, device, equipment and storage medium
CN115220400B (en) * 2022-03-04 2023-04-11 弥费科技(上海)股份有限公司 Wafer transfer-based supervisory control method, system, computer equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7017961B1 (en) * 2004-08-06 2006-03-28 Bakery Holdings Llc Compressive end effector
JP4903627B2 (en) * 2007-04-24 2012-03-28 Juki株式会社 Surface mounter and camera position correction method thereof
JP2010005769A (en) * 2008-06-30 2010-01-14 Ihi Corp Depalletizing apparatus and depalletizing method
JP2014176926A (en) * 2013-03-14 2014-09-25 Yaskawa Electric Corp Robot system and method for conveying work
JP6692247B2 (en) * 2016-08-04 2020-05-13 株式会社東芝 Article holding device and article holding method
JP2018047515A (en) * 2016-09-20 2018-03-29 株式会社東芝 Robot hand device and transportation device using robot hand device
US10207868B1 (en) * 2016-12-06 2019-02-19 Amazon Technologies, Inc. Variable compliance EOAT for optimization of GCU
US10902377B2 (en) * 2018-01-24 2021-01-26 Amazon Technologies, Inc. Robotic item handling using a variable area manipulator
CN109129544A (en) * 2018-08-04 2019-01-04 安徽派日特智能装备有限公司 A kind of robot hand picking up vehicle-carrying DVD
CN109483554B (en) * 2019-01-22 2020-05-12 清华大学 Robot dynamic grabbing method and system based on global and local visual semantics
CN112405570A (en) * 2019-08-21 2021-02-26 牧今科技 Robotic multi-gripper assembly and method for gripping and holding objects

Also Published As

Publication number Publication date
CN111993448A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN113021401B (en) Robotic multi-jaw gripper assembly and method for gripping and holding objects
US11904468B2 (en) Robotic multi-gripper assemblies and methods for gripping and holding objects
CN111993448B (en) Robotic multi-gripper assembly and method for gripping and holding objects
CN110329710B (en) Robot system with robot arm adsorption control mechanism and operation method thereof
CN110465960B (en) Robot system with article loss management mechanism
US9205558B1 (en) Multiple suction cup control
US10766141B1 (en) Robotic system with a coordinated transfer mechanism
JP7175487B1 (en) Robotic system with image-based sizing mechanism and method for operating the robotic system
JP7126667B1 (en) Robotic system with depth-based processing mechanism and method for manipulating the robotic system
JP7264387B2 (en) Robotic gripper assembly for openable objects and method for picking objects
US20220332524A1 (en) Robotic multi-surface gripper assemblies and methods for operating the same
US20230071488A1 (en) Robotic system with overlap processing mechanism and methods for operating the same
CN115258510A (en) Robot system with object update mechanism and method for operating the robot system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant