CN115592691A - Robot system with gripping mechanism and related systems and methods - Google Patents

Robot system with gripping mechanism and related systems and methods

Info

Publication number
CN115592691A
CN115592691A (application CN202211009384.4A)
Authority
CN
China
Prior art keywords
target object
gripping
coupled
assembly
mechanical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211009384.4A
Other languages
Chinese (zh)
Inventor
雷磊
张艺轩
叶旭涛
徐熠
布兰登·蔻兹
鲁仙·出杏光
赖智立
黄国豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mujin Technology
Original Assignee
Mujin Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/885,366 (published as US20230052763A1)
Application filed by Mujin Technology filed Critical Mujin Technology
Publication of CN115592691A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 15/00: Gripping heads and other end effectors
    • B25J 11/00: Manipulators not otherwise provided for
    • B25J 18/00: Arms
    • B25J 19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators

Abstract

Robotic systems having variable gripping mechanisms and related systems and methods are disclosed herein. In some embodiments, the robotic system includes a robotic arm and an object gripping assembly coupled to the robotic arm. The object gripping assembly may comprise: a main body coupled to the robotic arm by an external connector on an upper surface of the main body; and a vacuum-operated gripping member coupled to a lower surface of the body. The object gripping assembly can also include a variable-width gripping member coupled to the body. The variable-width gripping member is movable between a fully collapsed state, a plurality of extended states, and a clamped state to grip a variety of target objects having different shapes, sizes, weights, and orientations.

Description

Robot system with gripping mechanism and related systems and methods
This application is a divisional application of Chinese application CN202210977869.6, filed on August 15, 2022 and entitled "Robot system with gripping mechanism and related systems and methods."
Cross Reference to Related Applications
This application claims the benefit of U.S. provisional patent application No. 63/232,663, filed on August 13, 2021, which is incorporated herein by reference in its entirety.
Technical Field
The present technology relates generally to robotic systems having gripper mechanisms, and more particularly to robotic systems having features for identifying target objects and adjusting gripper mechanisms based on the target objects.
Background
With the ever-increasing performance and decreasing cost of robots (e.g., machines configured to automatically/autonomously perform physical actions), many robots have now become widely used in many areas. For example, robots may be used to perform various tasks (e.g., spatially manipulate or transfer objects) during manufacturing and/or assembly, packaging and/or bagging, transportation and/or shipping, and the like. In performing a task, the robot may replicate human actions, replacing or reducing human involvement that would otherwise be required to perform a dangerous or repetitive task.
However, despite technological advances, robots often lack the advancement necessary to replicate the human interaction required to perform larger and/or more complex tasks. Accordingly, there remains a need for improved techniques and systems for managing operations and/or interactions between robots.
Drawings
Fig. 1 is an illustration of an example environment in which a robotic system having a gripper mechanism may operate, in accordance with some embodiments of the present technique.
Fig. 2 is a block diagram illustrating the robotic system of fig. 1, in accordance with some embodiments of the present technique.
Fig. 3 is an illustration of a robotic object holding system in accordance with some embodiments of the present technique.
Fig. 4 is a schematic side view of an object holding assembly, in accordance with some embodiments of the present technology.
Fig. 5A and 5B are isometric views of an object holding assembly in accordance with some embodiments of the present technology.
Fig. 6A-6C illustrate the object holding assembly of fig. 5A and 5B in various states, in accordance with some embodiments of the present technology.
Fig. 7 is a flow diagram of a process for operating a robotic system having an object gripping assembly, in accordance with some embodiments of the present technique.
Fig. 8A-8C are partial schematic views of an object clamping assembly 800 at various stages of a process for clamping a target object 802, in accordance with some embodiments of the present technique.
Fig. 9A-9C are partial schematic views of an object clamping assembly 900 at various stages of a process for clamping a target object 902, in accordance with some embodiments of the present technique.
Fig. 10A-10E are partially schematic illustrations of an object holding assembly 1000 at various stages of a process for holding a target object 1002, in accordance with some embodiments of the present technique.
Fig. 11 is an illustration of an object holding system 1100 in accordance with other embodiments of the present technique.
The drawings are not necessarily to scale. Similarly, some components and/or operations may be separated into different blocks or combined into a single block for the purpose of discussing some implementations of the technology. In addition, while the techniques may be subject to various modifications and alternative forms, specific implementations have been shown by way of example in the drawings and will be described in detail below. However, it is not intended to limit the techniques to the particular implementations described.
For ease of reference, the end effector and its components are sometimes described herein with reference to top and bottom, upper and lower, up and down, longitudinal plane, horizontal plane, x-y plane, vertical plane, and/or z plane with respect to the spatial orientation of the embodiment shown in the figures. However, it should be understood that the end effector and its components may be moved to and used in different spatial orientations without changing the structure and/or function of the embodiments of the present disclosure.
Detailed Description
Overview
Robotic systems having variable gripping mechanisms and related systems and methods are disclosed herein. In some embodiments, the robotic system includes a robotic arm and an object gripping assembly (e.g., a multi-function end effector) coupled to the robotic arm. The object gripping assembly may be configured to selectively grip different types of objects, such as trays; packages, boxes, and/or other suitable objects for placement on the tray; and a slip sheet for placement over the tray and/or objects. The object gripping assembly may comprise: a main body coupled to the robotic arm by an external connector on an upper surface of the main body; and a vacuum-operated gripping member (e.g., a wrap-gripping portion) coupled to a lower surface of the body. The object gripping assembly can also include a variable-width gripping member (e.g., a tray-gripping portion, a slip-sheet-gripping portion, or a combination thereof) coupled to the body. The variable-width gripping member is movable between a fully collapsed state, a plurality of extended states, and a clamped state to allow the object gripping assembly to engage and lift a variety of target objects having different shapes, sizes, weights, and orientations.
In some implementations, the variable-width gripping member includes a linear extension mechanism coupled to the body, two rotation mechanisms coupled to opposite sides of the linear extension mechanism, and one or more mechanical grippers coupled to each of the rotation mechanisms. In the fully collapsed state, the linear extension mechanism is fully retracted to position the rotation mechanisms at a minimum distance apart. Further, each rotation mechanism is in a raised position, directing each of the mechanical grippers coupled to it upward from the lower surface of the body (e.g., vertically and/or partially vertically). When the object gripping assembly is in the fully collapsed state, the vacuum-operated gripping member is positioned to define a lowermost surface of the object gripping assembly and/or to engage a target object and grip it at the lowermost surface using suction.
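The fully collapsed, extended, and clamped states described above behave like a small state machine. The following Python sketch is illustrative only; the class, method, and state names are hypothetical and are not taken from the patent.

```python
from enum import Enum, auto


class GripperState(Enum):
    """Hypothetical states of the variable-width gripping member."""
    FULLY_COLLAPSED = auto()  # arms retracted, mechanical grippers rotated up
    EXTENDED = auto()         # arms extended to a target width
    CLAMPED = auto()          # grippers rotated down and engaged with the object


class VariableWidthGripper:
    # Allowed transitions between the states described in the text:
    # collapse -> extend -> clamp, and back again.
    _TRANSITIONS = {
        GripperState.FULLY_COLLAPSED: {GripperState.EXTENDED},
        GripperState.EXTENDED: {GripperState.EXTENDED, GripperState.CLAMPED,
                                GripperState.FULLY_COLLAPSED},
        GripperState.CLAMPED: {GripperState.EXTENDED},
    }

    def __init__(self):
        self.state = GripperState.FULLY_COLLAPSED
        self.width_mm = 0.0

    def extend(self, width_mm: float):
        """Linear extension mechanism sets the spacing between rotation mechanisms."""
        self._transition(GripperState.EXTENDED)
        self.width_mm = width_mm

    def clamp(self):
        """Rotation mechanisms lower the mechanical grippers below the body."""
        self._transition(GripperState.CLAMPED)

    def _transition(self, new_state: GripperState):
        if new_state not in self._TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

Note that, as described above, the assembly must pass through an extended state before clamping; the transition table enforces that ordering.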
To enter an extended state, one or more arms in the linear extension mechanism may extend to position the rotation mechanisms a greater distance apart. The extension may be adjusted based on one or more predetermined extension states (e.g., planned based on known widths of various target objects), based on one or more inputs (e.g., from robotic components, controllers, and/or human operators), based on one or more detected dimensions of the target object, and so on. By way of example only, the object gripping assembly may include an imaging sensor coupled to the body and positioned to collect image data of the target object, usable by a controller operatively coupled to the imaging sensor, the vacuum-operated gripping member, and the variable-width gripping member. The controller may be configured to: receive the image data from the imaging sensor; determine which of the fully collapsed state and the plurality of extended states to use to grip the target object; move the variable-width gripping member to the determined state; and control the vacuum-operated gripping member and/or the variable-width gripping member to grip the object. In various embodiments, the determination of which state to use may be based on the category of the target object, the orientation of the target object, a candidate gripping location on the target object, a measured size of the target object, and the like.
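One way a controller might map a detected object width onto one of the predetermined extension states is sketched below. The specific width values and function name are assumptions for illustration, not parameters from the patent.

```python
# Hypothetical pre-planned extension states, expressed as gripper widths
# in millimetres (assumed values).
PLANNED_WIDTHS_MM = [250.0, 500.0, 800.0, 1200.0]


def select_extension_width(measured_width_mm: float) -> float:
    """Return the smallest planned extension that still spans the object.

    Raises ValueError if no planned state is wide enough, so the caller
    can fall back to a different gripping strategy.
    """
    for width in sorted(PLANNED_WIDTHS_MM):
        if width >= measured_width_mm:
            return width
    raise ValueError("target object exceeds maximum gripper extension")
```

Choosing the smallest sufficient extension keeps the assembly compact, which matters when target objects are arranged close together around the station.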
To enter the clamped state, the rotation mechanisms are actuated (e.g., rotated) to a lowered position to direct the mechanical grippers below the lower surface of the body. In the clamped state, the mechanical grippers may engage and/or clamp the target object. In some embodiments, the variable-width gripping member further comprises one or more pressure cylinders, each corresponding to one of the mechanical grippers and positioned to press the target object against it. For example, a mechanical gripper may engage a lower surface of the target object while the corresponding pressure cylinder presses against an upper surface of the target object. The pressure cylinders may thereby help stabilize the target object.
In some embodiments, the variable-width gripping member further comprises one or more suction members coupled to the rotation mechanisms and configured to engage an upper surface of target-object types not engaged by the vacuum-operated gripping member and/or the mechanical grippers. Because the suction members are coupled to the rotation mechanisms, they are likewise movable between the fully collapsed state, the various extended states, and the clamped state.
For the sake of brevity, several details describing structures or processes that are well known and often associated with robotic systems and subsystems and that may unnecessarily obscure some important aspects of the disclosed technology are not set forth in the following description. Furthermore, while the following disclosure sets forth several embodiments of different aspects of the technology, several other embodiments may have configurations or components different from those described in this section. Accordingly, the disclosed technology may have other embodiments with additional elements or without several of the elements described below.
Many embodiments or aspects of the disclosure described below may take the form of computer-executable or controller-executable instructions, including routines executed by a programmable computer or controller. One skilled in the relevant art will appreciate that the disclosed techniques can be practiced on computers or controller systems other than those shown and described below. The techniques described herein may be embodied in a special purpose computer or data processor that is specifically programmed, configured, or constructed to perform one or more of the computer-executable instructions described below. Thus, the terms "computer" and "controller" as generally used herein refer to any data processor and may include internet appliances and hand-held devices, including palm-top computers, wearable computers, cellular or mobile phones, multi-processor systems, processor-based or programmable consumer electronics, network computers, minicomputers, and the like. The information processed by these computers and controllers may be presented on any suitable display medium, including a Liquid Crystal Display (LCD). Instructions for performing computer-or controller-executable tasks may be stored on or in any suitable computer-readable medium including hardware, firmware, or a combination of hardware and firmware. The instructions may be contained in any suitable memory device, including, for example, a flash drive, a USB device, and/or other suitable media.
The terms "coupled" and "connected," along with their derivatives, may be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct contact with each other. Unless otherwise apparent from the context, the term "coupled" may be used to indicate that two or more elements are in contact with each other, directly or indirectly (with other intervening elements between them), or that two or more elements cooperate or interact with each other (e.g., interact in a causal relationship, such as for signal transmission/reception or for function invocation), or both.
Exemplary Environment for a Robotic System
Fig. 1 is an illustration of an exemplary environment in which a robotic system 100 having an object handling mechanism may operate. The operating environment of the robotic system 100 may include one or more structures, such as robots or robotic devices, configured to perform one or more tasks. Aspects of the object handling mechanism may be practiced or implemented by various structures and/or components.
In the example shown in fig. 1, the robotic system 100 may include an unload unit 102, a transfer unit 104, a transport unit 106, a load unit 108, or a combination thereof in a warehouse, a distribution center, or a transport hub. Each of the units in the robotic system 100 may be configured to perform one or more tasks. The tasks may be combined in order to perform the operation to achieve the goal, such as unloading objects from a vehicle (such as a truck, trailer, van, or rail car) for storage in a warehouse, or unloading objects from a storage location and loading them onto a vehicle for shipment, for example. In another example, the tasks may include: an object is moved from one location (such as a container, bin, cage, basket, shelf, platform, tray, or conveyor) to another. Each of the units may be configured to perform a series of actions (such as operating one or more components therein) to perform a task.
In some embodiments, the tasks may include: interaction with the target object 112, such as manipulation, movement, reorientation of the object, or a combination thereof. The target object 112 is an object to be handled by the robotic system 100. More specifically, the target object 112 may be a particular object of many objects that is a target of an operation or task of the robotic system 100. For example, the target object 112 may be an object that the robotic system 100 has selected or is currently handling, manipulating, moving, reorienting, or a combination thereof. By way of example, the target object 112 may include a box, a tube, a package, a bundle, a wide variety of individual items, or any other object that may be handled by the robotic system 100.
As an example, a task may include transferring the target object 112 from an object source 114 to a task location 116. The object source 114 is a container or structure for holding objects and may take numerous configurations and forms. For example, the object source 114 may be a platform, such as a tray, rack, or conveyor belt, with or without walls, on which objects may be placed or stacked. As another example, the object source 114 may be a partially or fully enclosed container, such as a bin, cage, or basket, having walls or a lid within which objects may be placed. In some embodiments, the walls of a partially or fully enclosed object source 114 may be transparent, or may include openings or gaps of various sizes, such that objects contained therein may be fully or partially visible through the walls.
Fig. 1 shows examples of possible functions and operations that may be performed by the various units of the robotic system 100 to handle the target object 112, and it should be understood that the environments and conditions may differ from those described below. For example, the unloading unit 102 may be a vehicle-unloading robot configured to transfer the target object 112 from a location in a vehicle (such as a truck) to a location on a conveyor. Furthermore, the transfer unit 104, such as a palletizing robot, may be configured to transfer the target object 112 from a position on the conveyor belt to a position on the transport unit 106, such as for loading the target object 112 on a pallet on the transport unit 106. In another example, the transfer unit 104 may be a single-picking robot configured to transfer the target object 112 from one container to another. Upon completion of the operation, the transport unit 106 may transfer the target object 112 from the area associated with the transfer unit 104 to the area associated with the loading unit 108, and the loading unit 108 may transfer the target object 112 from the transport unit 106 to a storage location, such as a location on a shelf, for example by moving a tray carrying the target object 112.
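The handoff between units described above can be sketched as a simple sequential pipeline. The unit functions below are hypothetical stand-ins for the unloading, transfer, transport, and loading units of Fig. 1; the log messages are illustrative only.

```python
# Each function models one unit's task on a target object and returns a
# short log entry describing the handoff.
def unload(obj):
    return f"{obj}: unloaded from vehicle"


def transfer(obj):
    return f"{obj}: placed on pallet"


def transport(obj):
    return f"{obj}: moved to loading area"


def load(obj):
    return f"{obj}: stored on shelf"


def run_tasks(obj, units):
    """Run each unit's task in order and collect a simple operation log."""
    return [unit(obj) for unit in units]
```

Sequencing the tasks this way mirrors how the individual unit operations combine into a larger operation, such as moving an object from a truck to warehouse storage.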
In some embodiments, the robotic system 100 may include units (e.g., transfer units) configured to perform different tasks involving different target objects. For example, the robotic system 100 may include a transfer unit 104 configured to manipulate (via, for example, a multi-function end effector) packages, package containers (e.g., trays or bins), and/or support objects (e.g., slipsheets). The transfer unit 104 may be provided at a station having different target objects arranged around the transfer unit 104. The robotic system 100 may use a multi-function configuration to sequence and accomplish different tasks to achieve complex operations. Additionally or alternatively, the stations may be used to adapt or implement different types of tasks (e.g., wrapping/unwrapping objects from a shipping unit, stacking or grouping trays/pallets, etc.) depending on real-time requirements or conditions of the overall system 100. Details regarding the task and multifunction configuration are described below.
For illustrative purposes, the robotic system 100 is described in the context of a shipping center; however, it should be understood that the robotic system 100 may be configured to perform tasks in other environments or for other purposes (such as for manufacturing, assembly, packaging, healthcare, or other types of automation). It should also be understood that the robotic system 100 may include other units not shown in fig. 1, such as manipulators, service robots, and modular robots. For example, in some embodiments, the robotic system 100 may include: an unstacking unit for transferring objects from a cage, cart, or tray onto a conveyor or other tray; a container-switching unit for transferring an object from one container to another; a packing unit for packing objects; a sorting unit for grouping objects according to one or more of their characteristics; an item-picking unit for manipulating (such as sorting, grouping, and/or transferring) objects differently according to one or more of their characteristics; or a combination thereof.
The robotic system 100 may include a controller 109 configured to interact with and/or control one or more of the robotic units. For example, the controller 109 may include circuitry (e.g., one or more processors, memory, etc.) configured to derive motion plans and/or corresponding commands, settings, etc. for operating corresponding robotic units. The controller 109 may communicate the motion plan, commands, settings, etc. to the robotic unit, and the robotic unit may execute the communicated plan to complete a corresponding task, such as transferring the target object 112 from the object source 114 to the task location 116.
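As a rough sketch of the controller-to-unit flow described above, a derived motion plan might be represented as an ordered list of commands that the controller communicates to a robotic unit. The command vocabulary and data shape here are assumptions for illustration, not the patent's actual protocol.

```python
def build_motion_plan(source, destination):
    """Hypothetical motion plan for transferring an object between locations."""
    return [
        {"op": "move_to", "target": source},
        {"op": "grip"},
        {"op": "move_to", "target": destination},
        {"op": "release"},
    ]


def execute_plan(plan, unit):
    """Send each command to the robotic unit's executor callback in order."""
    results = []
    for command in plan:
        results.append(unit(command))
    return results
```

Separating plan derivation from execution matches the division of labor in the text: the controller derives and communicates the plan, and the robotic unit executes it to complete the task.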
Suitable system
Fig. 2 is a block diagram illustrating the robotic system 100 in accordance with one or more embodiments of the present technology. In some embodiments, the robotic system 100 may include electronic and/or electrical devices such as a control unit 202 (also sometimes referred to herein as a "processor"), a storage unit 204, a communication unit 206, a system input/output (I/O) device 208 having a system interface 210 (also sometimes referred to herein as a "user interface 210"), one or more actuation devices 212, one or more transport motors 214, and one or more sensor units 216, coupled to one another and/or integrated or coupled with one or more of the units or robots described above in fig. 1.
The control unit 202 may be implemented in a number of different ways. For example, the control unit 202 may be a processor, an Application Specific Integrated Circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware Finite State Machine (FSM), a Digital Signal Processor (DSP), or a combination thereof. The control unit 202 may execute software and/or instructions to provide the intelligence of the robotic system 100.
The control unit 202 can be operatively coupled to a user interface 210 to provide control of the control unit 202 to a user. The user interface 210 may be used for communication between the control unit 202 and other functional units in the robotic system 100. The user interface 210 may also be used for communication external to the robotic system 100. The user interface 210 may receive information from other functional units or from external sources, or may transmit information to other functional units or to external destinations. External sources and external destinations refer to sources and destinations external to the robotic system 100.
The user interface 210 may be implemented in different ways and may comprise different implementations depending on which functional units or external units are interacting with the user interface 210. For example, the user interface 210 may be implemented with pressure sensors, inertial sensors, micro-electro-mechanical systems (MEMS), optical circuitry, waveguides, wireless circuitry, wired circuitry, application program interfaces, or a combination thereof.
The storage unit 204 may store software instructions, master data, trace data, or a combination thereof. For illustrative purposes, the storage unit 204 is shown as a single element, but it should be understood that the storage unit 204 may be a distribution of storage elements. Also for illustrative purposes, the robotic system 100 is shown with the storage unit 204 as a single-tier storage system, but it should be understood that the robotic system 100 may have a different configuration of the storage unit 204. For example, the storage unit 204 may be formed of different storage technologies, forming a memory hierarchy that includes different levels of cache, main memory, rotating media, or offline storage.
The storage unit 204 may be a volatile memory, a non-volatile memory, an internal memory, an external memory, or a combination thereof. For example, the storage unit 204 may be a non-volatile storage device, such as non-volatile random access memory (NVRAM), flash memory, or a magnetic disk storage device, or a volatile storage device, such as static random access memory (SRAM). As another example, the storage unit 204 may be a non-transitory computer medium including non-volatile memory such as a hard disk drive, NVRAM, a solid-state drive (SSD), a compact disc (CD), a digital video disc (DVD), or a Universal Serial Bus (USB) flash memory device. The software may be stored on a non-transitory computer-readable medium for execution by the control unit 202.
The storage unit 204 can be operatively coupled to a user interface 210. The user interface 210 may be used for communication between the storage unit 204 and other functional units in the robot system 100. The user interface 210 may also be used for communication external to the robotic system 100. The user interface 210 may receive information from other functional units or from external sources, or may transmit information to other functional units or to external destinations. External sources and external destinations refer to sources and destinations external to the robotic system 100.
Similar to the discussion above, the user interface 210 may include different implementations depending on which functional units or external units are interacting with the storage unit 204. The user interface 210 may be implemented with technologies similar to those discussed above.
In some embodiments, the storage unit 204 is used to further store and provide access to processing results, predetermined data, thresholds, or combinations thereof. For example, the storage unit 204 may store master data that includes descriptions of one or more target objects 112 (e.g., boxes, box types, products, and/or combinations thereof). In one embodiment, the master data includes dimensions, predetermined shapes, templates for potential poses and/or computer-generated models for identifying different poses, color schemes, images, identifying information (e.g., barcodes, Quick Response (QR) codes, logos, etc.), expected locations, expected masses, and/or combinations thereof of one or more target objects 112 expected to be manipulated by the robotic system 100.
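A master-data record like the one described above might be modeled as a small structured type keyed by an object identifier. The field names and example values below are assumptions based on the description, not the patent's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ObjectMasterData:
    """Hypothetical master-data record for one expected object type."""
    object_id: str                 # e.g., barcode or QR-code payload
    dimensions_mm: tuple           # (length, width, height)
    expected_mass_kg: float
    color_scheme: str = "unspecified"
    pose_templates: list = field(default_factory=list)  # templates for potential poses


# Example registry of known object types (illustrative values).
MASTER_DATA = {
    "BOX-STD-01": ObjectMasterData("BOX-STD-01", (600, 400, 300), 8.5),
}


def lookup_object(object_id: str) -> ObjectMasterData:
    """Fetch the master-data record for an identified object."""
    return MASTER_DATA[object_id]
```

A registry like this lets the system compare detected dimensions or scanned identifiers against expected values when selecting a gripping strategy.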
In some embodiments, the master data includes handling-related information about one or more objects that may be encountered or handled by the robotic system 100. For example, the manipulation-related information for the object may include a centroid location on each of the objects, expected sensor measurements (e.g., force, torque, pressure, and/or contact measurements) corresponding to one or more actions, manipulations, or a combination thereof.
The communication unit 206 may enable external communication to and from the robot system 100. For example, the communication unit 206 may enable the robotic system 100 to communicate with other robotic systems or units, external devices (such as external computers, external databases, external machines, external peripheral devices), or combinations thereof, via a communication path 218 (such as a wired or wireless network).
The communication path 218 may span and represent a variety of networks and network topologies. For example, the communication path 218 may include wireless communication, wired communication, optical communication, ultrasonic communication, or a combination thereof. Examples of wireless communications that may be included in the communication path 218 are satellite communications, cellular communications, Bluetooth, Infrared Data Association (IrDA) standards, wireless local area networking (Wi-Fi), and Worldwide Interoperability for Microwave Access (WiMAX). Examples of wired communications that may be included in the communication path 218 are cable, Ethernet, digital subscriber line (DSL), fiber-optic lines, fiber to the home (FTTH), and plain old telephone service (POTS). Additionally, the communication path 218 may traverse multiple network topologies and distances. For example, the communication path 218 may include a direct connection, a personal area network (PAN), a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), or a combination thereof. The robotic system 100 may transmit information between the various units over the communication path 218. For example, information may be transmitted between the control unit 202, the storage unit 204, the communication unit 206, the I/O devices 208, the actuation devices 212, the transport motors 214, the sensor units 216, or a combination thereof.
The communication unit 206 may also function as a communication hub, enabling the robotic system 100 to act as part of the communication path 218 rather than being limited to an endpoint or terminal unit of the communication path 218. The communication unit 206 may include active and passive components, such as microelectronics or an antenna, for interacting with the communication path 218.
The communication unit 206 may include a communication interface 248. The communication interface 248 may be used for communication between the communication unit 206 and other functional units in the robotic system 100. Communication interface 248 may receive information from other functional units or from external sources, or may transmit information to other functional units or to external destinations. External sources and external destinations refer to sources and destinations external to the robotic system 100.
Communication interface 248 may include different implementations depending on which functional units are interacting with communication unit 206. Communication interface 248 may be implemented with techniques similar to the implementation of control interface 240.
The I/O devices 208 may include one or more input sub-devices and/or one or more output sub-devices. Examples of input devices of I/O device 208 may include a keypad, touchpad, soft keys, keyboard, microphone, sensors for receiving remote signals, camera for receiving motion commands, or any combination thereof to provide data and communication input. Examples of output devices may include a display interface. The display interface may be any graphical user interface, such as a display, a projector, a video screen, and/or any combination thereof.
The control unit 202 may operate the I/O devices 208 to present or receive information generated by the robotic system 100. The control unit 202 may also execute software and/or instructions for other functions of the robotic system 100, and may further execute software and/or instructions for interacting with the communication path 218 via the communication unit 206.
The robotic system 100 may include physical or structural members, such as robotic manipulator arms, connected at joints for movement, such as rotational displacement, translational displacement, or a combination thereof. The structural members and joints may form a kinematic chain configured to manipulate an end effector (such as a clamping element) to perform one or more tasks, such as clamping, spinning, or welding, depending on the use or operation of the robotic system 100. The robotic system 100 may include an actuation device 212, such as a motor, actuator, wire, artificial muscle, electroactive polymer, or a combination thereof, configured to drive, manipulate, displace, reorient, or a combination thereof, a structural member around or at a corresponding joint. In some embodiments, the robotic system 100 may include a transport motor 214 configured to transport the corresponding unit from one place to another.
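As an illustrative sketch only (not part of the described embodiments), the kinematic chain formed by the structural members and joints can be modeled as each joint contributing a rotational displacement and each structural member a translation; composing them yields the end-effector position. All function and variable names below are hypothetical.

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Compute the end-effector (x, y) position of a planar kinematic
    chain: each joint adds a rotational displacement, and each structural
    member translates the chain along its rotated direction."""
    x = y = theta = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle                   # accumulate joint rotation
        x += length * math.cos(theta)    # translate along the rotated link
        y += length * math.sin(theta)
    return x, y

# Two 1.0 m structural members with both joints at 0 rad:
# the arm is fully extended along the x-axis.
print(forward_kinematics([1.0, 1.0], [0.0, 0.0]))  # (2.0, 0.0)
```

A real manipulator would extend this to three dimensions and drive the joints via the actuation devices 212, but the composition of displacements follows the same pattern.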
The robotic system 100 may include a sensor unit 216 configured to obtain information for performing tasks and operations, such as information for manipulating structural members or for transporting the robotic unit. The sensor unit 216 may include a device configured to detect or measure one or more physical properties of the robotic system 100, such as a state, condition, location of one or more structural members or joints, information about an object or surrounding environment, or a combination thereof. As an example, the sensor unit 216 may include an imaging device, a system sensor, a contact sensor, and/or any combination thereof.
In some embodiments, the sensor unit 216 includes one or more imaging devices 222. The imaging devices 222 are devices configured to detect and image the surrounding environment. For example, the imaging devices 222 may include 2-dimensional cameras, 3-dimensional cameras (both of which may include a combination of visual and infrared capabilities), lidar, radar, other ranging devices, and other imaging devices. The imaging devices 222 may generate a representation (such as a digital image or point cloud) of the detected environment for implementing machine/computer vision for automated inspection, robotic guidance, or other robotic applications. As described in further detail below, the robotic system 100 may process the digital image, the point cloud, or a combination thereof via the control unit 202 to identify the target object 112 of fig. 1, the pose of the target object 112, or a combination thereof. To manipulate the target object 112, the robotic system 100 may capture and analyze an image of a designated area (such as a pick-up location for objects inside a truck, inside a container, or on a conveyor belt) to identify the target object 112 and its object source 114 of fig. 1. Similarly, the robotic system 100 may capture and analyze an image of another designated area (such as a drop location for placing objects on a conveyor belt, a location for placing objects inside a container, or a location on a pallet for stacking purposes) to identify the task location 116 of fig. 1.
In some embodiments, the sensor unit 216 may include a system sensor 224. The system sensors 224 may monitor the robotic cells within the robotic system 100. For example, system sensors 224 may include units or devices for detecting and monitoring the position of structural members (such as robotic arms and end effectors, corresponding joints of robotic units, or combinations thereof). As another example, the robotic system 100 may use the system sensors 224 to track the position, orientation, or a combination thereof of the structural members and joints during task performance. Examples of system sensors 224 may include accelerometers, gyroscopes, or position encoders.
In some embodiments, the sensor unit 216 may include a contact sensor 226, such as a pressure sensor, force sensor, strain gauge, piezoresistive/piezoelectric sensor, capacitive sensor, elastic resistance sensor, torque sensor, linear force sensor, other tactile sensor, and/or any other suitable sensor configured to measure a characteristic associated with direct contact between multiple physical structures or surfaces. For example, the contact sensor 226 may measure a characteristic corresponding to the end effector's grip on the target object 112 or measure the weight of the target object 112. Accordingly, the contact sensor 226 may output a contact metric representing a quantitative measure, such as a measured force or torque, corresponding to a degree of contact or attachment between the clamping element and the target object 112. For example, the contact metric may include one or more force or torque readings associated with the force applied by the end effector to the target object 112.
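For illustration, a controller might compare such contact metrics against a window of acceptable forces before proceeding with a lift. The function name and threshold values below are hypothetical, not values from the disclosure.

```python
def grip_is_secure(force_readings, min_force, max_force):
    """Return True when every measured contact force falls inside the
    window indicating a firm but non-crushing grip on the target object."""
    return all(min_force <= f <= max_force for f in force_readings)

# Four contact-sensor readings in newtons (illustrative values only).
print(grip_is_secure([12.1, 11.8, 12.5, 12.0], min_force=8.0, max_force=20.0))  # True
print(grip_is_secure([12.1, 3.2, 12.5, 12.0], min_force=8.0, max_force=20.0))   # False: one pad slipping
```

A controller could re-grip or abort the transfer task when the check fails, rather than risking a drop mid-transport.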
Adaptive robotic object clamping system
Fig. 3 is an illustration of an object holding system 300 (e.g., a station for a multi-function unit) in accordance with some embodiments of the present technology. In the illustrated embodiment, object gripper system 300 includes a robotic arm 302 and an object gripper assembly 304 (also sometimes referred to herein as a "gripper assembly," "object gripper," "gripper," and/or "gripper head") coupled to robotic arm 302.
The object holding system 300 may be configured to pick up, hold, transport, release, load, and/or unload various types or classes of objects. For example, in the illustrated embodiment, the robotic arm 302 is positioned at the end of the conveyor belt 362, and the object gripper assembly 304 may be configured to grip at least different classes of objects that are distinguished by their size (e.g., length, width, height, etc.), weight, gripping location, surface material, surface texture, rigidity (or lack thereof), and so forth. In the illustrated embodiment, the object gripper assembly 304 may be configured to grip at least three categories of objects: (1) various boxes 364 (e.g., shipping boxes, shipping units, packaging units, cartons, consumer goods, food items, etc.); (2) slip sheets 366; and (3) trays 368 for packing shipping units 369 (e.g., palletized containers for large-scale distribution). By way of example only, the first category (i.e., the boxes 364) generally has a relatively smaller length and width as compared to the second category (i.e., the slip sheets 366) and the third category (i.e., the trays 368); surfaces that can be engaged by vacuum forces; and/or a relatively rigid exterior. The second category generally has a relatively larger length and width than the first category; a surface engageable by suction; and/or a material (e.g., a flexible material such as paper or cardboard) that may require a wide grip to remain rigid during transport. The third category generally has a relatively larger length and width than the first category; available clamping positions; and/or a rigid material (e.g., wood). Because each of the categories has different characteristics, these categories may require adjustment of the object gripper assembly 304 between tasks of picking, transporting, and/or placing objects in the various categories.
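The category-dependent adjustment could be represented, purely as a sketch, by a lookup from object class to gripping mode. The class names and mode strings below paraphrase the three categories and are otherwise hypothetical.

```python
# Hypothetical mapping paraphrasing the text: boxes have rigid,
# vacuum-engageable surfaces; slip sheets are flexible sheets needing a
# wide suction grip; trays are rigid wood with mechanical clamp positions.
GRIP_MODE_BY_CATEGORY = {
    "box": "vacuum",          # category 1: rigid exterior, vacuum-engageable surface
    "slip_sheet": "suction",  # category 2: flexible sheet, wide suction grip
    "tray": "mechanical",     # category 3: rigid wood, available clamping positions
}

def select_grip_mode(category):
    """Pick the gripping mode the assembly should configure for a category."""
    try:
        return GRIP_MODE_BY_CATEGORY[category]
    except KeyError:
        raise ValueError(f"no grip mode defined for category {category!r}")

print(select_grip_mode("slip_sheet"))  # suction
```

In practice the controller would also fold in per-object measurements (width, weight, clearance) before committing to a configuration, as discussed with the machine vision components below.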
During packaging operations, the robotic arm 302 may use the multi-function object gripping assembly 304 to sequentially accomplish a variety of tasks (e.g., different pick, transfer, and place tasks). The robotic arm 302 may combine different types of tasks to achieve a complete operation (e.g., packaging the shipping unit with the appropriate objects) at a single station. In one particular non-limiting example, the robotic arm 302 may be used to effect shipping packaging operations (picking up multiple objects, such as various packages, pallets, slipsheets, etc.) at a single station without replacing any parts or devices and/or without direct operator action.
For example, during a packaging operation, the object gripper assembly 304 may be reconfigured between various modes suitable for gripping a box 364, a slip sheet 366, a tray 368, and/or any other suitable object (sometimes collectively referred to herein as a target object); engaging with a target object; transporting the target object to one of the transport units 369 in conjunction with the robotic arm 302; and disengaging from the target object to pack the transport unit 369. For example, as further shown in fig. 3, the object holding system 300 may stack one of the trays 368 at the base of the transport unit 369; stack one of the slip sheets 366 on the tray 368; stack a layer of boxes 364 (e.g., three boxes, five boxes, ten boxes, or any other suitable number) on the slip sheet 366; and then repeat any of the previous steps/tasks to complete the transport unit 369 and/or to start a new transport unit. In the context of a packaging operation, fig. 3 may show the boxes 364, slip sheets 366, and trays 368 provided at the start of each task. The transport unit 369 may correspond to a common target/destination location for the different tasks.
Additionally or alternatively, the object holding system 300 can be used to unpack the transport unit 369 (or any other group of objects). During the unpacking operation, the object gripper assembly 304 may be reconfigured between various modes suitable for gripping a variety of target objects; engage with one of the target objects; transport the gripped target object, via the robotic arm 302, from the transport unit 369 to a destination (e.g., the conveyor belt 362 and/or a holding location); and disengage from the gripped target object. For example, the object holding system 300 can move a layer of boxes 364 in the transport unit 369 onto the conveyor belt 362 (or another suitable destination); move the slip sheet 366 corresponding to that layer to a waiting stack (e.g., to be reused and/or disposed of); repeat the previous steps for each layer of boxes 364 in the transport unit 369; and then move the tray 368 to a waiting stack (e.g., for reuse and/or disposal). The unpacking operation may then repeat these steps/tasks to unload another transport unit 369. Thus, in the context of an unpacking operation, fig. 3 may show the transport unit 369 as a common starting location for the various tasks. The locations of the boxes 364, slip sheets 366, and trays 368 may represent the target/destination locations for each task.
In some embodiments, the object holding system 300 includes machine vision components (e.g., the imaging device 222 of fig. 2, the control unit 202 of fig. 2, or a combination thereof). The machine vision components may image the target object (e.g., any of the boxes 364, slip sheets 366, and/or trays 368), identify the target object, identify an orientation of the target object, identify deviations of the target object from expected values (e.g., deviations in the length and/or width of the target object), and/or identify any objects that may prevent the object holding system 300 from gripping the target object. The object holding system 300 and/or any controller operatively coupled thereto (e.g., the controller 109 of fig. 1) may then adjust the configuration of the object gripper assembly 304 according to the target object and control the movement of the robotic arm to complete the operation (e.g., the packaging and/or unpacking operations discussed above).
In each of the operations discussed above, the object gripper assembly 304 is adjusted between various modes specific to the object being gripped during any given task. Such adjustments, discussed in more detail below, may allow the object gripper assembly 304 to more securely grip different target objects, accommodate slight variations in the target objects being gripped, and/or become compact when clearance for the operational task is limited (e.g., when packing and/or unpacking shipping units in a confined space).
For example, fig. 4 is a schematic side view of an object clamping assembly 400 (e.g., a multi-function end effector) in accordance with some embodiments of the present technology. In the illustrated embodiment, the object holding assembly 400 ("assembly 400") includes a body 402 (also sometimes referred to herein as a "housing") having an upper surface 404a and a lower surface 404b. The assembly 400 also includes an external connector 406 (also referred to herein as an "interface component") coupled to the upper surface 404a, an imaging component 410 coupled to the body 402, and a vacuum-operated clamping component 420 coupled to the lower surface 404b.
The external connector 406 can be coupled to another component of the object gripping system (such as the robotic arm 302 of fig. 3) to allow control of the position and/or orientation of the assembly 400. In some embodiments, the external connector 406 allows the body 402 to rotate about a vertical axis through the external connector 406. As discussed above, the imaging component 410 may obtain images of one or more target objects (and any obstructions thereof) that are used to identify the target objects and plan tasks to engage the target objects (e.g., dynamic calculations based on mixed Stock Keeping Unit (SKU) object detection, area and/or resource detection of trays and/or slip sheets, packaging calculations, etc.).
The vacuum-operated clamping component 420 is positioned to engage a surface of a first category of target objects (e.g., the boxes 364 of fig. 3) beneath the assembly 400 to lift the target objects. In some embodiments, the vacuum-operated clamping component 420 comprises a foam vacuum gripper that can engage and lift a target object with any portion of the lower surface 422 of the vacuum-operated clamping component 420 (e.g., lift an object engaged by 10%, 20%, 25%, 50%, and/or any other suitable portion of the lower surface 422). In various embodiments, the vacuum-operated clamping component 420 may include one or more suction cups, vacuum pads, vacuum openings, or the like configured to clamp or attach the target object to the assembly 400.
As further shown in fig. 4, the assembly 400 may also include a variable width clamping member 430 coupled to the body 402 (e.g., movably disposed at least partially within the body 402). The variable width clamping member 430 ("clamping member 430") includes a linear extension mechanism 431 defined by two arms 432 on opposite longitudinal sides of the body 402. Each of the arms 432 has a proximal portion (or "proximal end region") coupled to the body 402 and a distal portion (or "end region") protruding from the body 402 on one of the opposite longitudinal sides. The clamping member 430 also includes one or more rotational members 434 (one shown, sometimes referred to herein as a "rotational mechanism") coupled to the end region of each of the arms 432 and one or more mechanical clamping members 436 (one shown) coupled to each of the rotational members 434. Further, in the embodiment shown, the clamping member 430 includes one or more optional suction members 438 coupled to the mechanical clamping members 436.
In various embodiments, rotating component 434 can include a mechanical drive wheel, a pneumatic drive wheel (e.g., a cylinder drive wheel), a mechanical drive shaft and/or crankshaft, a robotically controlled rotating component, and/or the like. In various embodiments, the mechanical clamping component 436 can include various clamps, vices, jaws, servo-electric clamps, pneumatic clamps, platform-based lifts, and the like.
In the configuration of the assembly 400 shown in fig. 4 (e.g., the fully folded state), the arms 432 of the linear extension mechanism 431 are fully retracted into the body 402 and the rotational members 434 are in a raised position. In the raised position, the rotational members 434 direct each of the mechanical clamping members 436 at least partially upward from the longitudinal plane of the lower surface 404b. In the embodiment shown, the rotational members 434 guide the mechanical clamping members 436 to be fully vertical relative to the longitudinal plane. This configuration may minimize the longitudinal footprint of the assembly 400 in the fully folded state, allowing the assembly 400 to enter areas with relatively little clearance.
During some tasks of an operation, as discussed in more detail below, the rotational members 434 may be actuated/dynamically configured to a lowered position, thereby guiding the mechanical clamping members 436 and suction members 438 below the lower surface 404b. As a result, the mechanical clamping members 436 may engage and/or disengage a different category of target objects (e.g., a third category, such as one including the tray 368 of fig. 3). Additionally or alternatively, the suction members 438 may engage and/or disengage a further category of target objects (e.g., a second category, such as one including the slip sheet 366 of fig. 3). In addition, each of the arms 432 can be actuated/dynamically configured (e.g., extended, moved, etc. relative to the body 402) to extend and/or retract a distance D1 between the end regions. Extension and/or retraction allows the clamping member 430 to be adjusted according to the relevant size of the target object (e.g., to account for differences within the second category and/or the third category). By way of example only, in various embodiments, the distance D1 between the end regions of the arms 432 may be adjusted between about 540 millimeters (mm) and about 1000 mm, or between about 740 mm and about 1300 mm. In some embodiments, the arms 432 are extended and retracted in tandem (e.g., symmetrically and/or simultaneously). In some embodiments, each of the arms 432 is independently extendable and retractable, allowing additional customization to the target object.
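A controller might compute the end-region distance D1 by padding the measured object width and limiting the result to the mechanism's travel range. This sketch uses the example 540-1000 mm range from the text; the function name and margin value are hypothetical.

```python
ARM_SPAN_MIN_MM = 540.0   # example travel range from the text
ARM_SPAN_MAX_MM = 1000.0

def target_arm_span(object_width_mm, margin_mm=20.0):
    """Choose an end-region distance D1 slightly wider than the target
    object, limited to the extension mechanism's travel range."""
    desired = object_width_mm + 2 * margin_mm
    return min(max(desired, ARM_SPAN_MIN_MM), ARM_SPAN_MAX_MM)

print(target_arm_span(800.0))   # 840.0
print(target_arm_span(400.0))   # 540.0 (cannot retract below minimum)
print(target_arm_span(1200.0))  # 1000.0 (capped at maximum travel)
```

An embodiment with independently extendable arms would run such a computation per arm rather than per span.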
Fig. 5A and 5B are isometric views of an object gripping assembly 500 (e.g., a multifunctional end effector) in accordance with some embodiments of the present technology. As shown in fig. 5A and 5B, an object holding assembly 500 ("assembly 500") is generally similar to assembly 400 of fig. 4. For example, the assembly 500 includes a body 502, as well as an external connector 506, an imaging component 510, and a vacuum-operated clamping component 520 coupled to the body 502. In addition, the assembly 500 includes a variable width clamping member 530 coupled to the body 502.
The variable width clamping member 530 ("clamping member 530") includes a linear mechanism 531 having arms 532 on opposite sides of the body 502 and a rotation member 534 connected to an end region of each of the arms 532. The clamping members 530 further include support plates 535 coupled to each of the rotation members 534, one or more mechanical clamping members 536 (two shown for each of the support plates 535) coupled to each of the support plates 535, optional suction members 538 coupled to each of the mechanical clamping members 536 (e.g., four total), and one or more optional pressure cylinders 540 (two shown for each of the support plates 535) coupled to each of the support plates 535 adjacent to the mechanical clamping members 536.
In the embodiment shown, the mechanical clamping members 536 form the stationary portion of the clamp. To clamp a target object (also sometimes referred to herein as picking up or lifting the target object), the mechanical clamping members 536 can be inserted below a surface of the target object and the clamping member 530 can then be lifted. In turn, the pressure cylinders 540, acting as the movable portion of the clamp, may help stabilize the target object during the clamping operation. For example, after the mechanical clamping members 536 begin to lift the target object, the pressure cylinders 540 may extend to hold the target object against the mechanical clamping members 536. Additional details regarding examples of stabilization are discussed below with reference to figs. 10A-10E.
The support plates 535 allow the rotation members 534 on each of the end regions to move each of the mechanical clamping members 536, suction members 538, and pressure cylinders 540 between a raised position (fig. 5A) and a lowered position (fig. 5B). Additionally or alternatively, the support plates 535 may help position the mechanical clamping members 536, suction members 538, and pressure cylinders 540 in a longitudinal plane. For example, as best shown in fig. 5B, the support plates 535 may allow the mechanical clamping members 536 and/or suction members 538 on each longitudinal end of the clamping member 530 to be spaced apart from one another. Additionally or alternatively, the support plates 535 allow optional components to be included in the clamping member 530. For example, in the embodiment shown in figs. 5A and 5B, the support plates 535 provide space for the pressure cylinders 540 to fit within the clamping member 530.
Fig. 6A-6C illustrate the assembly 500 of fig. 5A and 5B in various states, in accordance with some embodiments of the present technique. More specifically, fig. 6A shows assembly 500 in a fully collapsed state, fig. 6B shows assembly 500 in one of many possible extended states, and fig. 6C shows assembly 500 in a clamped state.
In the fully folded state shown in fig. 6A (also sometimes referred to herein as a "collapsed state," "vacuum gripping configuration," and/or "folded configuration"), the arms 532 of the linear mechanism 531 are fully retracted and/or collapsed into the body 502, and the rotary component 534 is in a raised position to guide the mechanical gripping components 536 and suction components 538 in a vertical direction. The fully folded state results in a relatively small footprint (or minimum footprint) of the assembly 500. The relatively small footprint may allow the assembly 500 to be moved in, out, and/or through spaces with limited clearance. In addition, as further shown in fig. 6A, the fully collapsed state may allow the vacuum operated clamping member 520 to define the lowermost surface of the assembly 500. Accordingly, the outer surface 522 of the vacuum-operated gripping member 520 may engage various target objects for various pick-up tasks of operation.
To transition from the fully collapsed state to an extended state, as shown in fig. 6B, the assembly 500 may actuate the arms 532 of the linear mechanism 531 along the longitudinal path A. In various embodiments, the actuation may be driven by various mechanically driven actuators and/or gears, pneumatic actuators (e.g., pneumatic cylinders), robotically controlled components, and the like. The assembly 500 can have any suitable number of extended states. For example, in some embodiments, the assembly 500 has only a single extended state corresponding to a fixed width of the target object. In various other examples, the assembly 500 may have two, three, five, ten, twenty, or any other suitable number of extended states, allowing the assembly 500 to be deployed to engage with a variety of target objects and/or to accommodate differences in the width of the target objects. In some embodiments, the arms 532 of the linear mechanism 531 may be continuously movable, allowing the assembly 500 to be in any extended state between the fully collapsed state and a maximum extended state. In various embodiments, the extended state may be based on one or more preset conditions (e.g., a set width for a particular target object), measurements performed by the imaging component 510 (fig. 5A), and/or various controller inputs (e.g., allowing input from a human and/or robotic operator).
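For an embodiment with discrete extended states, selecting a state could amount to picking the smallest preset span that still clears the measured width. The preset values below are hypothetical.

```python
import bisect

# Hypothetical end-region spans (mm) for the discrete extended states.
PRESET_SPANS_MM = [540, 650, 760, 880, 1000]

def select_extended_state(measured_width_mm):
    """Pick the smallest preset span that clears the measured width."""
    i = bisect.bisect_left(PRESET_SPANS_MM, measured_width_mm)
    if i == len(PRESET_SPANS_MM):
        raise ValueError("target object wider than the maximum extended state")
    return PRESET_SPANS_MM[i]

print(select_extended_state(700))  # 760
```

A continuously movable embodiment would skip the preset table and command the measured width (plus margin) directly.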
To transition to the clamped state, as shown in fig. 6C, the assembly 500 may actuate the rotation components 534 along the rotation path B. This moves the mechanical gripping members 536 and suction members 538 from the raised position to the lowered position. Accordingly, the mechanical gripping members 536 and suction members 538 are positioned lower than the vacuum-operated gripping member 520, thereby allowing the mechanical gripping members 536 and/or suction members 538 to engage the target object. For example, as the mechanical gripping members 536 rotate to the lowered position, the distance between opposing mechanical gripping members 536 contracts toward a second distance D2. Thus, a target object having a width substantially equal to the second distance D2 may be gripped between the mechanical gripping members 536 and/or may be lifted by a platform defined by the distal ends of the mechanical gripping members 536. In another example, as the suction members 538 rotate to the lowered position, the suction members 538 are directed downward to engage an upper surface of the target object.
In some embodiments, the extended state (fig. 6B) serves as an intermediate state between the fully folded state and the clamped state. That is, in transitioning from the fully collapsed state to the clamped state, the assembly 500 first extends the arms 532 of the linear mechanism 531 to transition to the extended state shown in fig. 6B. Additionally or alternatively, the assembly 500 may transition directly from the fully folded state (fig. 6A) to the clamped state. Thus, the assembly may be adjusted to grip a variety of target objects having different sizes. In various embodiments, the configuration of the variable width gripping member 530 in the clamped state may be based on object identification from the imaging member 510 (fig. 5A), one or more measurements from the imaging member 510, one or more presets for the identified target object, various controller inputs (e.g., allowing input from a human and/or robotic operator), and the like.
Further, while primarily discussed herein as transitioning to the clamped state after deployment to a desired width (e.g., transitioning from either the fully collapsed state and/or the extended state), the assembly 500 may transition to the clamped state prior to and/or concurrently with deployment. For example only, the assembly 500 may dynamically configure the rotational component 534 along the rotational path B (fig. 6C), while also dynamically configuring the arm 532 of the linear mechanism 531 along the longitudinal path a (fig. 6B). In another example, the assembly 500 may actuate the rotation component 534 and then the arm 532 of the linear mechanism 531. Indeed, in some embodiments (e.g., when performing successive tasks using the mechanical clamping component 536 and/or the suction component 538), the assembly 500 actuates the rotating component 534 once and then actuates the arm 532 of the linear mechanism 531 multiple times to adjust the width according to different target objects. Additionally or alternatively, the assembly 500 may actuate the arms 532 of the linear mechanism 531 between extended states to facilitate clamping the target object (e.g., reduce the width between opposing mechanical gripping members 536 to clamp the target object).
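The actuation orderings described here (e.g., rotating the members down once, then re-extending the arms for each successive target) can be sketched as a small plan builder; all names are hypothetical.

```python
def actuation_plan(target_widths_mm, rotate_once=True):
    """Build an actuation sequence: optionally rotate the gripping
    members to the lowered position once, then re-extend the arms for
    each successive target width."""
    steps = []
    if rotate_once:
        steps.append(("rotate", "lowered"))   # rotation path B
    for width in target_widths_mm:
        steps.append(("extend", width))       # longitudinal path A
    return steps

print(actuation_plan([760, 880]))
# [('rotate', 'lowered'), ('extend', 760), ('extend', 880)]
```

Concurrent embodiments would instead issue the rotate and extend commands to their actuators simultaneously rather than as an ordered list.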
Fig. 7 is a flow diagram of a process 700 for operating a robotic system having an object gripping assembly, in accordance with some embodiments of the present technology. The process 700 may be performed by a controller on the end effector itself and/or by an external controller (e.g., the controller 109 of fig. 1 with the control unit 202 of fig. 2).
The process 700 begins at block 702 by detecting a target object. The detection may be based on image data from an image sensor and/or imaging system (e.g., imaging component 410 of fig. 4) on the object holding assembly. In some embodiments, the detection is based at least in part on a machine or computer vision algorithm used to identify patterns in the image data to detect one or more known target objects and/or reject imaged objects as non-target objects. In some embodiments, detection is based at least in part on artificial intelligence and/or machine learning algorithms trained to identify objects (targets and non-targets) in the image data. In various embodiments, detection may be based, at least in part, on input from various sensors (e.g., weight sensors, external imaging sensors, etc.) and/or input from a human and/or robotic operator.
In addition to detecting the target object at block 702, the process 700 may also detect various aspects of the target object. For example, the process 700 may detect a size of the target object, an orientation of the target object, a gap around the target object for the task during the clamping operation, and so on. These detections may allow process 700 to account for differences from expected values, for example, in identifying a target object.
Because the image sensor and/or imaging system is coupled to the object holding assembly, the image sensor and/or imaging system will typically image the target object (and/or any surrounding environment) at an angle relative to the vertical axis, rather than from directly above. Thus, a machine or computer vision algorithm may include functionality to account for the angled image (e.g., by applying distortion or other corrective filtering to the image data). Additionally or alternatively, the machine or computer vision algorithms may include functionality to identify when the target object (and/or any surrounding environment) is not being imaged head-on and to take corrective action. In some embodiments, the corrective action includes applying one or more distortions and/or image corrections to the image data to measure a single side of the target object. In some embodiments, the corrective action includes generating instructions for collecting additional image data to properly image the target object. By identifying and accounting for angles in the image data, machine or computer vision algorithms may help improve the accuracy of subsequent stages of the measuring and/or clamping operation.
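As a first-order illustration of such a correction (a real system would apply a full perspective and lens-distortion model), a horizontal edge imaged at a tilt off the vertical axis appears foreshortened by the cosine of the tilt angle; the function name is hypothetical.

```python
import math

def correct_foreshortening(apparent_length, camera_tilt_deg):
    """Recover the true length of a horizontal edge that appears
    foreshortened when imaged at an angle off the vertical axis."""
    return apparent_length / math.cos(math.radians(camera_tilt_deg))

# An edge measuring 100 units at a 60-degree tilt is actually twice as long.
print(round(correct_foreshortening(100.0, 60.0), 1))  # 200.0
```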
Further, because the image sensor and/or imaging system is coupled to the object holding assembly, the position of the image sensor and/or imaging system can be dynamically controlled throughout operation. For example, when the stack of pallets and/or trays shrinks (or increases) during the packaging operation, the object holding assembly can be lowered (or raised) to image the pallets and/or trays at a consistent distance. That is, dynamic control of the position of the image sensor and/or imaging system may allow the image data to have a consistent distance between the image sensor and/or imaging system and the target object. In turn, the consistent distance may help to improve the accuracy of subsequent stages of the measuring and/or clamping operation.
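Keeping a consistent camera-to-target distance as a stack shrinks or grows reduces, in this sketch, to tracking the height of the stack's top surface; the function name and 500 mm standoff are hypothetical values.

```python
def gripper_height(stack_top_mm, imaging_distance_mm=500.0):
    """Position the assembly so the camera stays a fixed standoff
    distance above the top of the (shrinking or growing) stack."""
    return stack_top_mm + imaging_distance_mm

# As a pallet stack shrinks from 900 mm to 300 mm during unpacking, the
# assembly lowers to keep the same 500 mm camera-to-target distance.
print(gripper_height(900.0))  # 1400.0
print(gripper_height(300.0))  # 800.0
```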
At block 704, the process 700 includes: planning a pick-up task for the target object. Planning the pick-up task may include: determining the state (e.g., the fully collapsed state, a combination of the fully collapsed state and the clamped state, and/or a combination of an extended state and the clamped state) in which the object holding assembly should be used to pick up the target object. Planning the pick-up task may further include: identifying an orientation of the target object during the pick-up task and/or identifying a path of travel for the object holding assembly during the pick-up task. The orientation may be based on the size and orientation of the target object and/or the available surfaces. The travel path may be based on any identified environmental constraints (e.g., objects that limit the identified gaps around the target object).
At block 706, the process 700 includes: configuring the object clamping assembly into the state determined at block 704. The configuration may include any of the actions discussed above with respect to figs. 6A-6C to prepare to clamp the target object with the vacuum-operated clamping means, the mechanical clamping means, and/or the suction means. In some embodiments, configuring the object holding assembly comprises: a plurality of steps of dynamically configuring the object holding assembly (e.g., extending the arms of the linear mechanism and then actuating the rotating members from the raised position to the lowered position). In some embodiments, configuring the object holding assembly includes a single step with multiple actions (e.g., extending the arms of the linear mechanism while simultaneously rotating the rotating members from the raised position to the lowered position).
At block 708, the process 700 includes picking up the target object. In some embodiments, picking up the target object includes engaging a surface of the target object with the vacuum-operated gripping component and applying a vacuum force to the engaged surface. In some embodiments, picking up the target object includes gripping the target object with the mechanical gripping component. In some embodiments, picking up the target object includes positioning the mechanical gripping component at least partially beneath a gripping surface of the target object. In some embodiments, picking up the target object includes engaging a surface of the target object with the suction component and applying a suction force to the engaged surface.
At block 710, the process 700 includes transporting the target object from a first location (e.g., a pick-up location) to a second location (e.g., a drop-off location). The transport may follow a predetermined travel path to avoid collisions with any objects in the surrounding environment. In one particular, non-limiting example, the first location may be a conveyor belt that delivers boxes of consumer goods to a loading station having the robotic system, while the second location is a large shipping unit (e.g., a stack of trays, a larger box, a shipping container, etc.). In this example, the process 700 can automatically pack multiple objects for transport without switching between various object gripping systems, thereby speeding up the packing process.
At block 712, the process 700 includes placing the target object at the second location. In some embodiments, the placement process at block 712 includes placing the object precisely at the second location (e.g., a packing position in a large shipping unit). In some embodiments, the placement process at block 712 includes avoiding any environmental objects (e.g., previously placed target objects) at the second location.
At optional block 714, the process 700 includes resetting the object holding assembly. Resetting the object holding assembly may include collapsing the object holding assembly from any extended and/or clamped state back to the fully folded state. The collapsed configuration may allow the object holding assembly to more easily avoid other environmental objects (e.g., previously placed objects) at the second location and/or while picking up a new target object. Additionally or alternatively, resetting the object holding assembly may include returning the object holding assembly to a starting position to detect the next target object. However, in some embodiments, the object holding assembly is not reconfigured (or not completely reconfigured) between target objects (e.g., does not transition out of the clamped state). Skipping the reset may allow the object holding assembly to execute a series of pick-up tasks more quickly to complete the operation, especially for generally similar target objects.
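A minimal sketch of the reset decision at optional block 714, assuming reconfiguration is needed only when consecutive target objects require different states (an assumption for illustration, not the disclosed policy):

```python
from typing import Optional

# Illustrative reset policy for block 714: fold the assembly between
# dissimilar objects, keep the current state between similar ones.

def should_reset(prev_category: Optional[str], next_category: Optional[str]) -> bool:
    """Return True when the assembly should collapse to the fully folded state."""
    if next_category is None:           # operation finished: always fold
        return True
    return prev_category != next_category

assert should_reset("tray", "box") is True     # reconfigure between categories
assert should_reset("tray", "tray") is False   # keep the clamped state
assert should_reset("tray", None) is True      # end of the operation
```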
Figs. 8A-8C are partially schematic diagrams of an object holding assembly 800 at various stages of a process for gripping a target object 802 in accordance with some embodiments of the present technology. Figs. 8A-8C omit the robotic arm or other mechanism for moving and positioning the object holding assembly 800 to avoid obscuring aspects of the present technology.
Fig. 8A illustrates an object holding assembly 800 (and/or a controller communicatively coupled thereto) detecting a target object 802 within a field of view 812 of an imaging component 810 (e.g., an example of the imaging device 222 of fig. 2). As discussed above with reference to fig. 7, the detection may be based on various machine or computer vision algorithms, artificial intelligence, and/or machine learning algorithms, among others. Further, the detection may identify a type of the target object, an orientation of the target object, various dimensions of the target object, and so on. For example, in the illustrated embodiment, the detection may identify the target object 802 as a box (e.g., containing consumer goods, various food items, etc.). As discussed further above, the object holding assembly 800 (and/or a controller communicatively coupled thereto) may plan various tasks of the gripping operation for picking up, transporting, and/or placing the target object 802.
As shown in figs. 8B and 8C, the gripping operation may perform one or more tasks to engage a surface (e.g., an upper surface) of the target object 802 with a lower surface 822 of the vacuum-operated component 820 of the object holding assembly 800. Once engaged, the vacuum-operated component 820 may be actuated to apply a vacuum force to the target object 802, and the object holding assembly 800 may then be lifted along the movement path C. As a result of the motion and the vacuum force, the object holding assembly 800 lifts the target object 802.
In the illustrated embodiment, the gripping operation is performed with the object holding assembly 800 in the fully folded state. This configuration allows the lower surface 822 of the vacuum-operated component 820 to define the lowermost surface of the object holding assembly 800. Thus, the lower surface 822 is able to engage the target object 802 without any risk of interference from other components of the object holding assembly 800. However, in some embodiments, the object holding assembly 800 may be in an extended state (e.g., fig. 6B) and/or a clamped state (e.g., fig. 6C), provided that the lower surface 822 remains able to engage the target object 802.
Figs. 9A-9C are partially schematic diagrams of an object holding assembly 900 at various stages of a process for gripping a target object 902 in accordance with some embodiments of the present technology. Figs. 9A-9C omit the robotic arm or other mechanism for moving and positioning the object holding assembly 900 to avoid obscuring aspects of the present technology.
Similar to the illustration in fig. 8A, fig. 9A shows an object holding assembly 900 (and/or a controller communicatively coupled thereto) detecting a target object 902 within a field of view 912 of an imaging component 910. As discussed above, the detection may be based on various machine or computer vision algorithms, artificial intelligence, and/or machine learning algorithms, among others. Further, the detection may identify a type of the target object, an orientation of the target object, various dimensions of the target object, and so on. For example, in the illustrated embodiment, the detection may identify the target object 902 as a stack of one or more slip sheets, which may be placed between layers of a large shipping unit to reduce the risk of damage between the objects in adjacent layers. As discussed further above, the object holding assembly 900 (and/or a controller communicatively coupled thereto) may plan various tasks of the gripping operation for picking up, transporting, and/or placing the target object 902.
Figs. 9B and 9C illustrate the object holding assembly 900 transitioning from the fully folded state (fig. 9B) to a clamped state (fig. 9C) suitable for the target object 902. In the illustrated embodiment, the transition includes actuating the rotational component 934 from the raised position to the lowered position along the rotational path B. As shown in fig. 9C, in the lowered position, the rotational component 934 may direct one or more suction components 938 (two shown) downward to engage the target object 902. The suction components 938 may then apply a suction force to the target object 902, allowing the object holding assembly 900 to lift the target object 902 along the movement path D.
In some embodiments, the transition from the fully folded state (fig. 9B) to the clamped state (fig. 9C) may include unfolding the object holding assembly 900 to increase the distance between the suction components 938. The deployment may allow the object holding assembly 900 to be adjusted to pick up objects with larger footprints using the suction components 938.
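The unfolding step can be sketched as a simple spacing computation. The geometry (two suction components placed symmetrically inside the object's footprint with a fixed edge margin) and the margin value are assumptions introduced only for illustration:

```python
# Illustrative sketch: spacing between two suction components so that each
# sits a fixed margin inside the edge of the object's footprint. The
# symmetric geometry and the margin value are assumptions.

def suction_spacing(object_width_mm: float, edge_margin_mm: float = 50.0) -> float:
    """Center-to-center spacing of the two suction components."""
    return max(object_width_mm - 2.0 * edge_margin_mm, 0.0)

# A wider slip sheet requires the assembly to unfold further.
assert suction_spacing(600.0) == 500.0
assert suction_spacing(1000.0) > suction_spacing(600.0)
```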
Figs. 10A-10E are partially schematic illustrations of an object holding assembly 1000 at various stages of a process for gripping a target object 1002 in accordance with some embodiments of the present technology. Figs. 10A-10E omit the robotic arm or other mechanism for moving and positioning the object holding assembly 1000 to avoid obscuring aspects of the present technology.
Fig. 10A illustrates an object holding assembly 1000 (and/or a controller communicatively coupled thereto) detecting a target object 1002 within a field of view 1012 of an imaging component 1010. As discussed above, the detection may be based on various machine or computer vision algorithms, artificial intelligence, and/or machine learning algorithms, among others. Further, the detection may identify a type of the target object, an orientation of the target object, various dimensions of the target object, and so on. For example, in the illustrated embodiment, the detection may identify the target object 1002 as a stack of one or more trays, each of which may be individually placed at the bottom of a large shipping unit (e.g., a pallet of consumer goods) to provide structural support to that unit. The object holding assembly 1000 (and/or a controller communicatively coupled thereto) may then plan various tasks of the gripping operation to pick up, transport, and/or place the target object 1002. For example, the uppermost tray 1004 in the stack may have a width that differs from standard trays (e.g., due to missing wooden slats, common variations in tray sizes, etc.), requiring the object holding assembly 1000 (and/or a controller communicatively coupled thereto) to plan tasks specific to the uppermost tray 1004.
Figs. 10B-10E illustrate the object holding assembly 1000 transitioning from the fully folded state (fig. 10B) to a clamped state (fig. 10E) suitable for the uppermost tray 1004. As shown in fig. 10B, the uppermost tray 1004 has a width equal to a third distance D3. The transition process may begin with positioning the body 1001 of the object holding assembly 1000 within the longitudinal footprint of the tray defined by that width.
As shown in fig. 10C, the transition process may continue by moving the legs 1032 along the linear path A to extend the linear mechanism 1031 such that the opposing rotational components 1034 are spaced apart from each other by the third distance D3, and by actuating the rotational components 1034 from the raised position to the lowered position along the rotational path B. As a result, two or more mechanical clamping components 1036 (two shown on opposite sides in fig. 10C) can engage the uppermost tray 1004. In some embodiments, the transition process is complete in the arrangement shown in fig. 10C, and the uppermost tray 1004 can be lifted by the object holding assembly 1000. However, in some embodiments, the transition process includes additional steps to further improve the stability of the object holding assembly's grip on the uppermost tray 1004.
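Assuming the two legs extend symmetrically from a fully retracted spacing (an assumption for illustration; the text does not specify the kinematics), the per-leg extension that spaces the rotational components apart by D3 can be sketched as:

```python
# Hypothetical kinematics for fig. 10C: both legs extend symmetrically so
# the opposing rotational components end up the tray width D3 apart.

def leg_extension(d3_mm: float, retracted_spacing_mm: float) -> float:
    """Extension of each leg along linear path A from the retracted spacing."""
    if d3_mm < retracted_spacing_mm:
        raise ValueError("tray narrower than the fully retracted spacing")
    return (d3_mm - retracted_spacing_mm) / 2.0

# e.g. a 1200 mm wide tray and an 800 mm retracted spacing: 200 mm per leg.
assert leg_extension(1200.0, 800.0) == 200.0
```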
For example, as shown in fig. 10D, the transition process may include actuating one or more pressure cylinders 1040 (two shown on opposite sides in fig. 10D) along a vertical path E. The pressure cylinders 1040 further secure the uppermost tray 1004 by pressing it against the mechanical clamping components 1036. Accordingly, the pressure cylinders 1040 may help stabilize the uppermost tray 1004 during the various tasks of the gripping operation (e.g., by reducing the ability of the uppermost tray 1004 to shift on the mechanical clamping components 1036).
Fig. 10E is a detailed view of the object holding assembly 1000 fully transitioned into the clamped state. As shown in fig. 10E, the pressure cylinders 1040 may press down on the upper surface 1005a of the uppermost tray 1004 while the mechanical clamping components 1036 engage the lower surface 1005b. During the various tasks of the gripping operation, these opposing forces may secure the uppermost tray 1004, thereby preventing the uppermost tray 1004 from sliding, rocking, and/or otherwise moving off the mechanical clamping components 1036.
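The opposing-force condition of fig. 10E can be illustrated with a simple friction check. The Coulomb-friction model and all numeric values are assumptions introduced only to show why pressing the tray onto the clamps resists sliding:

```python
# Illustrative (assumed) friction model for fig. 10E: the pressure
# cylinders add normal load that pins the tray onto the mechanical clamps.

def holds_without_sliding(cylinder_force_n: float, tray_weight_n: float,
                          friction_coeff: float, lateral_force_n: float) -> bool:
    """True if friction at the clamp interface resists the lateral force."""
    normal_load_n = cylinder_force_n + tray_weight_n
    return friction_coeff * normal_load_n >= lateral_force_n

# With the cylinders engaged, the same disturbance is resisted; without
# them, the tray could slide off the clamps.
assert holds_without_sliding(200.0, 50.0, 0.4, 80.0) is True
assert holds_without_sliding(0.0, 50.0, 0.4, 80.0) is False
```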
Fig. 11 is an illustration of an object gripping system 1100 in accordance with further embodiments of the present technology. In the illustrated embodiment, the object gripping system 1100 includes a robotic arm 1102 and an object gripping assembly 1104 coupled to the robotic arm 1102 by an external connector 1108 of the object gripping assembly 1104. As further shown in fig. 11, the object gripping assembly 1104 is substantially similar to the object gripping assemblies discussed above.
For example, the object gripping assembly 1104 includes a body 1106, as well as an imaging component 1110 and a vacuum-operated gripping component 1120 each coupled to the body 1106. In addition, the object gripping assembly 1104 includes a variable width gripping component 1130 coupled to the body 1106. The variable width gripping component 1130 includes a linear extension component 1131 coupled to the body 1106 and configured to expand the longitudinal footprint of the object gripping assembly 1104. The variable width gripping component 1130 also includes a rotational component 1134 coupled to each side of the linear extension component 1131, a support plate 1135 coupled to each of the rotational components 1134, and one or more mechanical gripping components 1136 (two shown) coupled to each of the support plates 1135.
However, in the illustrated embodiment, the linear extension component 1131 includes an extendable track coupled to the body 1106 to increase and/or decrease the distance between the opposing rotational components 1134. In addition, the variable width gripping component 1130 omits the suction components. This omission may reduce the longitudinal footprint of the object gripping assembly 1104 in the fully folded state and/or the vertical footprint of the object gripping assembly 1104 in the clamped state. Both reductions may allow the object gripping assembly 1104 to operate in narrower spaces.
Examples
The present technology is illustrated, for example, according to various aspects described below. For convenience, various examples of aspects of the present technology are described as numbered examples (1, 2, 3, etc.). These are provided as examples and do not limit the present technology. It should be noted that any of the dependent examples may be combined in any suitable manner and placed into a respective independent example. Other examples may be presented in a similar manner.
1. An exemplary object holding assembly, comprising:
a body having an upper surface and a lower surface opposite the upper surface;
a vacuum-operated gripping member coupled to the lower surface, the vacuum-operated gripping member configured to engage a first target object of a first category; and
a variable width gripping member coupled to the body, the variable width gripping member comprising:
a first arm coupled to the body and having a first end region at a first longitudinal side of the body;
a first mechanical gripping component coupled to the first end region of the first arm, the first mechanical gripping component comprising one or more first grippers configured to engage a first side of a second target object of a second category, wherein the second target object is different in category from the first target object;
a first rotation member coupling the first mechanical clamping member to the first arm, the first rotation member movable between a raised position and a lowered position;
a second arm coupled to the body and having a second end region at a second longitudinal side of the body opposite the first longitudinal side;
a second mechanical gripping component coupled to the second end region of the second arm, the second mechanical gripping component comprising one or more second grippers configured to engage a second side of the second target object; and
a second rotation member coupling the second mechanical gripping member to the second arm, the second rotation member movable between the raised position and the lowered position;
wherein the first arm and the second arm are extendable in a linear direction to control a distance between the first end region and the second end region.
2. The assembly of embodiment 1 or a portion thereof, wherein the second target object of the second category comprises a pallet, and wherein the variable width gripping member further comprises:
one or more first suction components coupled to the first mechanical gripping component, wherein the one or more first suction components are configured to engage an upper surface of a third target object of a third category different from the first category and the second category, the third category including a slipsheet; and
one or more second suction components coupled to the second mechanical gripping component, wherein the one or more second suction components are configured to engage the upper surface of the second target object,
wherein the first target object of the first category comprises one or more parcels targeted for placement on the pallet and/or the slip sheet.
3. The assembly of any one of embodiments 1-2 or any portion thereof, wherein:
the first mechanical clamping member further comprises a first support plate coupled between the one or more first clamps and the first rotating member; and
the second mechanical clamping component further includes a second support plate coupled between the one or more second clamps and the second rotating component.
4. The assembly of any one of embodiments 1-3 or any portion thereof, wherein the one or more first clamps and the one or more second clamps are positioned to engage a lower surface of the second target object on the first side and the second side when the first and second rotational components are in the lowered position, and wherein:
the first mechanical clamping member further comprises one or more first pressure cylinders coupled to the first support plate and positioned to apply a compressive force on the first side to an upper surface of the second target object; and
the second mechanical clamping component further comprises one or more second pressure cylinders coupled to the second support plate and positioned to apply the compressive force to the upper surface of the second target object on the second side.
5. The assembly of any one of embodiments 1-4 or any portion thereof, further comprising an imaging sensor coupled to the body and positioned to collect image data of an object located proximate the object gripping assembly, wherein the imaging sensor is operably coupled to a controller to send the image data to the controller to determine a category of the object and whether to grip the object with the vacuum operated gripping member or with the variable width gripping member.
6. The assembly of any one of embodiments 1-5 or any portion thereof, wherein the vacuum-operated gripping component comprises a foam suction gripper configured to engage a second target object different from the first target object.
7. The assembly of any one of embodiments 1-6 or any portion thereof, further comprising an external connector coupled to the upper surface of the body and operably coupleable to a robotic arm, wherein the variable width gripping member is positioned to extend along a first axis, and wherein the external connector is positioned to control rotation of the body about a second axis orthogonal to the first axis.
8. The assembly of any one of embodiments 1-7 or any portion thereof, wherein the first rotational member and the second rotational member move 180 degrees between the raised position and the lowered position.
9. An exemplary method, comprising:
generating a command to position the object gripping assembly adjacent to a target object with a robotic arm;
generating commands to dynamically configure the object gripping assembly into a gripping state, the gripping state comprising at least one of a fully collapsed state, a plurality of extended states, and a clamped state;
generating a command to engage the target object once the object clamping component is in the clamped state;
generating a command to move the target object with the robotic arm from a first position to a second position spaced apart from the first position; and
generating a command to detach from the target object at the second location.
10. The method of embodiment 9, wherein:
the object holding assembly comprises:
a main body;
a vacuum operated clamping member coupled to a lower surface of the body; and
a variable width gripping member coupled to the body and comprising two or more mechanical gripping elements;
in the fully folded state, the variable width gripping members are fully retracted and the vacuum operated gripping members are positioned to define a lowermost surface of the object gripping assembly to engage the target object;
each of the plurality of extended states varies a distance between the two or more mechanical clamping elements of the variable width clamping component; and
in the clamped state, the two or more mechanical clamping elements are positioned to engage the target object.
11. The method of any one of embodiments 9-10 or any portion thereof, further comprising:
receiving image data of the target object; and
determining one or more characteristics of the target object from the image data, the one or more characteristics including at least one of: a category of the target object, an orientation of the target object, and a candidate grip location on the target object,
wherein the command to dynamically configure the object gripping component to the gripping state is based at least in part on the one or more characteristics of the target object determined from the image data.
12. The method of any one of embodiments 9-11, or any portion thereof, further comprising:
identifying an angle of the image data relative to a vertical axis; and
adjusting the image data using one or more distortions based on the identified angle.
13. The method of any one of embodiments 9-12, or any portion thereof, further comprising:
identifying an angle of the image data relative to a face of the target object; and
adjusting the image data using one or more distortions based on the identified angle.
14. The method of any one of embodiments 9-13, or any portion thereof, further comprising:
identifying a first angle of the image data relative to a face of the target object; and
generating instructions to generate new image data of the target object at a second angle that is orthogonal relative to the face of the target object.
15. The method of any one of embodiments 9-14 or any portion thereof, wherein the target object is a first target object and the grip state is a first grip state, and wherein the method further comprises:
generating a command to position the object gripping assembly adjacent to a second target object with the robotic arm;
generating a command to dynamically configure the object gripping assembly into a second gripping state, the second gripping state comprising at least one of the fully collapsed state, the plurality of extended states, and the clamped state;
generating a command to engage the second target object once the object gripping assembly is in the second gripping state;
generating a command to move the second target object with the robotic arm from a third position to a fourth position spaced apart from the third position; and
generating a command to detach the second target object at the fourth location.
16. The method of any one of embodiments 9-15 or any portion thereof, wherein the target object is one of a plurality of different target objects, wherein the plurality of different target objects includes one or more parcels, one or more pallets, and one or more skid plates, and wherein the method further comprises:
identifying an operation for manipulating the plurality of different target objects, the operation comprising a sequence of tasks corresponding to manipulating the plurality of different target objects, wherein the operation is a wrapping operation or an unwrapping operation; and
iteratively selecting each of the plurality of different target objects as the target object based on the identified operations and task sequences, and generating corresponding commands for positioning, dynamically configuring, engaging, moving, and disengaging the selected target object.
17. An exemplary robotic system, comprising:
a robot arm; and
an object gripping assembly coupled to the robotic arm, the object gripping assembly comprising:
a body having an upper surface and a lower surface opposite the upper surface;
an external connector coupled between the upper surface of the main body and the robotic arm;
a vacuum operated clamping member coupled to the lower surface of the body; and
a variable width clamping member coupled to the body, the variable width clamping member being movable between a fully collapsed state, a plurality of extended states, and a clamped state.
18. The robotic system of embodiment 17, wherein the variable width gripping member comprises:
a linear extension mechanism coupled to the body, the linear extension mechanism having a first distal region and a second distal region opposite the first distal region;
two rotation mechanisms each coupled to the first distal region and the second distal region of the linear extension mechanism; and
one or more mechanical grippers coupled to each of the rotating mechanisms.
19. The robotic system of any one of embodiments 17-18 or any portion thereof, wherein in the fully folded state:
the linear extension mechanism is retracted to position the first and second distal regions of the linear extension mechanism at a minimum distance apart; and
the rotation mechanisms are in a raised position, wherein the raised position of the rotation mechanisms guides the one or more mechanical grippers coupled to each of the rotation mechanisms away from the lower surface of the body.
20. The robotic system of any one of embodiments 17-18 or any portion thereof, wherein:
the object holding assembly further comprises an imaging sensor coupled to the body and positioned to collect image data of an object held by the vacuum operated holding member and/or the variable width holding member; and
the robotic system further includes a controller operably coupled to the imaging sensor, the vacuum operated gripping member, and the variable width gripping member to:
receiving the image data from the imaging sensor;
determining which of the fully collapsed state and the plurality of extended states to use to grip the object, the determination based on at least one of: a category of the object, an orientation of the object, and a candidate grip location on the object;
moving the variable width gripping member to the determined state; and
controlling the vacuum operated gripping member and/or the variable width gripping member to grip the object.
21. An example end effector assembly, comprising:
a main body;
a first clamp member coupled to and located on the body, the first clamp member configured to engage a first category of objects; and
a second clamp member coupled to the body, the second clamp member configured to engage a second category of objects.
22. The assembly of embodiment 21, further comprising a third clamp member coupled to the body and/or the second clamp member, the third clamp member configured to engage a third category of objects.
23. A system comprising a robotic arm operably coupled to the assembly of any of embodiments 21-22 or any portion thereof.
24. A system comprising a controller communicatively coupled to the assembly and/or system of any of embodiments 21-23 or any portion thereof, wherein the controller is configured to implement a method of operating the assembly and/or system to adjust a configuration of the assembly to selectively grip and/or transfer an object belonging to one of the first category, the second category, and the third category (e.g., as in any one or more of embodiments 9-16 or any portion thereof).
Conclusion
From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, and that well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the technology. In the event that any material incorporated by reference herein conflicts with the present disclosure, the present disclosure controls. Where the context permits, singular or plural terms may also include the plural or singular terms, respectively. Furthermore, unless the word "or" is expressly limited to mean only a single item exclusive of the other items in a list of two or more items, the use of "or" in such a list should be interpreted as including (a) any single item in the list, (b) all items in the list, or (c) any combination of items in the list. Further, as used herein, the phrase "and/or," as in "A and/or B," refers to A alone, B alone, and both A and B. Furthermore, the terms "comprising," "including," "having," and "with" are used throughout to mean including at least one or more of the recited features, such that any greater number of the same features and/or additional types of other features are not excluded.
It should also be appreciated from the foregoing that various modifications can be made without departing from the disclosure or techniques. For example, one of ordinary skill in the art can appreciate that the various components of the present technology can be further divided into sub-components, or the various components and functions of the present technology can be combined and integrated. Moreover, certain aspects of the techniques described in the context of particular embodiments may also be combined or eliminated in other embodiments. Moreover, while advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the present disclosure and associated techniques may encompass other embodiments not explicitly shown or described herein.

Claims (20)

1. An object holding assembly, comprising:
a body having an upper surface and a lower surface opposite the upper surface;
a vacuum-operated gripping member coupled to the lower surface, the vacuum-operated gripping member configured to engage a first target object of a first category; and
a variable width gripping member coupled to the body, the variable width gripping member comprising:
a first arm coupled to the body and having a first end region at a first longitudinal side of the body;
a first mechanical gripping component coupled to the first end region of the first arm, the first mechanical gripping component comprising one or more first grippers configured to engage a first side of a second target object of a second category, wherein the second target object is different in category from the first target object;
a first rotational component coupling the first mechanical gripping component to the first arm, the first rotational component movable between a raised position and a lowered position;
a second arm coupled to the body and having a second end region at a second longitudinal side of the body opposite the first longitudinal side;
a second mechanical gripping component coupled to the second end region of the second arm, the second mechanical gripping component comprising one or more second grippers configured to engage a second side of the second target object; and
a second rotation member coupling the second mechanical gripping member to the second arm, the second rotation member movable between the raised position and the lowered position,
wherein the first arm and the second arm are extendable in a linear direction to control a distance between the first end region and the second end region.
2. The object gripping assembly of claim 1, wherein the second target object of the second category comprises a pallet, and wherein the variable width gripping member further comprises:
one or more first suction components coupled to the first mechanical gripping component, wherein the one or more first suction components are configured to engage an upper surface of a third target object of a third category different from the first category and the second category, the third category including a slip sheet; and
one or more second suction components coupled to the second mechanical gripping component, wherein the one or more second suction components are configured to engage the upper surface of the third target object,
wherein the first target object of the first category comprises one or more packages targeted for placement on the pallet and/or the slip sheet.
3. The object gripping assembly of claim 1, wherein:
the first mechanical gripping component further comprises a first support plate coupled between the one or more first grippers and the first rotational component; and
the second mechanical gripping component further comprises a second support plate coupled between the one or more second grippers and the second rotational component.
4. The object gripping assembly of claim 3, wherein the one or more first grippers and the one or more second grippers are positioned to engage a lower surface of the second target object on the first side and the second side when the first and second rotational components are in the lowered position, and wherein:
the first mechanical gripping component further comprises one or more first pressure cylinders coupled to the first support plate and positioned to apply a compressive force on the first side to an upper surface of the second target object; and
the second mechanical gripping component further comprises one or more second pressure cylinders coupled to the second support plate and positioned to apply the compressive force to the upper surface of the second target object on the second side.
5. The object gripping assembly of claim 1, further comprising an imaging sensor coupled to the body and positioned to collect image data of an object located within a proximal side of the object gripping assembly, wherein the imaging sensor is operably coupled to a controller to send the image data to the controller to determine a category of the object and whether to grip the object with the vacuum-operated gripping member or the variable width gripping member.
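The sensor-to-controller decision recited in claim 5 — classify the imaged object, then pick between the two gripping members — can be sketched as follows. This is a minimal illustration only: the category names and the `select_gripping_member` helper are assumptions for clarity, not language from the claims.

```python
from enum import Enum, auto

class ObjectCategory(Enum):
    """Illustrative object categories; the claim recites only that the
    controller determines 'a category of the object' from image data."""
    PACKAGE = auto()     # first category: engaged by the vacuum-operated member
    PALLET = auto()      # second category: clamped by the variable width member
    SLIP_SHEET = auto()  # third category: also handled by the variable width member

def select_gripping_member(category: ObjectCategory) -> str:
    """Choose which gripping member engages an object of the given category."""
    if category is ObjectCategory.PACKAGE:
        return "vacuum_operated"
    return "variable_width"
```

In this sketch only packages go to the vacuum-operated member; pallets and slip sheets both fall to the variable width member, consistent with the structural claims above.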
6. The object gripping assembly of claim 1, wherein the vacuum-operated gripping member comprises a suction gripper configured to engage a second target object different from the first target object.
7. The object gripping assembly of claim 1, further comprising an external connector coupled to the upper surface of the body and operably coupleable to a robotic arm, wherein the variable width gripping member is positioned to extend along a first axis, and wherein the external connector is positioned to control rotation of the body about a second axis orthogonal to the first axis.
8. The object gripping assembly of claim 1, wherein the first and second rotational components move 180 degrees between the raised and lowered positions.
9. A method of operating a robotic system having an object gripping assembly, the method comprising:
generating a command to position the object gripping assembly adjacent to a target object with a robotic arm;
generating a command to dynamically configure the object gripping assembly into a gripping state, the gripping state comprising at least one of a fully collapsed state, a plurality of extended states, and a clamped state;
generating a command to engage the target object once the object gripping assembly is in the gripping state;
generating a command to move the target object with the robotic arm from a first position to a second position spaced apart from the first position; and
generating a command to disengage the target object at the second location.
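The five commands of the method claim above form a fixed pipeline, sketched below. The command strings and the `pick_and_place_commands` name are placeholders for whatever messages a real controller would emit; only the ordering comes from the claim.

```python
def pick_and_place_commands(target: str, grip_state: str,
                            src: str, dst: str) -> list[str]:
    """Generate the ordered command sequence recited in the method claim:
    position, configure, engage, move, disengage."""
    return [
        f"position_adjacent_to:{target}",      # position the assembly at the target
        f"configure_grip_state:{grip_state}",  # dynamically configure the gripping state
        f"engage:{target}",                    # engage once configured
        f"move:{src}->{dst}",                  # move between spaced-apart positions
        f"disengage_at:{dst}",                 # release at the second position
    ]
```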
10. The method of claim 9, wherein:
the object holding assembly comprises:
a main body;
a vacuum-operated gripping member coupled to a lower surface of the body; and
a variable width gripping member coupled to the body and comprising two or more mechanical gripping elements;
in the fully collapsed state, the variable width gripping member is fully retracted and the vacuum-operated gripping member is positioned to define a lowermost surface of the object gripping assembly to engage the target object;
each of the plurality of extended states varies a distance between the two or more mechanical gripping elements of the variable width gripping member; and
in the clamped state, the two or more mechanical gripping elements are positioned to engage the target object.
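The three configurations recited in claim 10 can be modeled as a small state type. This is an illustrative sketch: the enum names follow the wording of the method claim ("fully collapsed", "extended", "clamped"), while the spacing function and its millimeter parameters are assumptions.

```python
from enum import Enum

class GripState(Enum):
    """The three recited configurations of the object gripping assembly."""
    FULLY_COLLAPSED = "fully_collapsed"  # arms retracted; vacuum member is lowermost
    EXTENDED = "extended"                # arms at one of several widths
    CLAMPED = "clamped"                  # mechanical elements engage the target

def element_distance(state: GripState, min_mm: float, extension_mm: float) -> float:
    """Distance between the two mechanical gripping elements (illustrative).

    Fully collapsed leaves the elements at the minimum spacing; each
    extended state adds some linear travel on top of that minimum.
    """
    if state is GripState.FULLY_COLLAPSED:
        return min_mm
    return min_mm + extension_mm
```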
11. The method of claim 9, further comprising:
receiving image data of the target object; and
determining one or more characteristics of the target object from the image data, the one or more characteristics including at least one of: a category of the target object, an orientation of the target object, and a candidate gripping location on the target object,
wherein the command to dynamically configure the object gripping assembly into the gripping state is based at least in part on the one or more characteristics of the target object determined from the image data.
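Claim 11's flow — extract category, orientation, and a candidate gripping location from image data, then configure the gripping state from them — can be sketched as below. The dataclass fields mirror the recited characteristics; the selection rule itself is a hypothetical example, not claimed logic.

```python
from dataclasses import dataclass

@dataclass
class TargetCharacteristics:
    """The three characteristics claim 11 recites as derivable from image data."""
    category: str                    # e.g. "package", "pallet", "slip_sheet"
    orientation_deg: float           # in-plane rotation of the target
    grip_point: tuple[float, float]  # candidate gripping location

def choose_grip_state(c: TargetCharacteristics) -> str:
    """Map detected characteristics to a gripping state (illustrative rule:
    packages are taken by suction with the arms collapsed; everything else
    is mechanically clamped)."""
    return "fully_collapsed" if c.category == "package" else "clamped"
```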
12. The method of claim 11, further comprising:
identifying an angle of the image data relative to a vertical axis; and
adjusting the image data using one or more distortions based on the identified angle.
13. The method of claim 11, further comprising:
identifying an angle of the image data relative to a face of the target object; and
adjusting the image data using one or more distortions based on the identified angle.
14. The method of claim 11, further comprising:
identifying a first angle of the image data relative to a face of the target object; and
generating instructions to generate new image data of the target object at a second angle that is orthogonal relative to the face of the target object.
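Claims 12-14 describe two remedies for image data captured at an angle: warp the image with one or more distortions, or capture new image data orthogonal to the target face. A minimal decision sketch follows; the angle thresholds and function name are assumptions, as the claims recite no specific values.

```python
import math

# Assumed thresholds; the claims recite no specific angles.
WARP_LIMIT_DEG = 5.0        # below this deviation, use the image as captured
RECAPTURE_LIMIT_DEG = 30.0  # above this, re-image orthogonally (claim 14)

def plan_image_correction(view_dir, face_normal) -> str:
    """Decide how to handle image data captured at an angle to the target face.

    Computes the deviation of the camera's viewing direction from a head-on
    (orthogonal) view of the face; moderate deviations are corrected by
    warping the image (claims 12-13), large ones trigger new image data at
    an orthogonal angle (claim 14). Both arguments are 3-vectors.
    """
    dot = sum(a * b for a, b in zip(view_dir, face_normal))
    norm = math.dist((0, 0, 0), view_dir) * math.dist((0, 0, 0), face_normal)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    deviation = abs(180.0 - angle)  # 180 deg = looking straight at the face
    if deviation <= WARP_LIMIT_DEG:
        return "use_as_is"
    if deviation <= RECAPTURE_LIMIT_DEG:
        return "warp_with_distortions"
    return "recapture_orthogonally"
```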
15. The method of claim 9, wherein the target object is a first target object and the gripping state is a first gripping state, and wherein the method further comprises:
generating a command to position the object gripping assembly adjacent to a second target object with the robotic arm;
generating a command to dynamically configure the object gripping assembly into a second gripping state, the second gripping state comprising at least one of the fully collapsed state, the plurality of extended states, and the clamped state;
generating a command to engage the second target object once the object gripping assembly is in the second gripping state;
generating a command to move the second target object with the robotic arm from a third position to a fourth position spaced apart from the third position; and
generating a command to disengage the second target object at the fourth location.
16. The method of claim 9, wherein the target object is one of a plurality of different target objects, wherein the plurality of different target objects includes one or more packages, one or more pallets, and one or more slip sheets, and wherein the method further comprises:
identifying an operation for manipulating the plurality of different target objects, the operation comprising a sequence of tasks corresponding to manipulating the plurality of different target objects, wherein the operation is a packing operation or an unpacking operation; and
iteratively selecting each of the plurality of different target objects as the target object based on the identified operation and the sequence of tasks, and generating corresponding commands for positioning, dynamically configuring, engaging, moving, and disengaging the selected target object.
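The iteration in claim 16 applies the same per-object command cycle across a mixed set of targets, ordered by the identified operation. In the sketch below, the operation names and the rule that unpacking simply reverses the packing sequence are illustrative assumptions.

```python
def run_operation(operation: str, objects: list[str]) -> list[tuple[str, str]]:
    """Iterate the per-object command cycle over a mixed set of targets.

    For an illustrative packing operation the pallet goes down first and
    packages last; unpacking reverses that task sequence. The step names
    mirror the method claim's positioning / configuring / engaging /
    moving / disengaging commands.
    """
    ordered = list(objects) if operation == "packing" else list(reversed(objects))
    log: list[tuple[str, str]] = []
    for obj in ordered:
        for step in ("position", "configure", "engage", "move", "disengage"):
            log.append((step, obj))
    return log
```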
17. A robotic system, comprising:
a robot arm; and
an object gripping assembly coupled to the robotic arm, the object gripping assembly comprising:
a body having an upper surface and a lower surface opposite the upper surface;
an external connector coupled between the upper surface of the body and the robotic arm;
a vacuum-operated gripping member coupled to the lower surface of the body; and
a variable width gripping member coupled to the body, the variable width gripping member being movable between a fully collapsed state, a plurality of extended states, and a clamped state.
18. The robotic system as set forth in claim 17 wherein said variable width gripping member includes:
a linear extension mechanism coupled to the body, the linear extension mechanism having a first distal region and a second distal region opposite the first distal region;
two rotation mechanisms coupled to the first distal region and the second distal region of the linear extension mechanism, respectively; and
one or more mechanical grippers coupled to each of the rotation mechanisms.
19. The robotic system as set forth in claim 18 wherein in said fully collapsed state:
the linear extension mechanism is retracted to position the first and second distal regions of the linear extension mechanism at a minimum distance apart; and
the rotation mechanisms are in a raised position, wherein the raised position directs the one or more mechanical grippers coupled to each of the rotation mechanisms away from the lower surface of the body.
20. The robotic system as set forth in claim 17 wherein:
the object gripping assembly further comprises an imaging sensor coupled to the body and positioned to collect image data of an object gripped by the vacuum-operated gripping member and/or the variable width gripping member; and
the robotic system further includes a controller operably coupled to the imaging sensor, the vacuum-operated gripping member, and the variable width gripping member to:
receive the image data from the imaging sensor;
determine which of the fully collapsed state and the plurality of extended states to use to grip the object, the determination based on at least one of: a category of the object, an orientation of the object, and a candidate grip location on the object;
move the variable width gripping member to the determined state; and
control the vacuum-operated gripping member and/or the variable width gripping member to grip the object.
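One full controller cycle of claim 20 — read image data, pick a variable-width state, move the member, then grip — can be sketched as follows. The dictionary keys and command strings are assumptions standing in for the real sensor and actuator interfaces.

```python
def controller_cycle(image_data: dict) -> list[str]:
    """One control cycle: read image data, choose a state, move, then grip."""
    category = image_data.get("category", "package")
    # Packages keep the arms fully collapsed for a vacuum grip; larger
    # targets get one of the extended widths before clamping.
    if category == "package":
        state, member = "fully_collapsed", "vacuum_operated"
    else:
        state, member = f"extended:{image_data.get('width_mm', 0)}", "variable_width"
    return [f"move_to_state:{state}", f"grip_with:{member}"]
```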
CN202211009384.4A 2021-08-13 2022-08-15 Robot system with gripping mechanism and related systems and methods Pending CN115592691A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US202163232663P 2021-08-13 2021-08-13
US63/232,663 2021-08-13
US17/885,366 2022-08-10
US17/885,366 US20230052763A1 (en) 2021-08-13 2022-08-10 Robotic systems with gripping mechanisms, and related systems and methods
CN202210977869.6A CN115703244A (en) 2021-08-13 2022-08-15 Robot system with gripping mechanism and related systems and methods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210977869.6A Division CN115703244A (en) 2021-08-13 2022-08-15 Robot system with gripping mechanism and related systems and methods

Publications (1)

Publication Number Publication Date
CN115592691A (en) 2023-01-13

Family

ID=84887178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211009384.4A Pending CN115592691A (en) 2021-08-13 2022-08-15 Robot system with gripping mechanism and related systems and methods

Country Status (1)

Country Link
CN (1) CN115592691A (en)

Similar Documents

Publication Publication Date Title
CN110329710B (en) Robot system with robot arm adsorption control mechanism and operation method thereof
US11358811B2 (en) Vision-assisted robotized depalletizer
JP7362755B2 (en) Robotic palletization and depalletization of multiple item types
KR20210149091A (en) Robot and method for palletizing boxes
JP2022108283A (en) Robot system equipped with gripping mechanism
WO2023193773A1 (en) Robotic systems with object handling mechanism and associated systems and methods
JP7246602B2 (en) ROBOT SYSTEM WITH GRIP MECHANISM, AND RELATED SYSTEMS AND METHODS
CN115592691A (en) Robot system with gripping mechanism and related systems and methods
US20230278208A1 (en) Robotic system with gripping mechanisms, and related systems and methods
JP7264387B2 (en) Robotic gripper assembly for openable objects and method for picking objects
CN114474136A (en) Robot system with clamping mechanism
US20230050326A1 (en) Robotic systems with multi-purpose labeling systems and methods
CN217776996U (en) End effector and robot system
US20240132304A1 (en) Vision-assisted robotized depalletizer
TW202327943A (en) Stack containment structure
CN115258510A (en) Robot system with object update mechanism and method for operating the robot system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination