AU2023202325A1 - System and method for tug-boat line transfer - Google Patents

System and method for tug-boat line transfer

Info

Publication number
AU2023202325A1
Authority
AU
Australia
Prior art keywords
manipulator
tug
vessel
line
boat
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
AU2023202325A
Inventor
Vincent DEN HERTOG
Darren HASS
Oscar LISAGOR
Michael Shives
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Allan Ltd
Original Assignee
Robert Allan Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Allan Ltd filed Critical Robert Allan Ltd
Publication of AU2023202325A1 publication Critical patent/AU2023202325A1/en


Abstract

An apparatus, system, and method for transferring a line from a tug-boat to a vessel. One or more photosensors are trained on the vessel to detect a target position thereon. An upper manipulator is coupled to the line via an end-effector thereof and is configurable to position the end-effector relative to the target position to allow transfer of the line from the tug-boat to the vessel. One or more processors tracks the target position relative to the end-effector based on output from the photosensors. A lower manipulator is coupled to the tug-boat and the upper manipulator, and is configurable to orient the upper manipulator in accordance with an inertial frame. Computer-readable memory is coupled to the one or more processors and stores processor-executable instructions that, when executed, configure the one or more processors to execute one or more methods.

Description

SYSTEM AND METHOD FOR TUG-BOAT LINE TRANSFER

TECHNICAL FIELD
The disclosure relates generally to ship handling and escorting, and more particularly to systems and methods of transferring a line from a tug-boat to a marine vessel to facilitate towing by the tug-boat.
BACKGROUND
Marine vessels, such as large oil tankers, marine platforms, and unpowered barges, often need assistance in manoeuvring or traversing particular bodies of water, e.g. narrow canals or crowded harbours. A variety of tug-boats are used to assist such marine vessels.
Ship-handling (ship-assist) or harbour towage tug-boats assist large ships in harbour because such ships do not have sufficient steering control at low speeds. Salvage or rescue towing tug-boats assist ships in distress, e.g. by towing ships that have lost power, have run out of fuel, or cannot be operated due to fire or other circumstances. Escort tugs are used to provide steering and braking functions to oil tankers, cargo ships, bulk carriers, and other such marine vessels, and operate at higher speeds. Such tugs are an important component in reducing the risk of oil spills and ecological damage due to such oil spills. Tug-boats may also be used to tow unpowered barges between coastal ports and/or tow offshore oil platforms and wind turbine platforms to locations far from the coast. The latter type of tug-boat may more commonly encounter rough weather and sea conditions. In some cases, multiple tug-boats may tow a single marine vessel.
In many cases, tug-boats use towlines to tow marine vessels; the towlines couple the tug-boats to the marine vessels to allow pulling thereof. In a typical operation for coupling a tug-boat and a marine vessel, the tug-boat first positions itself sufficiently proximal to the marine vessel to allow transfer of the towline from the tug-boat to the marine vessel. A crew member on the marine vessel then throws an end of a messenger line, or transfer cable, to an operator on the tug-boat while retaining the other end of the messenger line. The operator on the tug-boat attaches an end of the towline to the messenger line, while the other end of the towline is coupled to the tug-boat. The crew member then uses the other end of the messenger line to pull the end of the towline and the messenger line back together onto the marine vessel. To complete the coupling operation, the end of the towline is attached to the marine vessel, e.g. by engaging a ring or loop attached to the end of the towline with a bollard or tow hook of the marine vessel. In variants of this operation, the towline can be transferred from the marine vessel to the tug-boat, instead of vice versa.
Once coupled to the marine vessel, the tug-boat can pull the marine vessel by tensioning the towline to allow movement or manoeuvring of the marine vessel. Tensioning is typically accomplished by moving the tug-boat away from the marine vessel under engine power of the tug-boat. In many cases, the towline is coupled to a winch, or other force leveraging device, on the tug-boat. In this case, at least some tensioning can be achieved by operating the winch, or other force leveraging device, to pull the towline away from the marine vessel.
Once towing operations are completed, the towline is detached from the marine vessel. A crew member on the marine vessel then attaches the towline to an end of the messenger line and allows it to be lowered back to the operator on the tug-boat. The operator removes the towline from the messenger line, which is then pulled back by the crew member.
Such processes for establishing a towline between a tug-boat and a marine vessel may be complex, carry a high risk of loss to life, limb, and property, and can be unreliable. The manual nature of operations involved in line transfer often requires multiple crew members on the tug-boat and/or the marine vessel. Collisions between marine vessels and tug-boats can have catastrophic consequences, particularly when the size and mass disparity between the marine vessel and the tug-boat is large, e.g. container ships can have a design length that is 10-20 times longer than a typical harbour towage tug-boat. As such, tug-boat operations may be limited in rough weather and sea conditions. All these aspects increase costs and reduce efficiency.
Remotely-controlled, semi-autonomous, or autonomous tug-boats have been proposed to improve safety, e.g. fully unmanned surface vessels (USVs) with automatic mooring systems and/or automatic anchoring systems. Such tug-boats may be equipped with digital camera arrangements, RADAR, LiDAR, positioning systems (GPS), and motion sensors for navigation and collision avoidance. The transfer of towlines to and from such remotely-controlled tug-boats is achieved by cranes or robot arms having a free end with a grasping tool that is used to hand towlines to a seaman on assisted vessels. An electronic control unit is configured to compensate for the movement of the tug-boat in order to keep the free end of the crane arm substantially motion free relative to a global coordinate system. Alternatively, the electronic control unit may receive input from a motion and/or position sensor on board the assisted vessel and may keep the free end of the crane arm substantially motion free relative to the assisted marine vessel. The crane may have a telescopic element and may be provided with sensors, e.g. to sense the force/load acting on the crane.
Aerial drones have also been proposed for transferring lines from a tug-boat to a marine vessel. In rough sea conditions, the aerial drones may not be able to keep up with movements of the marine vessel and/or tug-boat. Large unwieldy drones may be needed to support the weight of the messenger line/towline, which increases with increasing distance of the tug-boat from the marine vessel (standoff distance).
In general, the cost, complexity, and risks associated with deploying (partially or completely) automated line transfer systems can stymie adoption. Among other things, improvements in the safety, reliability, robustness, flexibility, speed, level of automation, and/or operational envelope are desired.
SUMMARY
Uncrewed tug-boats may represent an opportunity to de-risk challenging operations and improve performance. The term uncrewed may refer to tug-boats operated remotely by trained personnel, or those with full or partial autonomy. In some cases, a remotely operated tug may be gradually adapted for more autonomy. A particular challenge for uncrewed tug-boat operations is the retrieval of a line from, or the transfer of a line to, an assisted vessel (marine vessel).
Bow-tug operations, in particular, present challenging conditions for a line transfer system (LTS) configured to transfer a line to or from (retrieval) an assisted vessel. Bow-tug operations involve tethering a tug-boat to the assisted ship's bow via a towline connected to an appropriate towing fitting on the ship's deck. Once tethered, the tug may execute a variety of escort and ship handling manoeuvres. Bow-tug operations have also traditionally been particularly hazardous for a tug-boat crew and thus crew reduction or operation from a distance is desirable.
The ability to safely operate tug-boats in a wide variety of sea conditions is important for reducing costs, e.g. by reducing supply chain backlogs, which may also reduce emissions from waiting ships. Autonomous, semi-autonomous, or remote operability may facilitate round-the-clock operation.
Potential concepts for safe line transfer include using drones, line-catching systems or manipulator-based arms (robots). Robots may be most robust in inclement weather, and most adaptable to purposes other than line transfers.
For safe operation, the towline may be robotically transferred from the tug-boat to the ship by the robot. However, the robot may undergo a variety of secondary motions due to ocean, wind, tug-boat, and ship dynamics, each of which may be associated with motion having one or more characteristic frequencies (or a characteristic range of frequencies). These frequencies are not necessarily the same. Robot actuators and sensors, additionally, may have their own characteristic frequencies. Compensating for these various motions while directing the robot to transfer the line may be complex and challenging.
Additionally, the robot may use an end-effector to transfer the line. However, using a gripper on the end-effector can put undue stress on deck equipment that is in contact with the line, e.g. a winch may be pulled upwards, and can damage the line itself. In some cases, use of a gripper may result in reduced reliability and undesirably high complexity. Crew members handling the towline may be pulled by the robot via the line when the crew members first reach out to receive the towline from the robot for attaching to the vessel.
In one aspect, a computer-implemented method of transferring a line from a tug-boat to a vessel is described. The method includes (a) configuring a lower manipulator coupled to the tug-boat to orient an upper manipulator in accordance with an inertial frame; (b) tracking, by one or more processors, a target position on the vessel relative to an end effector of the upper manipulator based on output of one or more photosensors trained on the vessel; and (c) configuring the upper manipulator to position the end-effector relative to the target position, the end-effector coupled to the line to allow transfer of the line from the tug-boat to the vessel.
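By way of illustration only, the following is a minimal sketch of how steps (a) to (c) might be sequenced within a single control loop. The interfaces used here (imu.read_orientation, photosensors.detect_target, and the lower/upper manipulator command methods) are assumptions introduced for this sketch and are not part of the disclosure.

```python
# Illustrative sketch only: one possible sequencing of steps (a)-(c).
# All interfaces (lower, upper, imu, photosensors) are hypothetical.
import time

def line_transfer_loop(lower, upper, imu, photosensors, dt=0.02):
    while not upper.line_transferred():
        # (a) lower manipulator: keep the upper manipulator's base oriented
        #     in accordance with the inertial frame using IMU measurements
        roll, pitch, yaw = imu.read_orientation()
        lower.command_orientation(-roll, -pitch, -yaw)

        # (b) track the target position on the vessel relative to the end-effector
        target = photosensors.detect_target()               # 3-D point, end-effector frame
        separation = target - upper.end_effector_position()

        # (c) upper manipulator: position the end-effector relative to the target
        upper.command_end_effector(separation)

        time.sleep(dt)
```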
In some aspects, the step (a) may include receiving measurements from an inertial measurement unit (IMU). The IMU may include one or more accelerometer(s), gyroscopic instrument(s), and/or other device(s) to measure angular and linear accelerations. The measurements may be indicative of accelerations of the lower manipulator measured relative to an inertial frame.
As referred to herein, "inertial frame" includes reference frames that are approximately inertial, e.g. earth-fixed frames, or inertial relative to the lower manipulator. In some cases, the inertial frame may be linearly accelerating or rotating with respective linear acceleration(s) and/or angular velocities and accelerations much lower than linear acceleration(s) and/or angular velocities and/or accelerations of the lower manipulator, e.g. enough to render the accelerations or rotations of the inertial frame negligible relative to accelerations or rotations of the lower manipulator.
In some aspects, the step (a) may include maintaining angular velocities and accelerations of a base of the upper manipulator substantially similar to corresponding rotational velocities and accelerations of the inertial frame, e.g. rotational velocities and accelerations of the earth-fixed frame.
In some aspects, the step (a) may include maintaining a base of the upper manipulator substantially motionless relative to the inertial frame. In various embodiments, the inertial frame may be a constant velocity frame.
In some aspects, the step (a) may include orienting the upper manipulator by rotating or orienting a base of the upper manipulator.
In some aspects, the step (b) may include determining the target position by processing the output of the one or more photosensors, including to identify a target. In some embodiments, the step (b) may include determining a target. In various embodiments, the target may be an object on a deck of the vessel, a seaman on the deck of the vessel, or a portion of a hull of the vessel exposed to the photosensor.
In some aspects, step (b) may include processing the output in accordance with a machine vision model to detect a target. In various embodiments, the machine vision model may be a supervised machine learning model. In various embodiments, the supervised machine learning model may be trained to detect targets in images.
In some aspects, the step (b) may include correlating the output to an earlier output to determine (relative) movement of the vessel between a time associated with the output and an earlier time associated with the earlier output. In some embodiments, tracking a target position may include determining coordinates of a target at a single time, at successive times, or at a single time relative to an earlier time.
In some aspects, the step (b) may include determining three-dimensional coordinates of the target position.
In some aspects, the target position relative to the end-effector may be a relative change of the target position between a time associated with the output and an earlier time associated with an earlier output of the one or more photosensors.
In some aspects, the step (b) may include determining a separation vector (a vectorial distance) between the target position and the end-effector. In various embodiments, the separation vector may be a three-dimensional vector.
In some aspects, the step (b) may include determining a distance between the target position and the end-effector.
In some aspects, the output of the one or more photosensors may be processed to generate a three-dimensional representation, including of a target on the marine vessel. In some embodiments, the one or more photosensors may be coupled to (or used in conjunction) with one or more light sources for LIDAR. In some embodiments, the one or more light sources may be coherent light sources (LASER(s)). In some embodiments, the output may include data indicative of reflected light for use in LIDAR. In some embodiments, the one or more photosensors may include a plurality of optical sensors configured for stereophotography or 3D vision to determine distances of objects in images generated by the plurality of optical sensors.
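As a purely illustrative aid, once the photosensor output has been reduced to a three-dimensional target position, the separation vector and distance of step (b) might be computed as in the sketch below. The coordinate values and the choice of an end-effector-centred frame are assumptions made for this sketch only.

```python
# Illustrative sketch: separation vector and distance for step (b).
# Coordinates are placeholders expressed in an assumed end-effector frame.
import numpy as np

target_position = np.array([6.2, -1.5, 18.4])        # target on the vessel, metres
end_effector_position = np.array([0.0, 0.0, 0.0])    # origin of the end-effector frame

separation_vector = target_position - end_effector_position   # three-dimensional vector
distance = float(np.linalg.norm(separation_vector))           # scalar distance, metres
```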
In some aspects, step (c) may include configuring the upper manipulator separately or independently from the lower manipulator.
In some aspects, the line may be at least one of a towline or a messenger line.
In some aspects, the end-effector may hold the line to allow transfer of the line from the tug-boat to the vessel.
In some aspects, the line may be a towline, and the end-effector may be coupled to the towline via a connector line. In various embodiments, the end-effector may be attached to a first end of the connector line, a second end of the connector line being captively engaged with the towline for sliding of the second end along the towline to allow the end effector to pull the towline via the connector line. In various embodiments, the second end may abut against a retainer attached to the towline while the end-effector is pulling the towline to prevent disengagement of the towline from the connector line.
In some aspects, configuring the lower manipulator or the upper manipulator may include actuating one or more actuators of, respectively, the lower manipulator or the upper manipulator. Actuation of actuators may include transmitting a signal to the actuators and/or receiving data indicative of a state of the actuators. The data may be received from an encoder or transducer.
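For illustration, actuating an actuator as described above could take the form sketched below, in which a command signal is transmitted and state data is read back from an encoder or transducer; the Actuator interface is hypothetical and not taken from the disclosure.

```python
# Illustrative sketch: commanding an actuator and reading back its state.
from dataclasses import dataclass

@dataclass
class ActuatorState:
    position: float   # joint angle (rad) or extension (m), from an encoder/transducer
    velocity: float

class Actuator:
    def send_command(self, setpoint: float) -> None:
        raise NotImplementedError   # transmit a signal to the drive electronics

    def read_state(self) -> ActuatorState:
        raise NotImplementedError   # receive data indicative of the actuator state

def configure_joint(actuator: Actuator, setpoint: float) -> ActuatorState:
    actuator.send_command(setpoint)
    return actuator.read_state()
```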
In some aspects, the lower manipulator may be a parallel manipulator and/or the upper manipulator may be a serial manipulator or articulated robot.
In some aspects, the upper manipulator may be rotatably coupled to the lower manipulator. In some embodiments, a coupler of the upper manipulator may be engaged with a rotating component aligned for rotation about a vertical axis and drivably coupled to a motor. The rotating component may include a sprocket, one or more gears, and/or a toothed belt (synchronous belt).
In some aspects, the upper manipulator may include at least two links pivotably coupled to each other. A first link may be coupled to a rotating coupler connected to the lower manipulator. A second link may be pivotably coupled to the first link.
In some aspects, the upper manipulator may include a telescoping boom. In some embodiments, the end-effector may be pivotably coupled to an (upper) end of the telescoping boom.
In some aspects, the photosensor may be mounted on the end-effector or on the telescoping boom adjacent to the end-effector.
In some aspects, the photosensor may be rotatably and/or translatably mounted on the end-effector to allow positioning of the photosensor to train the photosensor on the vessel.
In one aspect, a non-transitory computer-readable medium is described. The non-transitory computer-readable medium may have stored thereon machine interpretable instructions which, when executed by one or more processors, cause the one or more processors to perform one or more of the computer-implemented methods described previously.
In one aspect, a system for transferring a line from a tug-boat to a vessel is described. The system includes a lower manipulator mounted on to the tug-boat; an upper manipulator defining an end-effector coupled to the line, the upper manipulator being coupled to the lower manipulator for orientation of the upper manipulator by the lower manipulator; one or more photosensors trained on the vessel to detect a target position on the vessel; one or more processors connected to the one or more photosensors to receive output from the one or more photosensors indicative of the target position, the lower manipulator and the upper manipulator being actuatably connected to the one or more processors; computer-readable memory coupled to the one or more processors and storing processor-executable instructions that, when executed, configure the one or more processors to cause: (a) configuring of the lower manipulator to orient the upper manipulator in accordance with an inertial frame, (b) determining, by the one or more processors, a separation vector between the target position and the end-effector based on the output of the one or more photosensors, and (c) configuring of the upper manipulator based on the separation vector to position the end-effector in proximity of the vessel to allow transfer of the line from the tug-boat to the vessel.
In one aspect, there is described an apparatus for transferring a line from a tug-boat to a vessel. The apparatus includes one or more photosensors configured to be trained on the vessel to detect a target position on the vessel; an upper manipulator defining an end effector configured to be coupled to the line, the upper manipulator being configurable to position the end-effector relative to the target position to allow transfer of the line from the tug-boat to the vessel; and a lower manipulator coupled to the tug-boat and the upper manipulator, the lower manipulator being configurable to orient the upper manipulator in accordance with an inertial frame.
Embodiments can include combinations of the above features. It is understood that labelled steps of methods do not necessarily imply a particular sequence of steps to be taken.
Further details of these and other aspects of the subject matter of this application will be apparent from the detailed description included below and the drawings.
DESCRIPTION OF THE DRAWINGS
Reference is now made to the accompanying drawings, in which:
FIG. 1A is a side elevation of a tug-boat with a line transfer system for transferring a line to/from the tug-boat to a vessel, in accordance with an embodiment;
FIG. 1B is an enlarged view of region 1B in FIG. 1A, in accordance with an embodiment;
FIG. 2A is a perspective view of a deck of the tug-boat showing the line during a deployment phase of a line transfer procedure, in accordance with an embodiment;
FIG. 2B is a perspective view of a deck of the vessel showing the line during an operational phase of the line transfer procedure, in accordance with an embodiment;
FIG. 3A is a side elevation view of an upper manipulator coupled to a lower manipulator and in a stowed configuration, in accordance with an embodiment;
FIG. 3B is a side elevation view of the upper manipulator coupled to the lower manipulator and in a partially extended configuration, in accordance with an embodiment;
FIG. 3C is a side elevation view of the upper manipulator coupled to the lower manipulator and in a fully extended configuration, in accordance with an embodiment;
FIG. 3D is a plan view of the upper manipulator coupled to the lower manipulator and in a fully extended configuration, in accordance with an embodiment;
FIG. 4A is a perspective view of the lower manipulator, in accordance with an embodiment;
FIG. 4B is a front elevation view of the lower manipulator, in accordance with an embodiment;
FIG. 5 is a schematic block diagram of a control system of the line transfer system, in accordance with an embodiment;
FIG. 6 is a schematic block diagram of a lower manipulator control system of the line transfer system, in accordance with an embodiment;
FIG. 7 is a schematic block diagram of an upper manipulator controller system of the line transfer system, in accordance with an embodiment;
FIG. 8 illustrates a block diagram of a computing device, in accordance with an embodiment of the present application; and
FIG. 9 is a flow chart of a computer-implemented method of transferring a line from a tug-boat to a vessel.
DETAILED DESCRIPTION
The following disclosure relates to tug-boats and line transfer systems for such tug-boats. In some embodiments, the systems and methods disclosed herein can facilitate safer and more reliable line transfer compared to existing line transfer systems.
In various embodiments, a line transfer system (LTS) may generally include some or all of a retractable arm mounted to the tug-boat's deck; a set of actuators, joints, and/or linkages to allow the arm to compensate for relative motion between the tug-boat and ship; a set of sensors and controllers to achieve motion compensation; a human-machine interface, e.g. including a graphical user interface, to allow a remote operator to control the arm; an end-effector that allows the arm to retain the towline; and an interface to the tug-boat's winch to coordinate the extension/retraction of the arm with spooling out/in the towline.
In various embodiments, the LTS may include a crane having a telescopic boom, and coupled to a motion compensation device, such as a Stewart platform. Stewart platforms may be platforms similar to those used in flight-simulators and some motion compensated offshore crew transfer gangways.
In various embodiments, it is found to be desirable to compensate for the tug-boat's motions (also referred to as tug motions) as low down in the kinematic chain as possible, hence locating the primary compensation device (the Stewart platform) at the base.
In various embodiments, an LTS system may be used for line transfer and retrieval, with components thereof being stowed away during other phases of operation, e.g. during towing or manoeuvring of the assisted vessel. In various embodiments, the LTS system may use active motion compensation during line transfer and retrieval. In various cases, line transfer and retrieval operations may take place while the assisted vessel is underway or docked. In various embodiments, an LTS capable of conducting line transfer while a marine vessel is underway may also be capable of conducting such operations for a docked ship.
A towline is connected to an appropriate towing fitting on the assisted marine vessel's deck. The connection point on the assisted vessel may also be a recessed bit.
During an approach phase, a tug-boat may approach an assisted marine vessel's bow, match speed therewith and position itself to complete line transfer. For example, in some embodiments, particularly for bow-tug transfers, the position may be directly ahead of the ship or at the port/starboard bow position.
The tug-boat may be configured for station-keeping once positioned, i.e. to maintain a substantially constant position relative to the marine vessel, while underway or while stationary. In various embodiments, station-keeping may be relatively imperfect. For example, station-keeping may maintain the tug-boat's position relative to the marine vessel within a predetermined tolerance, in terms of distance between the assisted vessel and the tug-boat (safe working distance or tug/ship separation). In various embodiments, particularly in significant or rough sea conditions, the relative motion of tug-boat and marine vessel may cause faster and larger amplitude motions of the tug-boat relative to the marine vessel compared to the deviations attributable to imperfect station-keeping. In various embodiments, the station-keeping ability of the tug-boat may be an important factor for the LTS as the LTS may be able to, or alternatively be required to, articulate parts of itself to correct for deviations of the tug-boat's position.
During a line transfer phase, the LTS may extend or articulate parts of itself to bring the towline, or a (lighter) messenger line attached to the end of the towline, above the marine vessel's deck where it can be grabbed or hooked by deck crew of the marine vessel. Simultaneously, the winch may pay out sufficient line to enable parts of the LTS to move, to allow extension and articulation, while mitigating excess slack. The deck crew may then fix the towline to an appropriate towing fitting on the marine vessel.
Once the line transfer phase is complete, part of the LTS may retract to return the LTS to its stowed position.
During a line retrieval phase, the LTS may extend or articulate to a position above the marine vessel's bow. The marine vessel's deck crew may then release the line from the towing fitting on the marine vessel. The LTS may then retract to its stowed position while the winch manages the line slack.
During all phases of operation, the LTS may be configured to maintain adequate separation from the marine vessel to avoid the risk of contacting the marine vessel.
In some embodiments, the tug-boat may be controlled by a human remote operator (master). In some embodiments, the LTS may also be partially controlled by a human remote operator, either the master or a second person depending on the total cognitive load of operating both the tug-boat and LTS. Operation may be achieved via a human control interface. In some embodiments, the LTS user inputs may directly control the translational degrees of freedom (x,y,z) of the LTS end effector. In various embodiments, the LTS may automatically compensate for motions induced by wind, waves, and imperfect station keeping. In various embodiments, compensation may be facilitated by sensing the tug-boat's motion relative to a fixed frame of reference (earth or inertial frame) and sensing the end-effector position relative to the marine vessel's deck.
In some embodiments, control of the LTS may be a hybrid of human inputs and automated stabilization. A remote human operator may use visual feedback from cameras on the tug-boat and/or end-effector to actively control a set-point position (x,y,z coordinates). In this manner, the human operator may be able to deploy the end-effector to an appropriate position above the marine vessel's deck with minimal cognitive effort. In various embodiments, the manipulation of individual joints within the LTS may be automated to track the position set-point efficiently and to remove wave/wind induced motions to stabilize the end-effector.
In some embodiments, once the operator has caused positioning of the end-effector at an appropriate location above the marine vessel's deck, a 'lock-on' command may be issued (or generated or transmitted), which may cause transferring of control of the (x,y,z) set-point to a vision-based controller. In various embodiments, the vision-based controller may use visual or other electromagnetic input data to track movements of the end-effector position relative to the marine vessel's deck and manipulate a long-arm (e.g. as part of an upper manipulator) to maintain the locked-on position. In this manner, when locked on, the end-effector may follow wave-induced movements of the marine vessel.
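A minimal sketch of such a hand-over is given below. It assumes a hypothetical vision_tracker object that can lock on to, and subsequently report, a deck-relative position; the mode names and interfaces are illustrative only.

```python
# Illustrative sketch: handing the (x, y, z) set-point from the remote operator to a
# vision-based controller after a 'lock-on' command. All names are hypothetical.

OPERATOR = "operator"
VISION = "vision"

def select_setpoint(mode, operator_xyz, vision_tracker):
    if mode == OPERATOR:
        return operator_xyz                     # operator drives the set-point directly
    return vision_tracker.locked_position()     # set-point follows the tracked deck position

def on_lock_on_command(vision_tracker, current_setpoint):
    vision_tracker.lock(current_setpoint)       # freeze the position relative to the deck
    return VISION                               # subsequent set-points come from vision
```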
In various embodiments, sensing the tug-boat's accelerations, velocities, orientation and seakeeping movements may be achieved via accelerometer and inclinometer (inertial measurement unit, IMU) technologies. In some embodiments, tracking the marine vessel's movements may be achieved via a vision or other electromagnetic sensing system.
In various embodiments, the motion compensation method may be based on separation of tasks between the Stewart platform and the crane. The Stewart platform may compensate for the tug-boat's angular motions relative to earth's frame of reference (roll, pitch and yaw), whereas the crane component may compensate for the relative translational motion between the end-effector and a target location above the assisted marine vessel's deck.
In various embodiments, methods of motion compensation may require computing inverse kinematics, e.g. the required joint displacements (whether angular or translational) to achieve the desired motion. For the Stewart platform, the desired motion may be to maintain a level surface (also aligned in yaw with the course), but free to translate with a tug-boat's surge, sway and heave motions. In various embodiments, the Stewart platform may be a closed-chain (or parallel) manipulator, and its inverse kinematics may have a (unique) closed form solution. Its forward kinematics (i.e. the platform position and orientation as a function of the piston lengths) may have multiple solutions. In various embodiments, the crane is an open-chain (or serial) manipulator with unique and closed form forward kinematics, and with inverse kinematics that can be solved readily either by computing the system Jacobian by finite differencing, or using an optimization algorithm.
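For the crane-side inverse kinematics, the finite-difference Jacobian approach mentioned above could be sketched as follows. Here forward_kinematics() stands in for the crane's closed-form forward kinematics, and the damping factor, step size and tolerance are assumed values rather than values from the disclosure.

```python
# Illustrative sketch: inverse kinematics of a serial (open-chain) manipulator by
# finite-differencing the forward kinematics and taking damped least-squares steps.
import numpy as np

def numerical_jacobian(forward_kinematics, q, eps=1e-6):
    """Estimate d(end-effector position)/d(joint variables) by finite differences."""
    p0 = forward_kinematics(q)
    J = np.zeros((p0.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (forward_kinematics(q + dq) - p0) / eps
    return J

def solve_ik(forward_kinematics, q, target, iters=50, damping=1e-3, tol=1e-4):
    """Iteratively adjust joint variables q so the end-effector reaches target."""
    for _ in range(iters):
        error = target - forward_kinematics(q)
        if np.linalg.norm(error) < tol:
            break
        J = numerical_jacobian(forward_kinematics, q)
        # damped least-squares (Levenberg-Marquardt style) step
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(error.size), error)
        q = q + dq
    return q
```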
In various example embodiments, the LTS may be configured to achieve one or more of the functional parameters in TABLE 1. For example, the LTS reach requirements may be suitable to allow operations with ships ranging in size up to Neopanamax (200 000 tonnes, length 366m, vertical reach 25m). The performance characteristics and functional parameters may be suitable for bow-tug operations within, and potentially outside, current standard operational envelopes for such operations.
TABLE 1
Description | Range | Notes
Ship forward speed | 0-8 knots | A maximum of 6 knots in some cases
Safe working distance, ahead (off ship bow) | > 5 m |
Safe working distance, lateral (port/starboard) | > 5 m |
LTS reach, vertical | > 25 m | Simultaneously achievable reach
LTS reach, horizontal | > 20 m | Simultaneously achievable reach
Slewing | ±170° | 360° in some cases
End-effector position tolerance, vertical (relative to ship deck) | < 50 cm |
End-effector position tolerance, horizontal (relative to ship deck) | < 50 cm |
Extension/retraction time | < 120 s |
Towline diameter (operational) | Up to 8.0 cm |
Towline weight (operational) | Up to 3.6 kg/m |
Advantageously, the LTS may be configured for operation under a variety of environmental factors. For example, operations may be conducted during inclement weather including poor visibility, freezing spray, strong winds and various sea states. Such environmental factors may cause tug-boat and marine vessel motions that the LTS may need to be able to compensate for.
In various example embodiments, the LTS may be configured to achieve effective operation at least under the environmental factors listed in TABLE 2. For example, wind may have associated windage loads and effects on tug motion that may be accounted for by the LTS; and wave-induced motions may be compensated by the LTS.
TABLE 2
Description | Range
Wind speed | 0-30 knots
Wind direction | 0-360° (any direction)
Wave significant height | 0-2 m
Wave peak period | 5-12 s
Wave relative heading | 0-360°
Ambient temperature | -30 °C to 45 °C
Aspects of various embodiments are described in relation to the figures.
FIG. 1A is a side elevation of a tug-boat 10 with a line transfer system or LTS 12 for transferring a line 14 to/from the tug-boat 10 to a vessel 16, in accordance with an embodiment.
FIG. 1B is an enlarged view of region 1B in FIG. 1A, in accordance with an embodiment.
In some cases, the LTS 12 may allow the tug-boat to maintain a standoff distance 4 from the vessel 16 of between 7 and 10 m, and may extend up to a height 8 from a deck of the tug-boat 10 of between 20 and 40 m, while the line 14 is being transferred from the tug-boat 10 to the vessel 16. For example, the vessel 16 may have above-water height 8 of less than m.
The LTS 12 may include a lower manipulator 18 and an upper manipulator 20. In various embodiments, the lower manipulator 18 may be a parallel manipulator and/or the upper manipulator 20 may be a serial manipulator or articulated robot.
Parallel manipulators may refer to mechanical systems that use several links arranged (functionally) in parallel or partially in parallel to support a single platform, or end-effector. To achieve parallelism of the links, at least two independent links, or at least two independent chains of links, may be coupled directly to the single platform. The two links, or two chains of links, may be actuated independently to achieve articulated motion of the single platform or end-effector. For example, a Stewart platform or Gough-Stewart platform is a parallel manipulator that comprises six linear actuators supporting a movable (or orientable) base.
Serial manipulators may refer to manipulators comprising a series of links connected by motor-actuated joints that extend from a base to an end-effector.
Parallel manipulators and serial manipulators may also refer to manipulators that are, respectively, partially parallel manipulators and partially serial manipulators.
The lower manipulator 18 may be coupled to the tug-boat 10 and the upper manipulator 20. For example, the lower manipulator 18 may be affixed to the tug-boat 10 by means of fasteners, such as bolts, including to prevent relative motion between the lower manipulator 18 and the tug-boat 10. The lower manipulator 18 may be configurable to orient the upper manipulator 20 in accordance with an inertial frame 22. In some cases, this orientation may be accompanied by translation. In some embodiments, the lower manipulator 18 may be configurable to orient, in a (substantially) translation-free manner, the upper manipulator 20 in accordance with an inertial frame 22.
In FIG. 1A, the inertial frame 22 is illustrated by means of a cartesian coordinate system, e.g. a local Euclidean projection of a manifold defined by the Earth's surface. However, it is understood that the inertial frame 22 may be generally defined independently of a particular coordinate system, e.g. a spherical coordinate system, or other coordinate systems may be equally applicable.
In various embodiments, the lower manipulator 18 may be so configured based on measurements from an inertial measurement unit or IMU 40. In various embodiments, the IMU 40 may include one or more accelerometer(s), gyroscopic instrument(s), and/or other device(s) to measure angular and/or linear acceleration(s). The measurements may be indicative of accelerations of the lower manipulator 18 measured relative to the inertial frame 22.
In some embodiments, angular velocities and accelerations, with respect to the inertial frame 22, of a base 42 of the upper manipulator 20 used to couple the upper manipulator to the lower manipulator 18 (or support the former by the latter) may be maintained substantially similar to corresponding angular velocities and accelerations of the inertial frame 22. For example, angular velocities and accelerations may be defined with respect to Euler angles or with respect to the body itself (body axes). In some embodiments, the base 42 of the upper manipulator 20 may be maintained substantially motionless relative to the inertial frame 22 by the lower manipulator 18.
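As an illustration of how the lower manipulator 18 might keep the base 42 level, the following sketch computes a counter-rotation from measured roll, pitch and yaw. The IMU sign conventions, angle units and the alignment of yaw with the intended course are assumptions made for this sketch.

```python
# Illustrative sketch: counter-rotating the lower manipulator against the tug-boat's
# measured attitude so the upper manipulator's base stays level with the inertial frame.
import numpy as np

def leveling_command(imu_roll, imu_pitch, imu_yaw, course_heading):
    """Return roll/pitch/yaw (rad) the platform should apply relative to the tug deck."""
    cmd_roll = -imu_roll      # cancel measured roll
    cmd_pitch = -imu_pitch    # cancel measured pitch
    # keep the base aligned in yaw with the intended course rather than the hull,
    # wrapping the difference into [-pi, pi)
    cmd_yaw = (course_heading - imu_yaw + np.pi) % (2.0 * np.pi) - np.pi
    return cmd_roll, cmd_pitch, cmd_yaw
```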
In some embodiments, the upper manipulator may be at least partially orientable by rotating or orienting the base 42 of the upper manipulator. The upper manipulator may be rotatably coupled to the lower manipulator. In some embodiments, the base 42 of the upper manipulator 20 may refer to a portion allowing coupling between the upper manipulator 20 and the lower manipulator 18 and which is stationary relative to the lower manipulator 18, even though base 42 may be structurally integrated with (or in unitary construction with) the lower manipulator 18. For example, the upper manipulator 20 may be rotatably coupled to the lower manipulator 18 by a generally circular body, e.g. a sprocket or gear, extending from the upper manipulator 20 and engaging with a complementary (circular) body extending from the lower manipulator 18.
In some embodiments, a coupler 44 of the upper manipulator may be engaged with a rotating component aligned for rotation about a vertical axis and drivably coupled to a motor. The rotating component may include a sprocket, one or more gears, and/or a toothed belt (synchronous belt). In some embodiments, the base 42 may be part of the coupler 44.
The inertial frame 22 may refer to an earth-centric frame or a frame of reference moving substantially with the tug-boat 10, e.g. if transfer of the line 14 occurs at speed.
The upper manipulator 20 may define an end-effector 24 configured to be coupled to the line 14. The upper manipulator 20 may be configurable to position the end-effector 24. The upper manipulator 20 may be configurable separately or independently from the lower manipulator 18, and vice-versa.
Configuring the lower manipulator 18 or the upper manipulator 20 may include actuating one or more actuators of, respectively, the lower manipulator 18 or the upper manipulator 20. In various embodiments, actuation of actuators may include transmitting a signal to the actuators and/or receiving data indicative of a state of the actuators. The data may be received from an encoder or transducer.
In some embodiments, the upper manipulator 20 may include at least two links pivotably coupled to each other. A first link 46 may be coupled to the rotating coupler 44 connected to the lower manipulator 18. A second link 48 may be pivotably coupled to the first link 46. For example, the first link 46 and the second link 48 may be coupled by a revolute joint allowing rotation in at least one direction.
In some embodiments, the upper manipulator 20 may include a telescoping boom 50. In some embodiments, the end-effector 24 may be pivotably coupled to an (upper) end of the telescoping boom 50. The telescoping boom 50 may comprise a plurality of slidably coupled links that are nested within each other and coupled to each other via a series of prismatic joints. The end-effector 24 may be coupled to an end of the telescoping boom opposite to an end defined by the second link 48, which is coupled to the first link 46.
A photosensor device 26 of the LTS 12 may comprise one or more photosensors configured to be trained on the vessel 16 to detect a target position on the vessel 16. In various embodiments, the photosensor device 26 may be mounted on the end-effector 24 (including via mounts mounted on the end-effector 24) or on the telescoping boom 50 adjacent to the end-effector 24. In some embodiments, the photosensor device 26 may be (e.g. controllably) rotatably and/or translatably mounted on the end-effector 24 to allow positioning of the photosensor device 26 to train the photosensor on the vessel 16.
In various embodiments, the photosensor device 26 may include one or more of a LIDAR device, a 3D vision device such as an arrangement of a plurality of cameras configured for stereovision or stereoimaging or a single camera with a 3D vision machine learning model, an infrared camera such as a thermal infrared camera, or other electromagnetic sensor. For example, the photosensor device 26 equipped with a LIDAR device may be configured to generate a point cloud of distances and/or coordinates, from which objects may be detected. In various embodiments, stereovision systems may include two or more cameras configured to compute a 3D map/depth map of a scene commonly captured by the cameras from different angles/positions/distances.
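By way of example, the depth of a point seen by a rectified stereo camera pair follows the standard pinhole relation depth = f * B / d. The focal length, baseline and disparity values in the sketch below are placeholders; a practical system would obtain a dense disparity map from a stereo matcher rather than a single value.

```python
# Illustrative sketch: recovering depth from disparity for a rectified stereo pair.
def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Pinhole-stereo relation: depth = f * B / d."""
    if disparity_px <= 0.0:
        return float("inf")   # no valid match for this pixel
    return focal_length_px * baseline_m / disparity_px

# example: f = 1400 px, baseline = 0.3 m, disparity = 21 px  ->  depth = 20 m
depth_m = depth_from_disparity(21.0, 1400.0, 0.3)
```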
For example, the photosensor device 26 may be coupled to the end-effector 24 via a camera manipulator 38 to allow training of the one or more photosensors on the vessel 16. In some embodiments, the camera manipulator 38 may be connected to one or more processors. In some embodiments, the one or more processors may control the camera manipulator 38 to train the photosensor device 26 on to the vessel 16.
In some embodiments, the photosensor device 26 may be mounted without a camera manipulator, e.g. the photosensor device 26 may be positioned (or simply oriented) by movement of the end-effector 24.
In various embodiments, the target position may be the physical location of a target, such as a deck 28 (or objects thereon), a hull 30 (or a portion thereof exposed to the photosensor), bollards 32, fairleads 34, a seaman 36, and/or other objects on the vessel 16 or the deck 28 thereof.
In some embodiments, the target position may refer to everything in a photosensor's view (field of view, FoV, field of vision, or region of capture), e.g. everything in a photosensor's view that is common between photodetections (such as image captures, or LIDAR detections) at two or more separate and/or successive times.
The upper manipulator 20 may be configurable to position the end-effector 24 relative to the target position to allow transfer of the line 14 from the tug-boat 10 to the vessel 16. The end-effector 24 may hold the line 14 to allow transfer of the line 14 from the tug-boat to the vessel 16.
In various embodiments, the line 14 may be at least one of a towline or a messenger line. For example, a messenger line may be transferred from the vessel 16 to the tug-boat 10 and may be used to pull the towline between the tug-boat 10 and the vessel 16 by attaching to the towline. A messenger line may refer to a line that is attached to an end of another line to allow movement of the other line by the messenger line. The line 14 may be coupled to a winch 52 on the tug-boat 10 via an escort staple 54.
In some embodiments, the end-effector 24 may be coupled to the line 14 via a connector line 56. For example, the connector line 56 may be a cable, cord, rope, wire, strand, or other type of line adapted for adequate strength and flexibility. The end-effector may be attached to a first end 58 of the connector line 56. The connector line 56 may be coupled to a pulley 57. A second end 60 of the connector line 56 may be captively engaged with the line 14 for sliding of the second end 60 along the line 14 to allow the end-effector 24 to pull the line 14 via the connector line 56. For example, the second end 60 may include an eye 62 that is engaged with the line 14 to allow the line 14 to move at least partially freely through an opening of the eye 62.
The second end 60 (or the eye 62 thereof) may abut a retainer 64 attached to the line 14 while the end-effector 24 is pulling the line 14, via the connector line 56, to prevent disengagement of the line 14 from the connector line 56. For example, the retainer 64 may be a stop that prevents disengagement by stopping the line 14 from passing through the eye 62 beyond the stop. In some embodiments, the retainer 64 may be a sphere with a diameter larger than a diameter of the eye 62, thereby preventing the line 14 (attached to the eye 62) from completely passing through and out of the eye 62 at the second end 60. Thus, the line 14 may always remain engaged with the eye 62 during pulling of the line 14 by the end-effector 24.
The line 14 may define a loop 66 at a free-end of the line 14. The loop 66 may be used to engage the line 14 with a bollard of the tug-boat 10.
FIG. 2A is a perspective view of a deck 28 of the tug-boat 10 showing the line 14 during a deployment phase of a line transfer procedure, in accordance with an embodiment.
FIG. 2B is a perspective view of the deck 28 of the vessel 16 showing the line 14 during an operational phase of the line transfer procedure, in accordance with an embodiment.
During the deployment phase of the line transfer procedure, the tug-boat 10 may be brought near or proximal to the vessel 16. In various embodiments, the tug-boat 10 may be operable autonomously, semi-autonomously, and/or remotely to achieve proximity to the vessel 16. In various embodiments, the tug-boat 10 may be positioned at least a safe working distance away from the vessel 16, e.g. greater than 5 m. In various embodiments, the tug-boat 10 may be positioned such that the LTS 12 is capable of being configured (or configurable) to allow the end-effector 24 to be positioned to allow line transfer, e.g. adjacent to or above the deck 28 of the vessel 16. The tug-boat 10 may be positioned to allow extension of the LTS 12 over the vessel 16. In some embodiments, the tug-boat 10 may include one or more proximity sensors, e.g. a radar-based sensor.
In various embodiments, the LTS 12 may remain in a stowed configuration (or position) during the deployment phase.
The connector line 56 may be coupled (e.g. captively coupled to slide along the line 14) to the line 14 by drawing the line 14 through the eye 62. Advantageously, the connector line 56 may remain coupled to the line 14 during the deployment phase, e.g. regripping or attachment may not be required. The connector line 56 may be flexible and may be capable of sustaining tension to allow relative movement between the end-effector 24 and the line 14. Furthermore, relative sliding movement between the connector line 56 and the line 14 may be achieved via the eye 62. For example, relative motion between the line 14 and the connector line 56 may reduce a risk of breakage or stress on the line 14, connector line 56, and components connected thereto, as the tug-boat 10 is manoeuvring, the LTS 12 is changing orientation or otherwise moving (including accidentally), and/or the line 14 or the connector line 56 is undergoing motions.
As will be described later, when the LTS 12 is fully extended, the upper manipulator 20 may, depending on actuators used, reconfigure at a frequency different than a frequency at which the lower manipulator 18 reconfigures, e.g. in response to (feedback) detections via the photosensor device 26. Advantageously, relative motion allowed between the end effector 24 and the line 14 via the connector line 56 may prevent fatigue of tensioned lines and reduce the chance of catastrophic accidents, e.g. the line 14 being pulled apart by the end-effector 24.
After the tug-boat 10 is positioned proximal to the vessel 16, the LTS 12 may be extended to allow training of the photosensor device 26 on to the deck 28 of the vessel 16. In various embodiments, the extension and training of the photosensor device 26 may be performed autonomously, semi-autonomously, or by remote operation by a user. The LTS 12 may be extended by extension of the telescoping boom 50, and/or rotation of the first link 46, the second link 48, and/or the end-effector 24.
In some embodiments, the LTS 12 may include one or more proximity sensors positioned on the body of the LTS 12, e.g. on the end-effector 24, the telescoping boom 50, the first link 46, and/or the second link 48. The proximity sensor(s) may detect when an object is closer than a predetermined threshold, e.g. 5 m or lower, thereto. For example, collisions between the LTS 12 (and/or other components having proximity sensors) and the vessel 16 may be avoided or mitigated.
Proximity sensors may transmit data to one or more controller(s) 504. The one or more controller(s) 504 may control the LTS 12, or components thereof, and/or the tug-boat 10 to avoid collisions that could damage the vessel 16, the LTS 12 and/or the tug-boat 10. In some embodiments, the controller(s) 504 may sound an alarm. In some embodiments, the proximity sensors may be coupled directly to a sound generation system to generate an alarm.
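A trivial sketch of such a proximity response is shown below; the threshold value and the controller interface (sound_alarm, halt_lts_motion) are assumptions introduced for illustration.

```python
# Illustrative sketch: reacting to proximity readings below a safe-distance threshold.
SAFE_DISTANCE_M = 5.0   # assumed predetermined threshold

def check_proximity(sensor_readings_m, controller):
    closest = min(sensor_readings_m)
    if closest < SAFE_DISTANCE_M:
        controller.sound_alarm()
        controller.halt_lts_motion()   # stop or retract to avoid contact with the vessel
    return closest
```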
In various embodiments, the camera manipulator 38 (or camera mount, e.g. pan-tilt mount) may be controllably coupled to the controller(s) 504 to allow orientation and/or translation of the photosensor device 26 by reconfigurations of the camera manipulator 38. In some embodiments, the controller(s) 504 may be operable to orient the photosensor device 26 so as to enable training of the photosensor device 26 on the vessel 16, e.g. the deck 28 of the vessel 16, the hull 30 of the vessel 16, or other (distinguishable) parts of the vessel 16.
In various embodiments, training photosensors and/or training the photosensor device 26 may include orienting photosensors of the device such that the field of view of the photosensors includes the vessel 16 and/or a distinguishable part thereof, such as the deck 28, or objects on the deck 28.
In various embodiments, photosensor(s) may be trained on one or more targets. In various embodiments, the target may be the vessel 16 itself or some portion of the vessel 16. In some embodiments, the target may be or may include a marking (e.g. a bright marking) on the deck 28, fairleads 34, bollards 32, a seaman 36, an edge of the deck 28, a surface of the deck 28, a side of the hull 30, one or more lights on the vessel 16 and/or the deck 28 thereof, and/or an outline or outer seam of the vessel 16. In some embodiments, the target may be moving. The target may have a specific position (target position), e.g. three-dimensional coordinates, that may be defined relative to the photosensors of the photosensor device 26. The target position may be changing, in general, e.g. due to movement of the target relative to the inertial frame and the vessel 16, and dynamic motions of the vessel 16 and the tug-boat 10, e.g. due to sea conditions, wind, and/or self-propulsion.
During the operational phase, one or more processors of the controller(s) 504 may track the target position on the vessel 16 relative to the end-effector 24 based on output of the photosensors. Feedback from the photosensors may be provided to the controller(s) 504. The feedback from the photosensors of the photosensor device 26 may be used, e.g. by the controller(s) 504, to determine the relative distance between a target position on the vessel 16 and the end-effector 24. For example, LIDAR distances may be used as input to a system to determine the relative distance (e.g. a vectorial distance) between the photosensor and the target on the vessel 16.
In some embodiments, a user may trigger the operational phase. Such a triggering may switch on the photosensor device 26 and/or a machine vision system.
In various embodiments, tracking of the target position may include one or more of: determining a position of the target once, determining it (repeatedly) at a predetermined frequency, determining it at prescribed or predetermined times, determining it based on detected movements of the target, or determining it substantially continuously. A position may include a vectorial position.
The upper manipulator 20 may be configured based on the relative distance to keep a predetermined distance or distance relationship (such as a function of time or other variables) between the target position and the end-effector 24. Having such a predetermined distance or distance relationship may facilitate transfer of the line 14. For example, the error or deviation between the measured relative distance and the predetermined distance or distance relationship may be initially large.
In various embodiments, the predetermined distance or distance relationship may be prescribed so as to cause the end-effector 24 and/or a free end of the line 14 (distal from tug-boat 10 and proximal to the vessel 16), such as the loop 66, to come into proximity of the seaman 36 to allow the seaman 36 to engage the line with the vessel 16, such as via the fairleads 34 and/or the bollards 32. In various embodiments, the predetermined distance or distance relationship may be based on a length of the line 14, the loop 66, and/or the connector line 56.
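As an illustration, a simple proportional correction could drive the measured separation toward such a predetermined offset. The gain and offset values below are assumptions, and the separation vector is taken as the target position minus the end-effector position.

```python
# Illustrative sketch: an end-effector velocity command that reduces the deviation
# between the measured separation and a predetermined offset. Values are placeholders.
import numpy as np

def positioning_command(separation_vector, desired_offset, gain=0.5):
    deviation = separation_vector - desired_offset   # error, metres
    return gain * deviation                          # proportional velocity command, m/s

# example: hold the end-effector 2 m above the target (z-axis up, so the desired
# separation from the end-effector to the target is -2 m in z)
desired_offset = np.array([0.0, 0.0, -2.0])
```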
While the upper manipulator 20 is being configured to maintain a constant distance between the target position and the end-effector 24, the camera manipulator 38 may be configured to remain trained on the target position, e.g. by compensating for movements of the end-effector 24, the telescoping boom 50, the first link 46, and/or the second link 48, and/or other components of the LTS 12.
In some embodiments, the photosensor device 26 may be trained on the vessel 16 autonomously, semi-autonomously, or remotely by a user.
As the LTS 12 moves from a stowed position to an extended position, the end-effector 24 generally moves away from the tug-boat 10 and the winch 52. As the end-effector 24 moves away from the tug-boat 10, the connector line 56 generally moves away from the tug-boat 10. As it does so, the eye 62 slides captively along the line 14, causing the connector line 56 to slidably move along the line 14 via slidable engagement of the eye 62, until the eye 62 hits or abuts the retainer 64. When the eye 62 abuts the retainer 64, the line 14, and particularly the loop 66, is moved away (or lifted up and away) from the tug-boat 10 towards the vessel 16.
As the line 14 is being drawn up and away from the tug-boat 10, the line 14 is tensioned against a corner of the staple 54 to apply a force thereto. The staple 54 may be a structure on the tug-boat 10 disposed between the winch 52 and the vessel 16 and configured to captively retain a portion of the line 14, e.g. under a lower portion of the staple 54. Advantageously, this may allow the line to be guided and may prevent the line from coming off the winch 52 at an angle, which may cause damage or injury. In various embodiments, the staple 54 may include an attachment portion for attaching to the tug-boat 10 while allowing the line 14 to pass (or not obstructing the line 14 as it passes) under the staple 54. In various embodiments, the staple 54 may include an elongated portion extending substantially parallel to the tug-boat 10 above a deck thereof away from the attachment portion, the line abutting against the elongated portion.
The relative movement between the connector line 56 and the line 14, as well as the relative movement between the end-effector 24 and the connector line 56 may be advantageous. For example, in some cases, the movement or response of the second end 60 as the end-effector 24 reconfigures in response to the target position may be relatively slow, damped, or delayed (compared to the movement of the end-effector 24), mitigating jerkiness in the loop 66 portion. For example, in some cases, handling of the loop 66 by a user may be easier. Furthermore, in some cases, safety may be improved, e.g. injury to the user by movement of the end-effector 24 may be mitigated.
During both the deployment and operational phases, the lower manipulator 18 may be configured to keep the upper manipulator 20 relatively stationary with respect to the inertial frame, or in relatively fixed orientation with respect to the inertial frame, particularly during manipulation of the upper manipulator 20.
Having separate manipulators, the lower manipulator 18 and the upper manipulator 20, may be advantageous. For example, the time-scale of the dynamics of the tug-boat 10 may be different than the time-scale of the dynamics of the vessel 16, particularly in rough seas. The vibrational dynamics of the upper manipulator 20 may have a separate time-scale. The actuators of the lower manipulator 18 and the upper manipulator 20 may have different feasible envelopes for actuation (rates) and may have different capabilities with respect to precision (of positioning, motion, and/or movement). The sampling, sensing, or detecting frequency of the inertial measurement unit (IMU) and the machine vision system coupled to the photosensor device 26 may be different, e.g. characteristic frequencies associated with these components may be separated by several orders of magnitude. Under the disparate time-scales imposed by constraints such as sea conditions, sensor technology, and the internal dynamics associated with vibrations of the upper manipulator 20, it is found that effective line transfer may advantageously be achieved by separating the motion compensation based on the IMU from positioning of the end-effector 24 relative to the vessel 16 for line transfer.
It is understood that, in some embodiments, the lower manipulator 18 may provide some amount of compensation based on photosensor output to facilitate positioning of the end-effector 24. However, such compensation may not generally be sufficient to achieve positioning of the end-effector 24 and/or may only augment positioning of the end-effector 24 by the upper manipulator 20. Similarly, it is understood that, in some embodiments, the upper manipulator 20 may provide some amount of compensating control based on IMU output to facilitate motion compensation, even though it may be insufficient to achieve full motion compensation.
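By way of non-limiting illustration only, the following simplified Python sketch shows one way the two compensation loops could be scheduled at different rates: a fast loop driving the lower manipulator from IMU output and a slower loop driving the upper manipulator from photosensor output. The function and object names, and the rates, are assumptions made for illustration and are not taken from this disclosure.

    import time

    IMU_PERIOD = 0.01        # e.g. a 100 Hz motion-compensation loop (assumed rate)
    VISION_PERIOD = 1.0      # e.g. a 1 Hz vision-based positioning loop (assumed rate)

    def run_line_transfer_controller(imu, vision, lower_manipulator, upper_manipulator):
        last_vision_update = 0.0
        while True:
            now = time.monotonic()

            # Fast loop: hold the upper manipulator steady in the inertial frame
            # by counter-rotating the lower manipulator against sensed roll/pitch/yaw.
            roll, pitch, yaw = imu.read_orientation()
            lower_manipulator.command_orientation(-roll, -pitch, -yaw)

            # Slow loop: reposition the end-effector relative to the tracked target.
            if now - last_vision_update >= VISION_PERIOD:
                target_xyz = vision.read_target_position()   # target in the manipulator frame
                upper_manipulator.command_end_effector(target_xyz)
                last_vision_update = now

            time.sleep(IMU_PERIOD)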
FIG. 3A is a side elevation view of the upper manipulator 20 coupled to the lower manipulator 18 and in a stowed configuration, in accordance with an embodiment.
FIG. 3B is a side elevation view of the upper manipulator 20 coupled to the lower manipulator 18 and in a partially extended configuration, in accordance with an embodiment.
FIG. 3C is a side elevation view of the upper manipulator 20 coupled to the lower manipulator 18 and in a fully extended configuration, in accordance with an embodiment.
FIG. 3D is a plan view of the upper manipulator 20 coupled to the lower manipulator 18 and in a fully extended configuration, in accordance with an embodiment.
In FIGS. 3A-3D, the double-headed arrows show example motions of the various components and/or configurations of the LTS 12.
For example, the telescoping boom 50 may include a plurality of links 48A, 48B, 48C, 48D, 48E, 48F (e.g. up to six links or more) that are slidably engaged with each other. In some embodiments, the plurality of links 48A-48F may be nested in series and may be sequentially coupled by means of prismatic joints to allow sliding between adjacent links.
In some embodiments, the first link 46 may be pivotably or rotatably (e.g. about a single axis) coupled to the second link 48. The second link 48 may be rotatably drivable relative to the first link 46 by an actuator 68 connecting the first link 46 to the second link 48.
In some embodiments, the coupler 44 may be pivotably or rotatably (e.g. about a single axis) coupled to the first link 46. The first link 46 may be rotatably drivable relative to the coupler 44 by an actuator 70 connecting the coupler 44 to the first link 46.
In some embodiments, the link 48F may be pivotably or rotatably coupled (e.g. about a single axis) to the end-effector 24. The end-effector 24 may be rotatably drivable relative to the link 48F by an actuator 71 connecting the link 48F to the end-effector 24.
In some embodiments, at least one of the actuators 68, 70, 71 may be a linear actuator. In some embodiments, at least one of the actuators 68, 70, 71 may be hydraulically or pneumatically powered.
The lower manipulator 18 may include a platform 72 upon which the upper manipulator 20 may rest or be at least partially (or completely) supported. The lower manipulator 18 may include a plurality of actuators 74.
The coupler 44 may be drivably rotatable on the platform 72 by a motor 78.
The stowed configuration may be particularly advantageous as it may allow the tug-boat to operate without obstructions or with fewer obstructions when not conducting line transfer operations.
It is understood that multiple configurations of the upper manipulator 20 may lead to a similar position and orientation (pose) of the end-effector 24. Given a desired position and orientation of the end-effector 24, an example configuration may be determined by methods of inverse kinematics.
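For example, for an idealized two-link planar arm (a simplification offered only to illustrate the notion of inverse kinematics, not the actual linkage described herein), joint angles for a desired end-effector position may be computed analytically, as in the following Python sketch with assumed link lengths l1 and l2:

    import math

    def planar_two_link_ik(x, y, l1, l2):
        """Return one (shoulder, elbow) joint-angle solution, in radians, placing
        the tip of an idealized two-link planar arm at the point (x, y)."""
        d2 = x * x + y * y
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if abs(cos_elbow) > 1.0:
            raise ValueError("target out of reach")
        elbow = math.acos(cos_elbow)                      # elbow-down solution
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

A real manipulator with more joints generally admits many such solutions for a given end-effector pose, and a numerical or optimization-based inverse kinematics solver may be used to select among them.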
FIG. 4A is a perspective view of the lower manipulator 18, in accordance with an embodiment.
FIG. 4B is a front elevation view of the lower manipulator 18, in accordance with an embodiment.
The lower manipulator 18 may be a Stewart platform. The lower manipulator 18 may include a portion 76 to couple to the base 42 of the upper manipulator 20. For example, the portion 76 may be circular. In some embodiments, the portion 76 may be a toothed wheel, e.g. a sprocket or gear, and may be coupled with complementary teeth/grooves defined in the base 42 of the upper manipulator 20 to allow rotatably drivable coupling between the lower manipulator 18 and the upper manipulator 20.
The lower manipulator 18 may include a plurality of actuators 74A, 74B, 74C, 74D, 74E, 74F coupled to the platform 72, e.g. coupled in parallel to the platform 72 or an underside thereof. In some embodiments, each of the actuators 74A-74F may be coupled to the platform 72 via a corresponding universal joint.
The actuators 74A-74F may be six actuators arranged in three pairs of actuators. The pairs of actuators 74A, 74B (or 74C, 74D; or 74E, 74F) may be inclined towards each other at the platform 72 and may form two splayed legs extending outwardly from the platform 72.
In some embodiments, the actuators 74A-74F may be linear actuators. In some embodiments, the actuators 74A-74F may be hydraulically or pneumatically powered.
The actuators 74A-74F may allow orientation of the platform 72 about three orthogonal axes. Furthermore, the actuators 74A-74F may allow a limited amount of translation of the platform 72 in three orthogonal directions.
In various embodiments, the lower manipulator 18 may be a six degree-of-freedom compensator (rotation about and translation along three orthogonal axes), a three degree-of-freedom compensator (e.g. rotation about three orthogonal axes), or at least a two degree-of-freedom compensator (e.g. rotation about two orthogonal axes).
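For a generic six-actuator Stewart platform (offered only as a simplified model with assumed attachment-point coordinates, not the specific geometry of the lower manipulator 18), the actuator lengths needed to realize a commanded platform pose follow directly from the geometry: each leg length is the distance between its base attachment point and its platform attachment point after the pose is applied. A minimal Python sketch:

    import numpy as np

    def rotation_matrix(roll, pitch, yaw):
        """ZYX Euler rotation matrix (angles in radians)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return rz @ ry @ rx

    def stewart_leg_lengths(base_pts, platform_pts, translation, roll, pitch, yaw):
        """Inverse kinematics of a Stewart platform: length of each of the six legs
        for a commanded platform pose. base_pts and platform_pts are 6x3 arrays of
        attachment points in the base frame and platform frame respectively."""
        R = rotation_matrix(roll, pitch, yaw)
        platform_world = (R @ platform_pts.T).T + np.asarray(translation)
        return np.linalg.norm(platform_world - base_pts, axis=1)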
In some embodiments, the lower manipulator 18 may be a serial manipulator rather than a Stewart platform.
FIG. 5 is a schematic block diagram of a control system 500 of the LTS 12, in accordance with an embodiment.
The controller(s) 504 may be connected to one or more photosensors 502 for receiving photosensor output therefrom. For example, the photosensors 502 may be photosensors of a LIDAR system (3D LIDAR scanner) or a plurality of cameras configured for stereovision. Advantageously, a LIDAR system may allow operation in fog and rain, which may be generally associated with rough sea conditions.
As described herein, "photosensor" may include components used to process transducer output, e.g. one or more processor(s) configured to compute distances based on emitted light received back on to an electromagnetic transducer, as in LIDAR systems.
In various embodiments, a sampling frequency of the photosensors 502 may be about 1 Hz.
The controller(s) 504 may be connected to upper manipulator actuator(s) 506, e.g. actuators 68, 70, 71, and/or the motor 78.
The controller(s) 504 may be connected to an inertial measurement unit or IMU 508 for receiving measurements of position and/or orientation (and/or rates thereof) relative to the inertial frame 22.
In some embodiments, the IMU 508 may include an accelerometer. The accelerometer may be a multiaxial accelerometer.
In some embodiments, the IMU 508 may include a gyroscope. The gyroscope may measure orientation based on the principles of conservation of angular momentum. For example, a mechanical gyroscope may include a spinning wheel or disk whose axle is free to take any orientation. This orientation changes much less in response to a given external torque than it would without the large angular momentum associated with the gyroscope's high rate of spin. Since external torque is minimized by mounting the device in gimbals, its orientation remains nearly fixed, regardless of any motion of the platform on which it is mounted.
Gyroscopes may be based on other operating principles; e.g. gyroscopes may include electronic, microchip-packaged MEMS gyroscope devices as may be found in consumer electronic devices, solid-state ring lasers and fibre-optic gyroscopes, and quantum gyroscopes, which may be particularly sensitive.
In some embodiments, the IMU 508 may include an inclinometer. The inclinometer (or clinometer) may be configured to measure angles of slope (or tilt), elevation or inclination of an object with respect to gravity. Inclinometers may also be referred to, or include, or be similar to tilt meters, tilt indicators, slope alerts, slope gauges, gradient meters, gradiometers, level gauges, level meters, declinometers, and pitch & roll indicators. In various embodiments, clinometers may measure both inclines (positive slopes, as seen by an observer looking upwards) and declines (negative slopes, as seen by an observer looking downward).
In various embodiments, the IMU 508 may have a sampling frequency of about 100 Hz, or about 100 times the sampling frequency of the photosensor(s) 502. For example, the IMU 508 may have an internal sampling frequency of about 1000 Hz and an output frequency of 100 Hz, e.g. due to digital signal processing (such as Kalman Filtering). In some embodiments, IMU 508 may have an internal sampling frequency of up to 10 kHz. In some embodiments, the output frequency may be substantially the same as the internal sampling frequency.
The controller(s) 504 may be connected to lower manipulator actuator(s) 510, e.g. actuators 74A-74F.
The controller(s) 504 may be connected to upper manipulator sensor(s) 512 and lower manipulator sensor(s) 514. The upper manipulator sensor(s) 512 and lower manipulator sensor(s) 514 may include optical encoders, linear motion sensors, accelerometers, and/or inclinometers.
In various embodiments, user input from a terminal 518 may be provided to the controller(s) 504. For example, the user input may be used to position the end-effector 24 during a deployment phase. For example, the terminal 518 may be a graphical user interface (GUI).
In various embodiments, the controller(s) 504 may receive output from photosensor(s) 502 and actuate the upper manipulator actuator(s) 506 based on this output. Similarly, the controller(s) 504 may receive output from the inertial measurement unit 508 and actuate the lower manipulator actuator(s) 510 based on this output.
The controller(s) 504 may also actuate a photosensor mount 516 based on received input to train the photosensor(s) on to the vessel 16.
FIG. 6 is a schematic block diagram of a lower manipulator control system 600 of the LTS 12, in accordance with an embodiment.
In some embodiments, an accelerometer 601 (which may include one or more accelerometers and/or gyroscopic devices) may generate data indicative of accelerations relative to the inertial frame 22. Such accelerations may include linear and/or angular accelerations. In some embodiments, one or more sensor(s) 603 may provide data indicative of positions and velocities, e.g. Euler angles and rotational velocities. The accelerometer 601 and/or sensor(s) 603 may generate sensed data 602. The sensed data 602, and data indicative of one or more reference signal(s) 604, e.g. data indicative of target accelerations, which may be specified by a user input in a terminal 518, 518A, may be provided to an outer loop controller 606. The outer loop controller 606 may generate a target or desired configuration of the platform orientation (based on the reference signal(s) 604) or a rate thereof based on inverse kinematics and data input to the outer loop controller 606.
The target configuration may be received by a lower manipulator (LM) actuator control 608, which may determine actuation of actuators of the lower manipulator 18 based on the target configuration, e.g. hydraulic and/or electromechanical actuators. For example, LM actuator control signal(s) 610 may be determined. The LM actuator control signal(s) 610 may be used to vary a position or state of actuator(s) of the lower manipulator 18. For example, the LM actuator control signal(s) 610 may include voltages applied to one or more hydraulic actuators.
The outer loop controller 606 may determine the lower manipulator actuators' length set-points and/or rate set-points. In various embodiments, control gain(s) may at least partially determine outputs of the outer loop controller 606. In various embodiments, the control gain(s) may be fixed values tuned to give a desired response to changing actuator length and/or rate set-points. For example, a desired response may be determined based on settling time and oscillation. A desired response may be characterized as settling rapidly and having low oscillation. In various embodiments, control gains may be determined based on gain scheduling and/or by methods of adaptive control.
In various embodiments, for example, the reference signal(s) 604 may include target rotational accelerations that are zero, and free (or uncontrolled) translational accelerations, to achieve motion compensation.
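As a non-limiting sketch of how the outer loop controller 606 might convert sensed motion into actuator set-points, the following Python fragment regulates platform roll and pitch toward a reference (here zero, i.e. motion compensation) and maps the commanded pose to leg-length set-points. The proportional/rate structure, the gains, and the names are illustrative assumptions and, for simplicity, angles rather than accelerations are regulated; this is not the actual control law of this disclosure.

    def outer_loop_step(sensed, reference, gains, stewart_ik):
        """One outer-loop step for the lower manipulator.

        sensed / reference: dicts with 'roll', 'pitch' (rad) and 'roll_rate',
        'pitch_rate' (rad/s); gains: dict with proportional 'kp' and rate 'kd' gains;
        stewart_ik: function mapping a commanded (roll, pitch) to six leg lengths.
        """
        cmd_roll = gains["kp"] * (reference["roll"] - sensed["roll"]) \
                   - gains["kd"] * sensed["roll_rate"]
        cmd_pitch = gains["kp"] * (reference["pitch"] - sensed["pitch"]) \
                    - gains["kd"] * sensed["pitch_rate"]
        # Leg-length set-points passed on to the LM actuator control 608.
        return stewart_ik(cmd_roll, cmd_pitch)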
FIG. 7 is a schematic block diagram of an upper manipulator controller system 700 of the LTS 12, in accordance with an embodiment.
A photosensor may generate data (photosensor output 702) indicative of a field of view and/or a point cloud, e.g. a point cloud with associated distances, including a target 701.
A machine vision system 704 may receive the photosensor output 702, and may be configured to generate coordinates 706 of the target 701 based on the photosensor output 702. As mentioned earlier, coordinates 706 of the target 701 may be provided relative to the tug-boat 10 or the upper manipulator 20. In various embodiments, coordinates 706 may include partial coordinates. The machine vision system 704 may track a position of the target by generating coordinates of the target as the target moves relative to the photosensor and/or at predetermined time intervals (i.e. at a given frequency).
In various embodiments, tracking of the target by the machine vision system 704 may include video tracking and/or motion capture. In some embodiments, as described later, tracking of the target may be used for controlling one or more actuators. For example, visual servoing may be achieved.
An outer loop controller 710 may receive the coordinates 706 and target coordinates 708. The target coordinates 708 may be provided by a user via a terminal 518, 518B. For example, the target coordinates 708 may be fixed coordinates. The outer loop controller 710 may generate a target or desired configuration of the upper manipulator 20 to achieve the target coordinates 708. For example, the outer loop controller 710 may include a model for inverse kinematics.
The target configuration may be received by an upper manipulator (UM) actuator control 710, which may determine actuation of actuators of the upper manipulator 20 based on the target configuration, e.g. hydraulic and/or electromechanical actuators. For example, UM actuator control signal(s) 712 may be determined. The UM actuator control signal(s) 712 may be used to vary a position or state of actuator(s) of the upper manipulator 20.
For example, the UM actuator control signal(s) 712 may include voltages applied to one or more hydraulic actuators.
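The following Python fragment, again purely illustrative and using assumed names, sketches a position-based visual-servoing step for the upper manipulator: the tracked target coordinates and a prescribed offset define a desired end-effector position, and an inverse kinematics solver converts the commanded position into joint set-points for the UM actuator control.

    import numpy as np

    def visual_servo_step(target_xyz, desired_offset_xyz, current_ee_xyz, gain, ik_solver):
        """Position-based visual servoing: move the end-effector so that it holds a
        prescribed offset from the tracked target. All coordinates are expressed in
        the (motion-compensated) upper-manipulator frame."""
        desired_ee_xyz = np.asarray(target_xyz) + np.asarray(desired_offset_xyz)
        error = desired_ee_xyz - np.asarray(current_ee_xyz)
        commanded_ee_xyz = np.asarray(current_ee_xyz) + gain * error   # proportional step
        return ik_solver(commanded_ee_xyz)   # joint/boom set-points for the UM actuators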
In various embodiments, the machine vision system 704 may process data in accordance with a machine learning model, e.g. a supervised machine learning model.
In some embodiments, the machine vision system 704 may include or implement one or more image segmentation methods, including a process of dividing an image into different parts (such as ship and ocean) representing different objects.
In some embodiments, the machine vision system 704 may include or implement robust statistic methods, using robust estimations tolerant to outliers.
In some embodiments, the machine vision system 704 may include or implement RANSAC (RANdom SAmple Consensus). RANSAC is a robust estimation method used in computer vision.
In some embodiments, the machine vision system 704 may include or implement one or more Kalman filters, using measurements that are observed over time that contain noise (random variations) and other inaccuracies, to produce values that tend to be closer to the true values of the measurements and their associated calculated values.
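For readers unfamiliar with Kalman filtering, a minimal one-dimensional example is given below; it is illustrative only, and the filters actually employed would typically be multi-dimensional.

    def kalman_1d(estimate, variance, measurement, process_var, measurement_var):
        """One predict/update cycle of a scalar Kalman filter tracking a slowly
        varying quantity (e.g. one coordinate of the target position)."""
        # Predict: the state is assumed constant, so only its uncertainty grows.
        variance += process_var
        # Update: blend prediction and noisy measurement by their relative confidence.
        kalman_gain = variance / (variance + measurement_var)
        estimate += kalman_gain * (measurement - estimate)
        variance *= (1.0 - kalman_gain)
        return estimate, variance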
In some embodiments, the machine vision system 704 may include or implement PnP/P3P (Perspective-n-Point), which may include determining the pose of a 3D object based on its 2D image with a limited number of reference points.
In some embodiments, the machine vision system 704 may include or implement texture segmentation, including segmentation of an image based on differences in the visual appearance (i.e. texture) of the different objects in the image.
In some embodiments, the machine vision system 704 may include or implement filter-based texture segmentation (FBTS), which may be based on an assumption that visual differences (i.e. the textures of the objects) have different statistical properties. By applying suitable filters (which depend on the textures), important properties for separating the objects may be enhanced, while less important visual properties may be suppressed. The image may be segmented by comparing a histogram of filter responses for the different objects.
In some embodiments, the machine vision system 704 may be used to divide an image into regions containing ships and ocean, e.g. using filter-based texture segmentation. The visual appearance of ships and of ocean, viewed from the sensor position, may be very different. The visual appearance of the ocean may be a rather large homogenous region with similar brightness and few or no edges, while the visual appearance of the ship deck may have a lot of brightness variation and a lot of edges. Filter-based segmentation may detect whether ships or ocean are present in the image; furthermore, it may classify the detected region as ship or ocean. The output of the ship/ocean segmentation may be the current location of the ship and the ocean. The previous position of the ship/ocean may be used as a starting seed for the segmentation of the current frame.
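Purely as an illustrative sketch of filter-based texture classification (using an assumed Gabor filter bank and, as a simplification, per-filter response statistics rather than full histograms), ship-like and ocean-like patches could be distinguished as follows in Python with OpenCV:

    import cv2
    import numpy as np

    def gabor_bank(ksize=21, sigma=4.0, lambd=10.0):
        """A small bank of Gabor filters at four orientations."""
        return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma=0.5)
                for theta in np.linspace(0, np.pi, 4, endpoint=False)]

    def texture_features(gray_patch, bank):
        """Per-filter mean absolute response and standard deviation; deck-like and
        ocean-like patches typically yield distinguishable feature vectors."""
        patch = gray_patch.astype(np.float32) / 255.0
        feats = []
        for k in bank:
            r = cv2.filter2D(patch, cv2.CV_32F, k)
            feats.extend([float(np.mean(np.abs(r))), float(np.std(r))])
        return np.array(feats)

    def looks_like_ship(patch, ship_ref, ocean_ref, bank):
        """Classify a patch by the nearest reference feature vector (assumed references)."""
        f = texture_features(patch, bank)
        return np.linalg.norm(f - ship_ref) < np.linalg.norm(f - ocean_ref)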
In some embodiments, the machine vision system 704 may include or implement Model Predictive Control (MPC) or another multivariable control algorithm. For example, a multivariable control algorithm may be based on an internal dynamic model of the process, a history of past control moves, and an optimization cost function J over a receding prediction horizon, to calculate the optimum control moves.
In some embodiments, the machine vision system 704 may include or may implement SLAM (Simultaneous Localization and Mapping). SLAM may include a group of techniques to localize an object (e.g. the target) while simultaneously building a map of the environment. Visual SLAM may refer to using one or more cameras as input sensors to estimate locations and maps.
In some embodiments, the machine vision system 704 may include or may implement SFM (Structure From Motion), which may include a group of techniques where multiple images from one or more cameras are used to create 2D or 3D structure.
In some embodiments, the machine vision system 704 may include or may implement ray tracing. Ray tracing is a process of extending a ray (vector) from a focal point of a camera through a given 2D image coordinate, given the relative pose between a camera and an object, including the internal camera parameters. The goal of ray tracing may include finding the intersection coordinate (3D) with the object, if it exists. Internal camera parameters may include focal length, image centre, resolution, and/or lens distortion parameters (radial distortion, tangential distortion).
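As an illustration of the back-projection step described above (assuming an ideal pinhole camera and neglecting lens distortion; the names and the plane-intersection helper are illustrative assumptions), a 2D pixel coordinate may be converted into a ray in the camera frame and intersected with a plane as follows:

    import numpy as np

    def pixel_to_ray(u, v, fx, fy, cx, cy):
        """Back-project pixel (u, v) through a pinhole camera with focal lengths
        (fx, fy) and principal point (cx, cy); returns a unit ray in the camera frame."""
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        return ray / np.linalg.norm(ray)

    def intersect_ray_with_plane(origin, ray, plane_point, plane_normal):
        """Intersect the ray (from the camera origin) with a plane, e.g. an assumed
        deck plane of the vessel; returns the 3D point, or None if no forward intersection."""
        denom = np.dot(ray, plane_normal)
        if abs(denom) < 1e-9:
            return None
        t = np.dot(np.asarray(plane_point) - np.asarray(origin), plane_normal) / denom
        return np.asarray(origin) + t * np.asarray(ray) if t >= 0 else None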
In some embodiments, the machine vision system 704 may implement an adaptive tracking mode where the distance from the end-effector 24 to the vessel 16 may be determined using a measurement device. For example, in some embodiments, ship segmentation may be used in order for the measurement device to always be directed toward the vessel 16 (and not toward the ocean) and to determine a current location of the ship. In various embodiments, the orientation of the measurement device is updated such that it is directed toward a position on the ship close to the current position of the target.
In some embodiments, systems and methods for machine vision may include manually extracting a bounding box around a target (e.g. a bollard, fairlead, deck, and/or the hull), or using a target detector system to find the target. The center of the bounding box may be assumed to be the center of the target. In some embodiments, systems and methods for machine vision may include computing feature points on the target. To avoid boundary problems only features well inside the bounding box may be considered. In some embodiments, systems and methods for machine vision may include validating feature points to determine whether enough good feature points have been found in the bounding box. If enough good feature points cannot be found on the target, then it may not be tracked and other feature points may be used.
In some embodiments, systems and methods for machine vision may include collecting/receiving/providing a new (image) frame t (at a discrete time t). An image frame and feature points from the previous step may be stored and denoted t-1. In some embodiments, systems and methods for machine vision may include image validation to decide whether the image is good enough for further processing; basic image quality measurements may be carried out for detection of bad image quality due to, for example, motion blur and/or over/under exposure. In some embodiments, systems and methods for machine vision may include finding feature points found in the previous image (t-1) in the current image t. Some of the feature points present in image t-1 may not be found in image t. In some embodiments, systems and methods for machine vision may include estimating whether sufficiently good features are found, e.g. whether the number of features found is sufficient to estimate the motion of the target between the two frames. In some embodiments, systems and methods for machine vision may include estimating 2D and/or 3D motion (rotation and translation) between t-1 and t using the matched feature points and a Kalman filter. The estimation may be computed using robust statistic methods such as RANSAC (i.e. a few outliers in the data set have little influence on the estimation). The estimation may include using a 3x3 matrix that represents the rigid-body transformation of the target. In some embodiments, systems and methods for machine vision may include verifying the transformation against the movements of the individual feature points between t-1 and t, e.g. that the mean squared error (or the median squared error), MSE, is sufficiently small. In some embodiments, systems and methods for machine vision may include adding newly detected feature points within a current bounding box of the target. Some features may be lost during tracking of the target; therefore, it may be necessary to add new features in each step.
In some embodiments, systems and methods for machine vision may include using correlation as a recovery strategy if too few features are found and matched for determination of the current position of the target. The last known location of the target may be correlated with the current image to detect and determine the current position of the target.
In some embodiments, systems and methods for machine vision may include using Feature Motion Clustering as a recovery strategy if the feature points are (e.g. found to be) moving inconsistently. In some cases, recovery may be used when some feature points are detected on parts of the vessel 16 other than the target. Feature Motion Clustering may result in two types of feature point motion: one set of feature points that are moving along with the target and one set which are moving along with the vessel 16 but not with the target. By clustering the feature motion into two clusters, the motion of the target may be determined. In some embodiments, systems and methods for machine vision may include using transformation validation to decide whether the feature motion clustering recovery was successful.
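The per-frame tracking step described above might look roughly like the following Python/OpenCV sketch. It is a simplified, assumed implementation using pyramidal Lucas-Kanade optical flow and a RANSAC-estimated 2D transform (rather than the full 3D estimation and Kalman filtering described above), and it is not the actual production pipeline.

    import cv2
    import numpy as np

    def track_target_step(prev_gray, curr_gray, prev_pts, bbox):
        """Track feature points of the target from frame t-1 to frame t and estimate
        the target's 2D motion. prev_pts is an Nx1x2 float32 array of feature points;
        bbox = (x, y, w, h) is the current bounding box of the target."""
        # Find the previous features in the current frame (pyramidal Lucas-Kanade).
        curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
        good_prev = prev_pts[status.ravel() == 1]
        good_curr = curr_pts[status.ravel() == 1]
        if len(good_curr) < 8:
            return None, good_curr        # too few matches; caller should run a recovery strategy

        # Robustly estimate the motion of the target; RANSAC rejects outlier features.
        transform, _ = cv2.estimateAffinePartial2D(good_prev, good_curr, method=cv2.RANSAC)

        # Replenish features lost during tracking, restricted to the bounding box.
        x, y, w, h = bbox
        mask = np.zeros_like(curr_gray)
        mask[y:y + h, x:x + w] = 255
        new_pts = cv2.goodFeaturesToTrack(curr_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=7, mask=mask)
        all_pts = new_pts if new_pts is not None else good_curr.reshape(-1, 1, 2)
        return transform, all_pts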
FIG. 8 illustrates a block diagram of a computing device 800, in accordance with an embodiment of the present application.
As an example, one or more of the system(s) 500, 600, 700, the terminal 518, and/or the controller(s) 504 may be implemented using the example computing device 800 of FIG. 8.
The computing device 800 includes at least one processor 802, memory 804, at least one I/O interface 806, and at least one network communication interface 808.
The processor 802 may be a microprocessor or microcontroller, a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, a programmable read-only memory (PROM), or combinations thereof.
The memory 804 may include computer memory that is located either internally or externally such as, for example, random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and/or ferroelectric RAM (FRAM).
The I/O interface 806 may enable the computing device 800 to interconnect with one or more input devices, such as a keyboard, mouse, camera, touch screen and a microphone, or with one or more output devices such as a display screen and a speaker.
The networking interface 808 may be configured to receive and transmit data sets representative of the machine learning models, for example, to a target data storage or data structures. The target data storage or data structure may, in some embodiments, reside on a computing device or system such as a mobile device.
FIG. 9 is a flow chart of a computer-implemented method 900 of transferring a line from a tug-boat to a vessel.
Step 902 of the method 900 may include configuring a lower manipulator coupled to the tug-boat to orient an upper manipulator in accordance with an inertial frame.
Step 904 of the method 900 may include tracking, by one or more processors, a target position on the vessel relative to an end-effector of the upper manipulator based on output of one or more photosensors trained on the vessel.
Step 906 of the method 900 may include configuring the upper manipulator to position the end-effector relative to the target position, the end-effector coupled to the line to allow transfer of the line from the tug-boat to the vessel.
As can be understood, the examples described above and illustrated are intended to be exemplary only.
The embodiments described in this document provide non-limiting examples of possible implementations of the present technology. Upon review of the present disclosure, a person of ordinary skill in the art will recognize that changes may be made to the embodiments described herein without departing from the scope of the present technology. For example, the lower manipulator may be a serial manipulator comprising a plurality of links coupled together in series. Yet further modifications could be implemented by a person of ordinary skill in the art in view of the present disclosure, which modifications would be within the scope of the present technology.
The term "connected" or "coupled to", and arrows in flow charts, may indicate both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements).
As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the embodiments are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims (4)

  WHAT IS CLAIMED IS:
    Any and all features of novelty or inventive step described, suggested, referred to, exemplified, or shown herein, including but not limited to processes, systems, devices, and computer-readable and -executable programming and/or other instruction sets suitable for use in implementing such features.
    1. A computer-implemented method of transferring a line from a tug-boat to a vessel, comprising:
    configuring a lower manipulator coupled to the tug-boat to orient an upper manipulator in accordance with an inertial frame;
    tracking, by one or more processors, a target position on the vessel relative to an end-effector of the upper manipulator based on output of one or more photosensors trained on the vessel; and
    configuring the upper manipulator to position the end-effector relative to the target position, the end-effector coupled to the line to allow transfer of the line from the tug-boat to the vessel.
  2. A non-transitory computer-readable medium having stored thereon machine interpretable instructions which, when executed by one or more processors, cause the one or more processors to perform the computer-implemented method of claim 1.
  3. A system for transferring a line from a tug-boat to a vessel, comprising:
    a lower manipulator mounted on to the tug-boat;
    an upper manipulator defining an end-effector coupled to the line, the upper manipulator being coupled to the lower manipulator for orientation of the upper manipulator by the lower manipulator;
    one or more photosensors trained on the vessel to detect a target position on the vessel;
    one or more processors connected to the one or more photosensors to receive output from the one or more photosensors indicative of the target position, the lower manipulator and the upper manipulator being actuatably connected to the one or more processors; and
    computer-readable memory coupled to the one or more processors and storing processor-executable instructions that, when executed, configure the one or more processors to cause:
    configuring of the lower manipulator to orient the upper manipulator in accordance with an inertial frame,
    determining, by the one or more processors, a separation vector between the target position and the end-effector based on the output of the one or more photosensors, and
    configuring of the upper manipulator based on the separation vector to position the end-effector in proximity of the vessel to allow transfer of the line from the tug-boat to the vessel.
  4. An apparatus for transferring a line from a tug-boat to a vessel, comprising:
    one or more photosensors configured to be trained on the vessel to detect a target position on the vessel;
    an upper manipulator defining an end-effector configured to be coupled to the line, the upper manipulator being configurable to position the end-effector relative to the target position to allow transfer of the line from the tug-boat to the vessel; and
    a lower manipulator coupled to the tug-boat and the upper manipulator, the lower manipulator being configurable to orient the upper manipulator in accordance with an inertial frame.
Application AU2023202325A filed 17 April 2023; published as AU2023202325A1 on 11 May 2023.

Applications Claiming Priority: U.S. Provisional Application No. 63/349,938, filed 7 June 2022.