WO2017205916A1 - Three dimensional object mapping - Google Patents

Three dimensional object mapping

Info

Publication number
WO2017205916A1
WO2017205916A1 PCT/AU2017/050516
Authority
WO
WIPO (PCT)
Prior art keywords
devices
template
ship
contour
ranging
Prior art date
Application number
PCT/AU2017/050516
Other languages
French (fr)
Inventor
Shanil Mario Herat
Original Assignee
Amlab Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2016902054A external-priority patent/AU2016902054A0/en
Application filed by Amlab Pty Ltd filed Critical Amlab Pty Ltd
Priority to AU2017274080A priority Critical patent/AU2017274080B2/en
Publication of WO2017205916A1 publication Critical patent/WO2017205916A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • a method and apparatus for creating a three dimensional model of an object that is mobile relative to a location. More preferably, the method and apparatus of the present invention allows for creating a three dimensional model of a ship relative to a quay.
  • a method for creating a three dimensional model of an object that is mobile relative to a location comprising: creating a template for the object using range sensor data from one or more ranging devices; capturing contour scan data of the object using range sensor data from one or more contour scanning devices; and combining the contour scan data of the object and the object template to create the three dimensional model of the object relative to the location.
  • the object is mobile and the location is stationary.
  • the object is stationary and the location is mobile.
  • both the object and the location are mobile.
  • the term stationary here should be understood to mean in a fixed position relative to the earth.
  • a three dimensional model is a mathematical representation of the three dimensional surfaces of an object.
  • the three dimensional model can be rendered as a two dimensional image or used in a computer simulation of physical phenomena.
  • contour scan data consists of three dimensional positional information which is used to capture the shape of the object. This can range from a few representative points as desired, to many tens of thousands forming a detailed image. The collected data can then be used to construct a digital three dimensional model. Whilst the contour scan data alone may be sufficient to produce a three dimensional model of a stationary object, motion blur often makes it unsuitable for objects with inconsistent movement. The inventors of the present invention have discovered that the combination of the contour scan data and the template allows for the creation of an accurate three dimensional model, even when the object is not stationary during the contour scanning.
  • a template is a pattern used as a guide for alignment of data. More specifically, the template aligns the data from the one or more contour scanning devices to the object.
  • the template is a rigid reference target of any shape which is representative of the object's position and orientation. At a minimum, the template is two lines formed by three non-collinear points. More preferably, the template is a representative box. Still preferably, the template is a bounding box. As would be understood by a person skilled in the art, a bounding box is a volume which completely contains the selected portions of the object. Still preferably, the bounding box has a cuboid geometry.
  • combining the contour scan data of the object and the object template more specifically comprises consolidating sensor data from both the ranging and contour scanning devices into an effective and coherent three dimensional model of the object relative to the location. More preferably, contour scan data of the object is rotated and translated to align to the object template.
  • the inventors have determined that the combination of the contour scan data and the object template allows for the contour scan data to be rotated and translated by the object template position and orientation to account for any changes to the position and orientation of the object with respect to the location.
  • the three dimensional model is a representative box. In another form of the present invention, the three dimensional model is a bounding box.
  • the range sensor data more specifically comprises direction and distance information.
  • scan data from the one or more contour scanning devices may also be used to create the template of the object.
  • scan data from the one or more ranging devices may also be used to capture contour scan data of the object.
  • At least one of the one or more ranging devices is in a fixed position relative to the location. In an alternative form of the present invention, at least one of the one or more ranging devices is mobile relative to the location. The inventors have determined that even if the one or more ranging sensors are mobile with respect to the location, the range sensor data captured can still be used to create the object template, provided that the location of the ranging sensors with respect to the location is known.
  • the step of: creating a template for the object using range sensor data from one or more ranging devices at the location more specifically comprises: creating a template for the object using range sensor data from two or more ranging devices at the location.
  • At least one of the one or more ranging devices are Light Detection and Ranging (LiDAR) devices. More preferably, at least one of the one or more ranging devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more ranging devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more ranging devices are Radio Detection and Ranging (RADAR) devices.
  • LiDAR Light Detection and Ranging
  • RADAR Radio Detection and Ranging
  • At least one of the one or more contour scanning devices is in a fixed position relative to the location. In an alternative form of the present invention, at least one of the one or more contour scanning devices is mobile relative to the location. Where two or more contour scanning devices are used it will be understood that a combination of fixed and mobile contour scanning devices can be used.
  • At least one of the one or more contour scanning devices are LiDAR devices. More preferably, at least one of the one or more contour scanning devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more contour scanning devices are multiple 2D LiDAR devices. In another form of the present invention, the one or more contour scanning devices are RADAR devices.
  • comparison of range sensor data from consecutive scans of the one or more ranging devices is used to calculate the movement of the object.
  • the comparison of the range sensor data from the one or more ranging devices is carried out by a processor configured to calculate a positional relationship between the location and the object.
  • the dimensions of the object are calculated from the three dimensional model of the object.
  • the object has a body, with at least one rigid region uniquely identifiable from other regions.
  • the step of creating an object template more specifically comprises directing one or more ranging devices towards rigid region(s) of the object to establish the template. More preferably, one or more ranging devices are dedicated to capturing rigid region(s) of the object to establish the template.
  • range sensor data sets comprise direction and distance data, preferably in the form of azimuth, elevation and distance.
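  • By way of a non-limiting illustration, the conversion of such a range data set (azimuth, elevation, distance) into Cartesian coordinates in the sensor frame may be sketched as below; the function name and the azimuth/elevation conventions are assumptions for illustration, not part of the disclosure:

```python
import math

def range_to_cartesian(azimuth_deg, elevation_deg, distance):
    """Convert one range sensor return (azimuth, elevation, distance) into
    Cartesian (x, y, z) in the sensor frame.

    Convention assumed here: azimuth is measured in the horizontal plane
    from the +X axis towards +Z; elevation is measured up from the
    horizontal plane, with Y vertical (matching the quay height axis)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = distance * math.cos(el)   # projection onto the horizontal plane
    x = horizontal * math.cos(az)
    y = distance * math.sin(el)            # height above the sensor
    z = horizontal * math.sin(az)
    return (x, y, z)
```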
  • the step of creating an object template more specifically comprises the capture of at least three non-collinear points of each rigid region's key features to establish a template.
  • the step of creating an object template more specifically comprises directing at least one of the one or more ranging devices towards each of the two ends of the object to capture end range data sets.
  • by directing the ranging devices towards each of the ends of the object, the template ends can be created.
  • the combination of the template end data with other range data sets allows for a more accurate template to be established.
  • the step of creating an object template more specifically comprises using at least one of the one or more ranging devices to capture where the body meets the top surface to create a template edge.
  • the combination of the template edge data with other range data sets allows for a more accurate template to be established.
  • the step of creating the template enables movement of the object across multiple axes to be tracked.
  • Such movements include up, down, left, right, forward, backward, yaw, pitch and roll of the object.
  • a method for creating a three dimensional model of a ship relative to a quay comprising, creating a ship template using range sensor data from two or more ranging devices; capturing contour scan data of the ship using range sensor data from one or more contour scanning devices; combining the contour scan data of the ship and the ship template to create the three dimensional model of the ship relative to the quay.
  • a template is a pattern used as a guide for alignment of data. More specifically, the ship template aligns the data from the one or more contour scanning devices to the ship.
  • the ship template is a rigid reference target of any shape which is representative of the ship's position and orientation. At a minimum, the ship template is two lines formed by three non-collinear points. More preferably, the ship template is a representative box. Still preferably, the ship template is a bounding box. As would be understood by a person skilled in the art, a bounding box is a volume which completely contains the selected portions of the ship. Still preferably, the bounding box has a cuboid geometry.
  • scan data from the one or more contour scanning devices may also be used to capture range sensor data.
  • scan data from the two or more ranging devices may also be used to capture contour scan data of the ship.
  • the three dimensional model of a ship is a ship hull bounding box.
  • comparison of range sensor data from consecutive scans of the two or more ranging devices is used to calculate the movement of the ship.
  • the comparison of the range sensor data from the two or more ranging devices is carried out by a processor configured to calculate a positional relationship between the quay and the ship.
  • At least one of the two or more ranging devices is in a fixed position relative to the quay.
  • the quay is stationary.
  • the quay is a floating platform.
  • At least one of the two or more ranging devices are Light Detection and Ranging (LiDAR) devices. More preferably, at least one of the two or more ranging devices are three dimensional LiDAR devices. In an alternative form of the present invention, the two or more ranging devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the two or more ranging devices are RADAR devices.
  • At least one of the one or more contour scanning devices are in a fixed position relative to the quay.
  • at least one of the one or more contour scanning devices is mobile. Where two or more contour scanning devices are used it will be understood that a combination of stationary and mobile contour scanning devices can be used.
  • At least one of the one or more contour scanning devices are LiDAR devices. More preferably, at least one of the one or more contour scanning devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more contour scanning devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more contour scanning devices are RADAR devices.
  • the three dimensional model of the ship allows for the calculation of the dimensions of the ship.
  • the ship has a hull, with a bow, stern and deck.
  • the step of creating a ship hull bounding box template more specifically comprises directing one or more ranging devices towards each of the bow and stern of the ship to capture bow and stern range data sets.
  • from the bow and stern range data sets, the bounding box template ends can be created.
  • the step of creating a ship hull bounding box template more specifically comprises using one or more ranging devices to capture where the hull meets the deck amidship to establish a bounding box edge.
  • the step of creating a ship hull bounding box template more specifically comprises combining the bounding box ends with the bounding box edge to create the upper and side planar faces of the bounding box.
  • the step of creating the ship hull bounding box template enables movement of the ship across multiple axes to be tracked. Such movements include heave, sway, surge, yaw, pitch and roll. Additionally, water level measurement enables freeboard and draft of the ship to be calculated.
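  • The freeboard and draft calculation referred to above may be sketched as follows; this is a simplified, non-limiting illustration, and the variable names and the common quay datum are assumptions:

```python
def freeboard_and_draft(deck_height, water_level, hull_depth):
    """Compute freeboard and draft from a water level measurement.

    deck_height and water_level are heights relative to a common quay
    datum (an assumption for illustration); hull_depth is the ship's
    total keel-to-deck depth. Freeboard is the deck height above the
    waterline; draft is the remainder of the hull depth below it."""
    freeboard = deck_height - water_level   # deck edge above waterline
    draft = hull_depth - freeboard          # keel below waterline
    return freeboard, draft
```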
  • an apparatus for creating a three dimensional model of an object that is mobile relative to a location comprising: one or more ranging devices to capture template data; one or more contour scanning devices to capture contour data; and at least one processor to combine the template data and the contour data to produce the three dimensional model of the object relative to a location.
  • At least one of the one or more ranging devices is in a fixed position relative to the location.
  • the apparatus comprises two or more ranging devices.
  • At least one of the one or more ranging devices are Light Detection and Ranging (LiDAR) devices. More preferably, at least one of the one or more ranging devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more ranging devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more ranging devices are RADAR devices.
  • At least one of the one or more contour scanning devices is in a fixed position relative to the location.
  • at least one of the one or more contour scanning devices are mobile. Where two or more contour scanning devices are used it will be understood that a combination of stationary and mobile contour scanning devices can be used.
  • At least one of the one or more contour scanning devices are LiDAR devices. More preferably, at least one of the one or more contour scanning devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more contour scanning devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more contour scanning devices are RADAR devices.
  • the object is mobile and the location is stationary.
  • the object is stationary and the location is mobile.
  • both the object and the location are mobile.
  • the term stationary here should be understood to mean in a fixed position relative to the earth.
  • Figure 1 is an upper perspective view of the use of the method of the present invention to create a three dimensional mapping of a ship;
  • Figure 2 is a reversed upper perspective view of the method shown in Figure 1;
  • Figure 3 is a perspective view of the use of the method of the present invention in calculating ship orientation and motion;
  • Figure 4 is a top view of the use of the method of the present invention to account for the orientation of the ship when capturing contour data;
  • Figure 5 is an upper perspective view of the use of the method of the present invention to align the bounding box template;
  • Figure 6 is an upper perspective view illustrating the problems associated with perceptual aliasing;
  • Figure 7 conceptually illustrates a series of experiments performed to demonstrate the method of the present invention;
  • Figure 8 documents experiment outcomes for the use of the method of the present invention for a first pass with ship yaw to starboard; and Figure 9 documents experiment outcomes for the use of the method of the present invention for a second pass with ship yaw to port.
  • Referring to Figures 1 to 5, there is shown an arrangement for the three dimensional mapping of an object, for example a ship 125, relative to a location, for example a quay 100, using data from a plurality of sensors.
  • the plurality of sensors comprises a bow facing three dimensional LiDAR ranging sensor 110, a stern facing three dimensional LiDAR ranging sensor 115 (together the quay LiDAR ranging sensors) and a three dimensional LiDAR contour scanning sensor 120.
  • the three dimensional LiDAR contour scanning sensor 120 is positioned on ship loading/unloading equipment 105.
  • Each of the sensors 110, 115 and 120 outputs a three dimensional point cloud relative to that sensor.
  • the position and orientation of the quay LiDAR sensors 110, 115 are captured during installation relative to arbitrary global origin 101.
  • the position and orientation of a LiDAR contour scanning sensor is able to be calculated from the loading/unloading equipment 105 position by way of encoders and/or GPS.
  • the global position of each of the points can be obtained by rotating and translating each point cloud dataset by the respective scanner's position and orientation relative to the global origin 101.
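  • The rotation and translation into the global frame may be sketched as below for the simplified yaw-only (planar) case; a full implementation would compose pitch and roll as well, and the function name and axis conventions are illustrative assumptions:

```python
import math

def to_global(points, sensor_pos, sensor_yaw_rad):
    """Transform sensor-frame points into the global quay frame.

    Each point is rotated by the sensor's yaw about the vertical (Y)
    axis, then translated by the sensor's position relative to the
    global origin 101. Yaw-only sketch; pitch and roll omitted."""
    c, s = math.cos(sensor_yaw_rad), math.sin(sensor_yaw_rad)
    px, py, pz = sensor_pos
    out = []
    for x, y, z in points:
        gx = c * x + s * z + px   # rotation about Y, then translation
        gy = y + py
        gz = -s * x + c * z + pz
        out.append((gx, gy, gz))
    return out
```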
  • a Cartesian coordinate system is established, aligning the X, Y and Z axes to the length, height and width of the quay respectively. This forms the framework around which all rotations, translations and geometric line/plane calculations are computed.
  • bow LiDAR 110 and stern LiDAR 115 are such that their field of view also captures within the point cloud dataset points on the quayside amidship planar hull section 135. Although the entire darkened planar hull section 135 is highlighted in the Figures, it is not necessary to capture three dimensional point cloud data of the entire hull section; only a sufficient number of points is needed to confidently establish the plane on which the hull section lies. At least three non-collinear points 136, 137, 138 are required to establish the plane. In practice, more non-collinear points may be obtained, enabling filtering, refinement and cross checking to be performed.
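  • Establishing the plane from three non-collinear points such as 136, 137 and 138 reduces to a cross product of two in-plane vectors, as in the following non-limiting sketch:

```python
import math

def plane_from_points(p1, p2, p3):
    """Return the unit normal of the plane through three non-collinear
    points (e.g. hull points 136, 137, 138). Raises if collinear."""
    # Two in-plane vectors
    u = tuple(p2[i] - p1[i] for i in range(3))
    v = tuple(p3[i] - p1[i] for i in range(3))
    # Cross product gives a vector normal to the plane
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(c * c for c in n))
    if length == 0:
        raise ValueError("points are collinear; plane is undefined")
    return tuple(c / length for c in n)
```

Additional points beyond three could then be checked against this plane for the filtering and cross checking mentioned above.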
  • an array of sensors could be directed towards the ship's hull along the quay to allow for any occlusion that may need to be overcome or to allow for varying vessel sizes if handling both small to large vessels where a one size fits all solution may be inappropriate.
  • An array of sensors, be it three dimensional or 2D, may also be used to provide greater flexibility as to ship berthing location.
  • An array of sensors along the quay also allows for greater availability and safety where a voting system can be used to provide cross checking and fault tolerance to any particular LiDAR sensor failure which is very important for fully autonomous control.
  • Alternate embodiments can also involve positioning the ranging or scanning devices on UAVs, as long as their position and orientation are accurately known, typically via GPS. Communication does not need to be wired and processing of data does not need to be central. Key feature point extraction of the bow, stern and deck / planar hull edge can be distributed, requiring only relatively small amounts of key point data to be exchanged, which can be achieved wirelessly. This is in contrast to the communication of large point cloud datasets, where wired solutions, typically fibre / Gigabit Ethernet, would be required given bandwidth requirements.
  • Detailed in Figure 3 is one possible set of points that establishes amidship rectangular hull section 135.
  • Hull deck edge 134, formed by the line between points 136 and 137, enables the ship's yaw 160 and pitch 162 to be immediately calculated relative to the location using simple trigonometry to yield the angle from the global X/Y/Z axes/planes, conveniently aligned to the horizontal/vertical quay length, height and width edges.
  • the third non-collinear point near the vessel hull waterline 138 can then be used to establish the hull plane 135 and simple trigonometry yields the ship roll angle relative to location.
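  • The trigonometry described in the preceding two paragraphs may be sketched as follows; this is a non-limiting simplification that treats the three angles independently rather than as a composed rotation, and the function name and axis conventions are assumptions:

```python
import math

def ship_orientation(p136, p137, p138):
    """Estimate yaw, pitch and roll (radians) from the three non-collinear
    hull points of Figure 3: p136 and p137 on hull deck edge 134 (running
    roughly along the quay length / X axis), p138 near the waterline."""
    ex, ey, ez = (p137[i] - p136[i] for i in range(3))
    yaw = math.atan2(ez, ex)                    # heading of edge in X/Z plane
    pitch = math.atan2(ey, math.hypot(ex, ez))  # bow-up / bow-down tilt of edge
    # Roll: lean of the hull plane, taken from the athwartships offset
    # between the deck edge and the waterline point below it.
    dx, dy, dz = (p138[i] - p136[i] for i in range(3))
    roll = math.atan2(dz, -dy)
    return yaw, pitch, roll
```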
  • the bow and stern LiDAR 110, 115 point cloud dataset can be rotated and translated by the calculated yaw, pitch and roll, resulting in a ship dataset aligned to the horizontal/vertical quay length (X), height (Y) and width (Z) edges.
  • the extreme points along the X axis 130, 140 can then be used to establish the bounding box 145 end planes perpendicular to both the vessel planar hull 135 and deck plane 125.
  • with end planes 130 and 140 established, vessel surge 166, heave 168 and sway 170 motion relative to the location is easily established from changes to the bounding box 145 X/Y/Z values over time in the Cartesian space relative to the origin 101.
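  • The derivation of the bounding box extents, and of surge/heave/sway from their change over time, can be sketched as below (a non-limiting illustration with assumed names):

```python
def bounding_box(points):
    """Axis-aligned bounding box of the yaw/pitch/roll-aligned ship
    dataset: (min, max) extents per axis. With the dataset aligned to
    the quay axes, the extreme X values give the end planes of box 145."""
    xs, ys, zs = zip(*points)
    return ((min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs)))

def box_motion(box_before, box_after):
    """Surge (X), heave (Y) and sway (Z) as the shift of the box minimum
    corner between consecutive scans."""
    return tuple(after[0] - before[0]
                 for before, after in zip(box_before, box_after))
```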
  • ship freeboard can be calculated.
  • the draft can also be calculated.
  • NREG Nonlinear Regression and Curve Fitting
  • Analysis of bow symmetry to obtain vessel beam overcomes occlusion of vessel far side planar hull edge which is likely to be obscured from bow LiDAR scanner view.
  • perceptual aliasing is when two or more places can be perceived as the same. An example of this is shown in Figure 6, where the image perceived from the scan data 450 could be of several locations 455, 456, 457 and, from the scan data capture 450 alone, the control system is not able to distinguish between them.
  • the inventors have discovered that by using one or more ranging devices, perceptual aliasing is overcome.
  • the top elevated and rear views in Figure 4 indicate a single ray 200 being cast from the contour LiDAR scanner onboard the loading/unloading equipment 105, where the scanner would typically output a direction and distance to the reflection 205.
  • the direction of the ray 200 is arbitrarily chosen to be parallel with the Z axis. Without the use of the template techniques disclosed, it is not possible to exactly place this reflection point 205 in an absolute sense without the bounds of the vessel being known. The many ranging points would only offer useful information once all points have been placed relative to each other and the bounds established.
  • motion blur is a problem faced in scanning the ship's superstructure, especially when the scanner is located on mobile equipment. Even while moored, sway and other motions can be induced by passing ships, especially if mooring equipment does not have automatic tension functionality. The limited ability to progressively scan while the ship may be moving also means that scanning must wait until the vessel is completely moored, where significant time could potentially be saved if scanning could be started while the ship is in its final stages of berthing.
  • Figure 5 illustrates the problems associated with motion blur for an example scenario of two scanning images involving receding tides and the vessel falling 310 relative to the quay loading/unloading equipment.
  • Light grey/horizontal stripe pattern 325 is the initial image seen by the contour LiDAR scanner and dark grey/vertical stripe pattern represents the final image seen 330.
  • the top perspective and rear views illustrate the use of the bounding box template functionality to offset 320 origin and direction of contour scanning rays by fixing bounding box 300 in mapping space.
  • the fall of the vessel relative to the quay and loading/unloading equipment is the equivalent of holding stationary the vessel / bounding box template 300 and adjusting the contour scanning LiDAR 315 by raising it 320. It can be seen from the top perspective and rear views that, although the environment visible from the raised location is different 325/330, the data or image within the pyramid field of view common to both positions 330 aligns/overlaps.
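  • The offsetting of the contour scanning ray origin by the template displacement may be sketched as follows; a translation-only, non-limiting illustration with assumed names (rotation would be compensated the same way via the template orientation):

```python
def compensate_ray_origin(ray_origin, template_pos_then, template_pos_now):
    """Hold the bounding box template 300 stationary in mapping space by
    offsetting the contour scanner ray origin by the template's measured
    displacement: a vessel falling 0.5 m with the tide is treated as the
    scanner rising 0.5 m (offset 320)."""
    offset = tuple(then - now
                   for then, now in zip(template_pos_then, template_pos_now))
    return tuple(o + d for o, d in zip(ray_origin, offset))
```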
  • The bottom perspective and rear views of Figure 5 illustrate the problem faced when the ship's position and orientation are not accurately known, without the use of the template method of the present invention.
  • the bottom graphics show the differing hull bounding boxes 305, 310 (for illustration of ship position only), where all that is known by the software system is the stationary LiDAR contour scan 315 image.
  • Although the environment common to both positions is the same as in the top graphics, it can be seen that the images seen by the system no longer overlap/align.
  • Very complex techniques are needed to shift the bottom LiDAR images prior/post receding tides to align common regions. This is further complicated when rotation and translation motion is experienced across multiple axes.
  • a series of tests were performed on a vessel prototype to demonstrate the use of the method of the present invention to create a three dimensional model of the vessel prototype relative to a simulated quay, with contour scans adjusted to compensate for induced motion blur and perceptual aliasing.
  • the results of the tests are shown in Figures 7 to 9.
  • the scanner and vessel prototype configuration in the experiments were chosen to align with the configuration shown in Figures 1 to 5.
  • contour scans were taken from the simulated quay directed towards the top and side of the vessel prototype.
  • a pass is understood to mean the collection of contour scans obtained along the longitudinal length of the vessel from the simulated quay. 2D plots of the three dimensional mapping space were prepared to aid illustration of the methodology.
  • Illustrated in Figures 7(a)-(d) are the key experiment steps and outcomes.
  • contour scans 510 show the image captured by a first pass 505 over the length of the vessel prototype with a yaw to starboard represented by line 500.
  • contour scans 530 show the image captured by a second pass 525 over the length of the vessel prototype with a yaw to port side represented by line 520.
  • the distorted composite image 535 is the result of combining contour scans 510 and 530.
  • Figures 7c and 7d show contour scans captured using the method of the present invention.
  • contour scans 550 show the image captured by a first pass 545 over the length of the vessel prototype with a yaw to starboard represented by the line 540, the same as 500 and 505 previously, but with the image rotated in mapping space such that the established bounding box is aligned on the vertical/horizontal axis.
  • contour scans 570 show the image captured by a second pass 565 over the length of the vessel prototype with a yaw to port side represented by the line 560, the same as 520 and 525 previously, but with the image rotated in mapping space such that the established bounding box is aligned on the vertical/horizontal axis.
  • the rotations in mapping space are evident from the slope of the contour scan lines/rays in 550 and 570.
  • the correct composite image 575 results when combining contour scans 550 and 570.
  • Figure 8 contains experiment output for the first pass corresponding with yaw to starboard and Figure 7c.
  • Points 600 indicate loading/unloading equipment and direction of contour scanning rays.
  • Points 605 and 610 indicate hull data captured by ranging scanners 110 and 115 respectively.
  • Points 620 indicate filtered hull data rotated and translated to align bounding box in mapping space.
  • Points 615 indicate same rotations and translations applied to contour scan ray direction 600.
  • Points 625 show three dimensional contour scans for the first pass.
  • Figure 9 contains experiment output for the second pass corresponding with yaw to port and Figure 7d.
  • Points 600 indicate loading/unloading equipment and direction of contour scanning rays.
  • Points 605 and 610 indicate hull data captured by ranging scanners 110 and 115 respectively.
  • Points 620 indicate filtered hull data rotated and translated to align bounding box in mapping space.
  • Points 615 indicate same rotations and translations applied to contour scan ray direction 600.
  • Points 630 show three dimensional contour scans for the second pass with correct object shape captured and data correctly overlaying when combined with first pass contour scans 625.

Abstract

A method for creating a three dimensional model of an object that is mobile relative to a location, the method comprising: creating a template for the object using range sensor data from one or more ranging devices; capturing contour scan data of the object using range sensor data from one or more contour scanning devices; and combining the contour scan data of the object and the object template to create the three dimensional mapping of the object relative to the location.

Description

Three Dimensional Object Mapping
TECHNICAL FIELD
[0001] In accordance with the present invention, there is provided a method and apparatus for creating a three dimensional model of an object that is mobile relative to a location. More preferably, the method and apparatus of the present invention allows for creating a three dimensional model of a ship relative to a quay.
BACKGROUND ART
[0002] The following discussion of the background art is intended to facilitate an understanding of the present invention only. The discussion is not an acknowledgement or admission that any of the material referred to is or was part of the common general knowledge as at the priority date of the application.
[0003] Seaborne cargo is the predominant means of transportation of raw materials and value added products. At present, the majority of goods transport operations are controlled directly by a human. Variances such as operator skill and fatigue result in performance which is less than optimal. Additionally, when an error is made, the result can be dangerous and costly. Aside from the clear benefits of reducing incidents and stoppages, even marginal improvements in performance translate to significant increases in productivity when compounded across millions of tonnes of cargo.
[0004] In the drive to reduce costs and time and increase efficiency and safety, automation is playing a pivotal role in the optimisation of operations. At some large container handling terminals, many parts of container handling have been automated, such as automatic stacking cranes and transport to/from using automatic guided vehicles. At some large bulk handling terminals, the majority of the equipment in the stockyard, such as stackers, reclaimers and conveyor systems, is also fully automated. However, one part of the transport operation which is still controlled manually is the ship loading/unloading process. This is typically completed either from an operator cabin onboard the ship loader/unloader or by remote control from a central control room with the use of video feeds.

[0005] The greatest challenge for fully autonomous ship loading/unloading operations, in correctly positioning the loading/unloading equipment, is for the control system to be able to account for the motion of the ship relative to the equipment. This motion is affected by such things as winds, tides, ballasting and even the changing load distribution caused by loading/unloading of cargo. Ship loading/unloading also often faces a changing operational layout, such as the ship's crew opening/closing hatches or movement of deck cranes, where no feedback into the loading/unloading control system readily exists to react accordingly.
[0006] Whilst some prior art methods have attempted to make use of range sensors and other imaging devices in order to prepare a three dimensional model of the ship, drawbacks in these methods mean that autonomous ship loading/unloading operations have yet to see widespread adoption. The major drawback of such systems is motion blur in the three dimensional image data. Standard three dimensional mapping systems require the object to be stationary throughout a scan of the object. When there is movement of the target or sensor during the scan, motion blur is experienced and the three dimensional image is distorted.
[0007] Throughout this specification, unless the context requires otherwise, the word "comprise" or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated integer or group of integers but not the exclusion of any other integer or group of integers.
SUMMARY OF INVENTION
[0008] In accordance with a first aspect of the present invention, there is provided a method for creating a three dimensional model of an object that is mobile relative to a location, the method comprising: creating a template for the object using range sensor data from one or more ranging devices; capturing contour scan data of the object using range sensor data from one or more contour scanning devices; and combining the contour scan data of the object and the object template to create the three dimensional model of the object relative to the location.
[0009] In one form of the present invention the object is mobile and the location is stationary. In an alternative form of the present invention, the object is stationary and the location is mobile. In an alternative form of the present invention, both the object and the location are mobile. The term stationary here should be understood to mean in a fixed position relative to the earth.
[0010] As would be understood by a person skilled in the art, a three dimensional model is a mathematical representation of the three dimensional surfaces of an object. The three dimensional model can be rendered as a two dimensional image or used in a computer simulation of physical phenomena.
[0011] As would be understood by a person skilled in the art, contour scan data consists of three dimensional positional information which is used to capture the shape of the object. This can range from a few representative points as desired, to many tens of thousands forming a detailed image. The collected data can then be used to construct a digital three dimensional model. Whilst the contour scan data alone may be sufficient to produce a three dimensional model of a stationary object, motion blur often makes it unsuitable for objects with inconsistent movement. The inventors of the present invention have discovered that the combination of the contour scan data and the template allows for the creation of an accurate three dimensional model, even when the object is not stationary during the contour scanning.
[0012] As would be understood by a person skilled in the art, a template is a pattern used as a guide for alignment of data. More specifically, the template aligns the data from the one or more contour scanning devices to the object. Preferably, the template is a rigid reference target of any shape which is representative of the object's position and orientation. At a minimum, the template is two lines formed by three non-collinear points. More preferably, the template is a representative box. Still preferably, the template is a bounding box. As would be understood by a person skilled in the art, a bounding box is a volume which completely contains the selected portions of the object. Still preferably, the bounding box has a cuboid geometry.

[0013] As would be understood by a person skilled in the art, combining the contour scan data of the object and the object template more specifically comprises consolidating sensor data from both the ranging and contour scanning devices into an effective and coherent three dimensional model of the object relative to the location. More preferably, contour scan data of the object is rotated and translated to align to the object template. The inventors have determined that the combination of the contour scan data and the object template allows for the contour scan data to be rotated and translated by the object template position and orientation to account for any changes to the position and orientation of the object with respect to the location. In one form of the present invention, the three dimensional model is a representative box. In another form of the present invention, the three dimensional model is a bounding box.
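The rotate-and-translate combining step can be sketched in code. The following is a minimal illustration only, not the patent's mandated implementation; the function name and the angle conventions (yaw about the vertical Y axis, pitch about the transverse Z axis, roll about the longitudinal X axis, matching a quay-aligned frame) are assumptions.

```python
import numpy as np

def align_contour_to_template(points, template_yaw, template_pitch,
                              template_roll, template_origin):
    """Rotate and translate raw contour-scan points into the frame of
    the object template, so scans taken while the object moves still
    overlay correctly. Angles are in radians; points are Nx3 arrays.
    All naming and conventions here are illustrative assumptions."""
    cy, sy = np.cos(template_yaw), np.sin(template_yaw)
    cp, sp = np.cos(template_pitch), np.sin(template_pitch)
    cr, sr = np.cos(template_roll), np.sin(template_roll)
    # Rotation matrices about the Y (yaw), Z (pitch) and X (roll) axes.
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R = Ry @ Rz @ Rx
    # Shift points to the template origin, then rotate into its frame.
    return (np.asarray(points) - np.asarray(template_origin)) @ R
```

With a zero pose the transform reduces to a pure translation, so points captured while the object is at its reference position pass through unchanged.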
[0014] Preferably, the range sensor data more specifically comprises direction and distance information.
[0015] In one form of the present invention, scan data from the one or more contour scanning devices may also be used to create the template of the object. In one form of the present invention, scan data from the one or more ranging devices may also be used to capture contour scan data of the object.
[0016] In one form of the present invention, at least one of the one or more ranging devices is in a fixed position relative to the location. In an alternative form of the present invention, at least one of the one or more ranging devices is mobile relative to the location. The inventors have determined that even if the one or more ranging sensors are mobile with respect to the location, the range sensor data captured can still be used to create the object template, provided that the location of the ranging sensors with respect to the location is known.
[0017] In one form of the present invention, the step of: creating a template for the object using range sensor data from one or more ranging devices at the location, more specifically comprises: creating a template for the object using range sensor data from two or more ranging devices at the location.
[0018] In one form of the present invention, at least one of the one or more ranging devices are Light Detection and Ranging (LiDAR) devices. More preferably, at least one of the one or more ranging devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more ranging devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more ranging devices are Radio Detection and Ranging (RADAR) devices.
[0019] In one form of the present invention, at least one of the one or more contour scanning devices is in a fixed position relative to the location. In an alternative form of the present invention, at least one of the one or more contour scanning devices is mobile relative to the location. Where two or more contour scanning devices are used it will be understood that a combination of fixed and mobile contour scanning devices can be used.
[0020] In one form of the present invention, at least one of the one or more contour scanning devices are LiDAR devices. More preferably, at least one of the one or more contour scanning devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more contour scanning devices are multiple 2D LiDAR devices. In another form of the present invention, the one or more contour scanning devices are RADAR devices.
[0021] In one form of the present invention, comparison of range sensor data from consecutive scans of the one or more ranging devices is used to calculate the movement of the object.
[0022] Preferably, the comparison of the range sensor data from the one or more ranging devices is carried out by a processor configured to calculate a positional relationship between the location and the object.
[0023] Preferably, the dimensions of the object are calculated from the three dimensional model of the object.

[0024] In one form of the present invention, the object has a body, with at least one rigid region uniquely identifiable from other regions. Preferably, the step of creating an object template, more specifically comprises directing one or more ranging devices towards rigid region(s) of the object to establish the template. More preferably, one or more ranging devices are dedicated to capturing rigid region(s) of the object to establish the template.
[0025] As would be understood by a person skilled in the art, range sensor data sets comprise direction and distance data, preferably in the form of azimuth, elevation and distance. In one form of the present invention, the step of creating an object template, more specifically comprises the capture of at least three non-collinear points of each rigid region's key features to establish a template.
[0026] In one form of the present invention, where the object comprises two ends, the step of creating an object template more specifically comprises directing at least one of the one or more ranging devices towards each of the two ends of the object to capture end range data sets. By directing the ranging devices towards each of the ends of the object, template ends can be created. As would be understood by a person skilled in the art, the combination of the template end data with other range data sets allows for a more accurate template to be established.
[0027] In one form of the present invention, where the object more specifically comprises a substantially flat top surface, the step of creating an object template more specifically comprises using at least one of the one or more ranging devices to capture where the body meets the top surface to create a template edge. As would be understood by a person skilled in the art, the combination of the template edge data with other range data sets allows for a more accurate template to be established.
[0028] In one form of the present invention, the step of creating the template enables movement of the object across multiple axes to be tracked. Such movements include up, down, left, right, forward, backward, yaw, pitch and roll of the object.
[0029] The inventors have discovered that motion blur is overcome by combining the contour scan data with the object template. By keeping the template fixed in mapping space, any change in position or orientation in the object can be addressed by rotating and translating the contour scan data relative to the template. Any change in position or orientation of the contour scanning device can also be dealt with similarly by adjusting position relative to the template.
[0030] In accordance with a second aspect of the present invention, there is provided a method for creating a three dimensional model of a ship relative to a quay, the method comprising, creating a ship template using range sensor data from two or more ranging devices; capturing contour scan data of the ship using range sensor data from one or more contour scanning devices; combining the contour scan data of the ship and the ship template to create the three dimensional model of the ship relative to the quay.
[0031] As would be understood by a person skilled in the art, a template is a pattern used as a guide for alignment of data. More specifically, the ship template aligns the data from the one or more contour scanning devices to the ship. Preferably, the ship template is a rigid reference target of any shape which is representative of the ship's position and orientation. At a minimum, the ship template is two lines formed by three non-collinear points. More preferably, the ship template is a representative box. Still preferably, the ship template is a bounding box. As would be understood by a person skilled in the art, a bounding box is a volume which completely contains the selected portions of the ship. Still preferably, the bounding box has a cuboid geometry.
[0032] In one form of the present invention, scan data from the one or more contour scanning devices may also be used to capture range sensor data. In one form of the present invention, scan data from the two or more ranging devices may also be used to capture contour scan data of the ship.
[0033] In one form of the present invention, the three dimensional model of a ship is a ship hull bounding box.

[0034] In one form of the present invention, comparison of range sensor data from consecutive scans of the two or more ranging devices is used to calculate the movement of the ship.
[0035] Preferably, the comparison of the range sensor data from the two or more ranging devices is carried out by a processor configured to calculate a positional relationship between the quay and the ship.
[0036] Preferably, at least one of the two or more ranging devices is in a fixed position relative to the quay. In one form of the present invention, the quay is stationary. In a second form of the present invention, the quay is a floating platform.
[0037] In one form of the present invention, at least one of the two or more ranging devices are Light Detection and Ranging (LiDAR) devices. More preferably, at least one of the two or more ranging devices are three dimensional LiDAR devices. In an alternative form of the present invention, the two or more ranging devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the two or more ranging devices are RADAR devices.
[0038] In one form of the present invention, at least one of the one or more contour scanning devices are in a fixed position relative to the quay. In an alternative form of the present invention, at least one of the one or more contour scanning devices is mobile. Where two or more contour scanning devices are used it will be understood that a combination of stationary and mobile contour scanning devices can be used.
[0039] In one form of the present invention, at least one of the one or more contour scanning devices are LiDAR devices. More preferably, at least one of the one or more contour scanning devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more contour scanning devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more contour scanning devices are RADAR devices.
[0040] Preferably, the three dimensional model of the ship allows for the calculation of the dimensions of the ship.

[0041] Preferably, the ship has a hull, with a bow, stern and deck.
[0042] In one form of the present invention, the step of creating a ship hull bounding box template, more specifically comprises directing one or more ranging devices towards each of the bow and stern of the ship to capture bow and stern range data sets. By directing the ranging devices towards each of the bow and stern, the bounding box template ends can be created.
[0043] In one form of the present invention, the step of creating a ship hull bounding box template, more specifically comprises using one or more ranging devices to capture where the hull meets the deck amidship to establish a bounding box edge.
[0044] In one form of the present invention, the step of creating a ship hull bounding box template, more specifically comprises, combining the bounding box ends with the bounding box edge to create the upper and side planar face of the bounding box.
[0045] As would be understood by a person skilled in the art, once the ship hull bounding box template has been created, the dimensions of the ship can be calculated.
[0046] In one form of the present invention, the step of creating the ship hull bounding box template enables movement of the ship across multiple axes to be tracked. Such movements include heave, sway, surge, yaw, pitch and roll. Additionally, water level measurement enables freeboard and draft of the ship to be calculated.
[0047] The inventors have discovered that perceptual aliasing and motion blur is overcome by combining the contour scan data with the ship hull bounding box template. By keeping the bounding box template fixed in mapping space, any change in position or orientation in the ship can be addressed by rotating and translating the scan data relative to the bounding box template. Any change in position or orientation of the contour scanning device can also be dealt with similarly by positioning relative to the bounding box template.
[0048] In accordance with a third aspect of the present invention, there is provided an apparatus for creating a three dimensional model of an object that is mobile relative to a location, the apparatus comprising: one or more ranging devices to capture template data; one or more contour scanning devices to capture contour data; and at least one processor to combine the template data and the contour data to produce the three dimensional model of the object relative to a location.
[0049] Preferably, at least one of the one or more ranging devices is in a fixed position relative to the location.
[0050] Preferably, the apparatus comprises two or more ranging devices.
[0051] In one form of the present invention, at least one of the one or more ranging devices are Light Detection and Ranging (LiDAR) devices. More preferably, at least one of the one or more ranging devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more ranging devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more ranging devices are RADAR devices.
[0052] In one form of the present invention, at least one of the one or more contour scanning devices is in a fixed position relative to the location. In an alternative form of the present invention, at least one of the one or more contour scanning devices are mobile. Where two or more contour scanning devices are used it will be understood that a combination of stationary and mobile contour scanning devices can be used.
[0053] In one form of the present invention, at least one of the one or more contour scanning devices are LiDAR devices. More preferably, at least one of the one or more contour scanning devices are three dimensional LiDAR devices. In an alternative form of the present invention, the one or more contour scanning devices are multiple 2D LiDAR devices. In another form of the present invention, at least one of the one or more contour scanning devices are RADAR devices.
[0054] In one form of the present invention the object is mobile and the location is stationary. In an alternative form of the present invention, the object is stationary and the location is mobile. In an alternative form of the present invention, both the object and the location are mobile. The term stationary here should be understood to mean in a fixed position relative to the earth.
BRIEF DESCRIPTION OF THE DRAWINGS
[0055] Further features of the present invention are more fully described in the following description of a non-limiting embodiment thereof. This description is included solely for the purposes of exemplifying the present invention. It should not be understood as a restriction on the broad summary, disclosure or description of the invention as set out above. The description will be made with reference to the accompanying drawings in which:
Figure 1 is an upper perspective view of the use of the method of the present invention to create a three dimensional mapping of a ship;
Figure 2 is a reversed upper perspective view of the method shown in Figure 1;
Figure 3 is a perspective view of the use of the method of the present invention in calculating ship orientation and motion;
Figure 4 is a top view of the use of the method of the present invention to account for the orientation of the ship when capturing contour data;
Figure 5 is an upper perspective view of the use of the method of the present invention to align the bounding box template;
Figure 6 is an upper perspective view illustrating the problems associated with perceptual aliasing;
Figure 7 conceptually illustrates a series of experiments performed to demonstrate the use of the method of the present invention;
Figure 8 documents experiment outcomes for the use of the method of the present invention for a first pass with ship yaw to starboard; and
Figure 9 documents experiment outcomes for the use of the method of the present invention for a second pass with ship yaw to port.
DESCRIPTION OF EMBODIMENTS
[0056] Those skilled in the art will appreciate that the invention described herein is susceptible to variations and modifications other than those specifically described. The invention includes all such variations and modifications. The invention also includes all of the steps and features referred to or indicated in the specification, individually or collectively, and any and all combinations of any two or more of said steps or features.
[0057] In Figures 1 to 5, there is shown an arrangement for the three dimensional mapping of an object, for example a ship 125, relative to a location, for example a quay 100, using data from a plurality of sensors. The plurality of sensors comprises a bow facing three dimensional LiDAR ranging sensor 110, a stern facing three dimensional LiDAR ranging sensor 115 (together the quay LiDAR ranging sensors) and a three dimensional LiDAR contour scanning sensor 120. In the embodiment shown in Figures 1 to 5, the three dimensional LiDAR contour scanning sensor 120 is positioned on ship loading/unloading equipment 105. Each of the sensors 110, 115 and 120 outputs a three dimensional point cloud relative to each sensor.
[0058] The position and orientation of the quay LiDAR sensors 110, 115 are captured during installation relative to an arbitrary global origin 101. The position and orientation of the LiDAR contour scanning sensor is able to be calculated from the loading/unloading equipment 105 position by way of encoders and/or GPS. The global position of each of the points can be obtained by rotating and translating each point cloud dataset by the respective scanner's position and orientation relative to the global origin 101. A Cartesian coordinate system is established, aligning the X/Y/Z axes to the length, height and width of the quay. This forms the framework around which all rotations, translations and geometric line/plane calculations are computed.
[0059] The position and orientation of the bow LiDAR 110 and stern LiDAR 115 are such that their field of view also captures, within the point cloud dataset, points on the quayside amidship planar hull section 135. Although the entire darkened planar hull section 135 is highlighted in the Figures, it is not necessary to capture three dimensional point cloud data of the entire hull section; only a sufficient number of points is needed to confidently establish the plane on which the hull section lies. At least three non-collinear points 136, 137, 138 are required to establish the plane. In practice, more non-collinear points may be obtained, enabling filtering, refinement and cross checking to be performed.
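Establishing the hull plane from three non-collinear points, as described above, can be sketched as follows. This is a minimal illustration: the point numerals follow Figure 3, but the function name, tolerance and any test coordinates are hypothetical.

```python
import numpy as np

def hull_plane(p136, p137, p138, tol=1e-9):
    """Plane through three non-collinear points captured on the
    amidship planar hull section. Returns (unit_normal, d) for the
    plane equation n . x = d, and rejects collinear inputs, since
    three collinear points cannot define a plane."""
    p136, p137, p138 = map(np.asarray, (p136, p137, p138))
    n = np.cross(p137 - p136, p138 - p136)
    norm = np.linalg.norm(n)
    if norm < tol:
        raise ValueError("points are collinear; cannot define a plane")
    n = n / norm
    return n, float(n @ p136)
```

When more than three points are available, as the paragraph notes, the same construction can be repeated over point triples (or replaced by a least-squares fit) for filtering and cross checking.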
[0060] Many approaches exist for selecting three non-collinear points on the hull section 135. Numerous algorithms exist, including open source software such as the Point Cloud Library, for segmentation of point cloud data into homogeneous regions, in particular planes (http://pointclouds.org/documentation/tutorials/planar_segmentation.php). Alternatively, with the bow and stern LiDAR scanning environment relatively known, being a vessel of rectangular volume, many shortcuts can be applied to the ship dataset to yield the planar hull section 135. For example, traversing the scan data outward from the quay towards the ship, points just beyond the quay can be tested for the ship's planar hull section and deck/hull edge.
[0061] Instead of three dimensional LiDAR devices, several 2D LiDAR sensors could alternatively be used in various horizontal and vertical plane combinations to obtain the same information. For example, an array of sensors could be directed towards the ship's hull along the quay to allow for any occlusion that may need to be overcome, or to allow for varying vessel sizes if handling small to large vessels where a one-size-fits-all solution may be inappropriate. An array of sensors, be it three dimensional or 2D, may also be used to provide greater flexibility as to ship berthing location. An array of sensors along the quay also allows for greater availability and safety, where a voting system can be used to provide cross checking and fault tolerance to any particular LiDAR sensor failure, which is very important for fully autonomous control.
[0062] Alternate embodiments can also involve positioning the ranging or scanning devices on UAVs, as long as their position and orientation is accurately known, typically via GPS. Communication does not need to be wired and processing of data does not need to be central. Key feature point extraction of the bow, stern and deck/planar hull edge can be distributed, requiring only relatively small amounts of key-point data to be exchanged, which can be achieved wirelessly. This is in contrast to the communication of large point cloud datasets, where wired solutions would be required, typically fibre/Gigabit Ethernet, given the bandwidth requirements.
[0063] Detailed in Figure 3 is one possible set of points that establishes the amidship rectangular hull section 135. Hull deck edge 134, formed by the line between points 136 and 137, enables the ship's yaw 160 and pitch 162 to be immediately calculated relative to the location, using simple trigonometry to yield the angles from the global X/Y/Z axes/planes, conveniently aligned to the horizontal/vertical quay length, height and width edges.
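The simple trigonometry for yaw and pitch from the deck edge line can be sketched as follows. The axis convention (X along the quay length, Y vertical, Z across the width) follows the coordinate system described above; the function name and any test coordinates are hypothetical.

```python
import math

def yaw_pitch_from_deck_edge(p136, p137):
    """Yaw and pitch (radians) of the hull deck edge 134 relative to
    the quay-aligned X (length), Y (height), Z (width) axes, from the
    two deck-edge points of Figure 3 given as (x, y, z) tuples."""
    dx = p137[0] - p136[0]  # along quay length
    dy = p137[1] - p136[1]  # vertical
    dz = p137[2] - p136[2]  # across quay width
    yaw = math.atan2(dz, dx)                    # rotation about vertical Y
    pitch = math.atan2(dy, math.hypot(dx, dz))  # bow-up/bow-down tilt
    return yaw, pitch
```

A deck edge running exactly along the quay gives zero yaw and pitch; an edge swung equally in X and Z gives a 45-degree yaw.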
[0064] The third non-collinear point near the vessel hull waterline 138 can then be used to establish the hull plane 135, and simple trigonometry yields the ship's roll angle relative to the location.
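One plausible sketch of the roll calculation takes the normal of the plane formed by the three non-collinear points and measures its lean away from horizontal: a vertical hull side gives zero roll. The function name and test coordinates are hypothetical.

```python
import numpy as np

def roll_from_hull_plane(p136, p137, p138):
    """Roll angle (radians) of the planar hull section defined by the
    three non-collinear points of Figure 3, given as (x, y, z) with Y
    vertical. The hull-plane normal of a vertical hull side is
    horizontal, so its Y component measures the roll lean."""
    p136, p137, p138 = map(np.asarray, (p136, p137, p138))
    normal = np.cross(p137 - p136, p138 - p136)
    normal = normal / np.linalg.norm(normal)
    # Vertical (Y) component of the unit normal gives the lean angle.
    return float(np.arcsin(abs(normal[1])))
```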
[0065] Once the rotational angles are known, the bow and stern LiDAR 110, 115 point cloud dataset can be rotated and translated by the calculated yaw, pitch and roll, resulting in a ship dataset aligned to the horizontal/vertical quay length (X), height (Y) and width (Z) edges. The extreme points along the X axis 130, 140 can then be used to establish the bounding box 145 end planes, perpendicular to both the vessel's planar hull 135 and deck plane 125. With end planes 130 and 140 established, vessel surge 166, heave 168 and sway 170 motion relative to the location is easily established from changes to the bounding box 145 X/Y/Z values over time in the Cartesian space relative to the origin 101. When combined with sensors to detect water levels, ship freeboard can be calculated. When combined with the ship's deadweight scale data, the draft can also be calculated.
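Once the dataset has been rotated into the quay-aligned frame, the bounding box extremes and the surge/heave/sway motion follow directly. A minimal sketch, with hypothetical function names and coordinates:

```python
import numpy as np

def bounding_box(points):
    """Axis-aligned bounding box of a ship point cloud already rotated
    into the quay-aligned X/Y/Z frame. Returns (min_xyz, max_xyz)."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def surge_heave_sway(prev_box, curr_box):
    """Translation of the bounding box between consecutive scans,
    taken from the movement of its minimum corner: the X, Y and Z
    components correspond to surge, heave and sway respectively."""
    return curr_box[0] - prev_box[0]
```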
[0066] The bow and stern LiDAR 110, 115 dataset extreme points along the X axis, combined with analysis of symmetry (assuming hull symmetry about the ship's centreline in the vertical plane), yield the vessel centreline end-points. Analysis of the bow LiDAR 110 dataset, in particular fitting to an elliptical model, such as for example the Nonlinear Regression and Curve Fitting (NLREG) analysis (detailed at http://www.nlreg.com/ellipse.htm), yields the centreline bow end-point and the vessel beam (width of the ellipse). Analysis of bow symmetry to obtain the vessel beam overcomes occlusion of the vessel's far side planar hull edge, which is likely to be obscured from the bow LiDAR scanner's view. Similar analysis of the LiDAR 115 dataset can be used to obtain the stern centreline end-point.

[0067] With yaw, pitch, roll, the planar hull/deck edge (line 134 formed by points 136 and 137), centreline end-points and beam calculated, the bounding box 145 constructed around the hull can be used for combining all contour scans made by the LiDAR contour scanning sensor 120. This alignment keeps the image stitching process for the three dimensional contour mapping of the vessel superstructure simple.
[0068] As would be understood by a person skilled in the art, perceptual aliasing occurs when two or more places can be perceived as the same. An example of this is shown in Figure 6, where the image perceived by the scan data 450 could be of several locations 455, 456, 457, and from the scan data capture 450 alone, the control system is not able to distinguish between them. The inventors have discovered that by using one or more ranging devices, perceptual aliasing is overcome.
[0069] As illustrated in Figure 4, because the ends of the vessel/bounding box 145 are known, perceptual aliasing problems are resolved. Knowing the position and orientation of the vessel/bounding box 145 and the position and orientation of the contour scanning LiDAR (from the loading/unloading equipment 105 positional devices, typically encoders/GPS), every scanning ray 200 and ranging reflection 205 (only a single ray/reflection point is shown for ease of illustration) can immediately and accurately be placed relative to the bounding box, without the start/end of the ship having to be traversed/scanned. With use of the calculated bounding box template, there is no ambiguity as to the location of any ray, nor the image perceived by the scan data 450, even if the capture is identical at several locations 455, 456, 457 as illustrated in Figure 6.
[0070] The top elevated and rear views in Figure 4 indicate a single ray 200 being cast from the contour LiDAR scanner onboard the loading/unloading equipment 105, where the scanner would typically output a direction and distance to the reflection 205. For ease of illustration, the direction of the ray 200 is arbitrarily chosen to be parallel with the Z axis. Without the use of the template techniques disclosed, it is not possible to exactly place this reflection point 205 in an absolute sense without the bounds of the vessel being known. The many ranging points would only offer useful information once all points have been placed relative to each other and the bounds established. With the bounding box template 145, applying the same rotations and translations used to align the bounding box to the X/Y/Z axes to ray 200 (Figure 4, bottom elevated and rear views), with the ray origin (a fixed location onboard the loading/unloading equipment, shifted by the equipment's location on the quay) and ray end 205 (distance and direction from the origin) known, its absolute position within the bounds of the vessel can be immediately determined.
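The ray placement described above can be sketched as follows. This is a simplified illustration assuming the scanner reports a unit direction vector and a distance; the names and coordinates are hypothetical, and the bounding box is reduced to a single reference corner for brevity.

```python
import numpy as np

def place_reflection(ray_origin, ray_direction, distance, box_min):
    """Place a contour-scanner reflection point in the bounding-box
    frame. ray_origin is the scanner position (equipment location on
    the quay plus the scanner's fixed offset), ray_direction a unit
    vector, distance the reported range, and box_min the bounding box
    corner used as the template origin."""
    point = np.asarray(ray_origin) + distance * np.asarray(ray_direction)
    return point - np.asarray(box_min)  # coordinates relative to the box
```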
[0071] As discussed previously, motion blur is a problem faced in scanning the ship's superstructure, especially when the scanner is located on mobile equipment. Even while moored, sway and other motions can be induced by passing ships, especially if the mooring equipment does not have automatic tension functionality. The limited ability to progressively scan while the ship may be moving also means that scanning must wait until the vessel is completely moored, whereas significant time could be saved if scanning could be started while the ship is in its final stages of berthing. By applying the same rotations and translations needed to align the bounding box template to the X/Y/Z axes to the LiDAR contour scanning rays, any movement of the vessel is immediately captured. Calculations and the relative position of mobile scanning rays are able to be adjusted immediately, automatically handling the image stitching problem and motion blur in creating the overall composite.
[0072] Figure 5 illustrates the problems associated with motion blur for an example scenario of two scanning images involving receding tides and the vessel falling 310 relative to the quay loading/unloading equipment. The light grey/horizontal stripe pattern 325 is the initial image seen by the contour LiDAR scanner and the dark grey/vertical stripe pattern 330 is the final image seen. The top perspective and rear views illustrate the use of the bounding box template functionality to offset 320 the origin and direction of the contour scanning rays by fixing the bounding box 300 in mapping space. The fall of the vessel relative to the quay and loading/unloading equipment is the equivalent of holding the vessel / bounding box template 300 stationary and adjusting the contour scanning LiDAR 315 by raising it 320. It can be seen from the top perspective and rear views that, although the environment visible from the raised location is different 325/330, the data or image within the pyramid field of view common to both positions 330 aligns/overlaps.
[0073] The Figure 5 bottom perspective and rear views illustrate the problem faced when the ship's position and orientation are not accurately known, without the use of the template method of the present invention. The bottom graphics show the differing hull bounding boxes 305, 310 (for illustration of ship position only), where all that is known by the software system is the stationary LiDAR contour scan 315 image. Although the environment common to both positions is the same as in the top graphics, it can be seen that the images seen by the system no longer overlap/align. Very complex techniques are needed to shift the bottom LiDAR images prior to/after the receding tides to align the common regions. This is further complicated when rotation and translation motion is experienced across multiple axes.
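The equivalence exploited in the top graphics of Figure 5 can be shown numerically. This is an illustrative sketch with invented values, not part of the specification: the range read by a downward-cast ray is identical whether the vessel falls by `fall` against a fixed scanner, or the vessel and bounding box template are held fixed and the scanner is raised by `fall`:

```python
# Equivalence used in Figure 5: a vessel falling relative to a fixed
# scanner produces the same range reading as a fixed vessel scanned
# from a raised position. (Hypothetical values for illustration only.)

fall = 0.3                 # tide-induced fall of the vessel, metres
scanner_z = 20.0           # height of the contour LiDAR on the quay
hull_top_z = 10.0          # hull point height before the tide recedes

# Case 1: vessel falls, scanner stays put.
range_after_fall = scanner_z - (hull_top_z - fall)

# Case 2: vessel (and bounding box template) held fixed, scanner raised.
range_scanner_raised = (scanner_z + fall) - hull_top_z

assert abs(range_after_fall - range_scanner_raised) < 1e-9  # identical
```

This is why fixing the bounding box in mapping space and offsetting the ray origins, as in the top graphics, makes the common regions of the two images overlap without any search-based alignment.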
Example
[0074] A series of tests was performed on a vessel prototype to demonstrate the use of the method of the present invention to create a three dimensional model of the vessel prototype relative to a simulated quay, with contour scans adjusted to compensate for induced motion blur and perceptual aliasing. The results of the tests are shown in Figures 7 to 9. The scanner and vessel prototype configuration in the experiments was chosen to align with the configuration shown in Figures 1 to 5. In this configuration, contour scans were taken from the simulated quay directed towards the top and side of the vessel prototype. A pass is understood to mean the collection of contour scans obtained along the longitudinal length of the vessel from the simulated quay. 2D plots of the three dimensional mapping space were prepared to aid illustration of the methodology.
[0075] Illustrated in Figures 7(a) - (d) are the key experiment steps and outcomes. In Figure 7a, contour scans 510 show the image captured by a first pass 505 over the length of the vessel prototype with a yaw to starboard represented by line 500. In Figure 7b, contour scans 530 show the image captured by a second pass 525 over the length of the vessel prototype with a yaw to port represented by line 520. The distorted composite image 535 is the result of combining contour scans 510 and 530.
[0076] Figures 7c and 7d show contour scans captured using the method of the present invention. In Figure 7c, contour scans 550 show the image captured by a first pass 545 over the length of the vessel prototype with a yaw to starboard represented by the line 540, the same as 500 and 505 previously, but with the image rotated in mapping space such that the established bounding box is aligned on the vertical/horizontal axes. In Figure 7d, contour scans 570 show the image captured by a second pass 565 over the length of the vessel prototype with a yaw to port represented by the line 560, the same as 520 and 525 previously, but with the image rotated in mapping space such that the established bounding box is aligned on the vertical/horizontal axes. The rotations in mapping space are evident from the slope of the contour scan lines/rays in 550 and 570. The correct composite image 575 results when combining contour scans 550 and 570.
[0077] Figure 8 contains experiment output for the first pass corresponding with yaw to starboard and Figure 7c. Points 600 indicate loading/unloading equipment and direction of contour scanning rays. Points 605 and 610 indicate hull data captured by ranging scanners 110 and 115 respectively. Points 620 indicate filtered hull data rotated and translated to align bounding box in mapping space. Points 615 indicate same rotations and translations applied to contour scan ray direction 600. Points 625 show three dimensional contour scans for the first pass.
[0078] Figure 9 contains experiment output for the second pass corresponding with yaw to port and Figure 7d. Points 600 indicate loading/unloading equipment and direction of contour scanning rays. Points 605 and 610 indicate hull data captured by ranging scanners 110 and 115 respectively. Points 620 indicate filtered hull data rotated and translated to align bounding box in mapping space. Points 615 indicate same rotations and translations applied to contour scan ray direction 600. Points 630 show three dimensional contour scans for the second pass with correct object shape captured and data correctly overlaying when combined with first pass contour scans 625.
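The two-pass result of Figures 7 to 9 can be reproduced in miniature. The sketch below is illustrative only (2D, yaw-only, with invented values; not the experimental software): a hull feature observed under opposite yaws disagrees in raw quay coordinates, but the two passes coincide once each is rotated into the bounding-box-aligned mapping space:

```python
import math

def rotate_pass(points, yaw):
    """Rotate one pass of contour points by -yaw so the established
    bounding box lies on the horizontal/vertical axes of mapping space."""
    c, s = math.cos(-yaw), math.sin(-yaw)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def observe(feature, yaw):
    """Quay-frame position of a vessel-frame feature under a given yaw."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * feature[0] - s * feature[1], s * feature[0] + c * feature[1])

# One hull feature seen in two passes with opposite yaw, echoing the
# starboard/port yaws of Figures 7c and 7d:
yaw_stbd, yaw_port = math.radians(4.0), math.radians(-4.0)
feature = (30.0, 5.0)                       # vessel-frame location

pass_1 = [observe(feature, yaw_stbd)]       # first pass, yaw to starboard
pass_2 = [observe(feature, yaw_port)]       # second pass, yaw to port

# Naively combined, the observations disagree (the distorted composite
# 535); rotated into the template-aligned frame they coincide (575):
aligned_1 = rotate_pass(pass_1, yaw_stbd)
aligned_2 = rotate_pass(pass_2, yaw_port)
```

This mirrors the experiment: points 615/620 in Figures 8 and 9 are exactly the per-pass rotations and translations, and the correct overlay of 625 and 630 corresponds to `aligned_1` and `aligned_2` agreeing.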

Claims

1. A method for creating a three dimensional model of an object that is mobile relative to a location, the method comprising: creating a template for the object using range sensor data from one or more ranging devices; capturing contour scan data of the object using range sensor data from one or more contour scanning devices; and combining the contour scan data of the object and the object template to create the three dimensional model of the object relative to the location.
2. A method according to claim 1, wherein the range sensor data more specifically comprises direction and distance information.
3. A method according to claim 1 or 2, wherein the template is a rigid reference target which is representative of the object's position and orientation.
4. A method according to claim 3, wherein the rigid reference target is a
representative box.
5. A method according to claim 4, wherein the three dimensional model is a
representative box.
6. A method according to any one of the preceding claims, wherein the step of: creating a template for the object using range sensor data from one or more ranging devices at the location, more specifically comprises: creating a template for the object using range sensor data from two or more ranging devices at the location.
7. A method according to any one of the preceding claims, wherein at least one of the one or more ranging devices are Light Detection and Ranging (LiDAR) devices.
8. A method according to any one of claims 1 to 6, wherein at least one of the one or more ranging devices are Radio Detection and Ranging (RADAR) devices.
9. A method according to any one of the preceding claims, wherein comparison of range sensor data from consecutive scans of the one or more ranging devices is used to calculate the movement of the object.
10. A method according to any one of the preceding claims, wherein the three
dimensional model of the object allows for the calculation of the dimensions of the object.
11. A method according to any one of the preceding claims, wherein the step of
creating the template enables movement of the object across multiple axes to be calculated and tracked.
12. A method according to any one of the preceding claims, wherein contour scan data of the object is rotated and translated to align to the object template.
13. A method for creating a three dimensional model of a ship relative to a quay, the method comprising: creating a ship template using range sensor data from two or more ranging devices; capturing contour scan data of the ship using range sensor data from one or more contour scanning devices; combining the contour scan data of the ship and the ship template to create the three dimensional model of the ship relative to the quay.
14. A method according to claim 13, wherein the template is a representative box of the ship.
15. A method according to claim 13, wherein the model is a representative box of the ship.
16. A method according to any one of claims 13 to 15, wherein the step of creating the template enables heave, sway, surge, yaw, pitch and roll movement of the ship to be calculated and tracked.
17. An apparatus for creating a three dimensional model of an object relative to a location, the apparatus comprising: one or more ranging devices to capture template data; one or more contour scanning devices to capture contour data; and at least one processor to combine the template data and the contour data to produce the three dimensional model of the object relative to a location.
18. An apparatus according to claim 17, wherein the apparatus comprises two
ranging devices.
19. An apparatus according to claim 17 or 18, wherein the one or more ranging devices are Light Detection and Ranging (LiDAR) devices.
20. An apparatus according to any one of claims 17 to 19, wherein the one or more ranging devices are Radio Detection and Ranging (RADAR) devices.
PCT/AU2017/050516 2016-05-30 2017-05-30 Three dimensional object mapping WO2017205916A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2017274080A AU2017274080B2 (en) 2016-05-30 2017-05-30 Three dimensional object mapping

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2016902054A AU2016902054A0 (en) 2016-05-30 3D Mobile Object Mapping
AU2016902054 2016-05-30

Publications (1)

Publication Number Publication Date
WO2017205916A1 true WO2017205916A1 (en) 2017-12-07

Family

ID=60479485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2017/050516 WO2017205916A1 (en) 2016-05-30 2017-05-30 Three dimensional object mapping

Country Status (2)

Country Link
AU (1) AU2017274080B2 (en)
WO (1) WO2017205916A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120092643A1 (en) * 2009-04-15 2012-04-19 Konecranes Plc System for the identification and/or location determination of a container handling machine
US20140107971A1 (en) * 2011-05-20 2014-04-17 Optilift As System, Device And Method For Tracking Position And Orientation Of Vehicle, Loading Device And Cargo In Loading Device Operations
US9128188B1 (en) * 2012-07-13 2015-09-08 The United States Of America As Represented By The Secretary Of The Navy Object instance identification using template textured 3-D model matching


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZALEWSKI, P.: "Ship's Horizontal Plane Determination by Means of Three Laser Range Sensors in Pilot Navigation and Docking System", JOURNAL OF KONBIN, vol. 24, no. 1, December 2012 (2012-12-01), pages 161 - 169, XP055442962, Retrieved from the Internet <URL:https://www.degruyter.com/downloadpdfj/jok.2012.24.issue-1/jok-2013-0062/jok-2013-0062.pdf> [retrieved on 20170619] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021222475A1 (en) * 2020-04-28 2021-11-04 Atherton Dynamics, Llc. Calculation of roll period for a vessel
GB2609812A (en) * 2020-04-28 2023-02-15 Atherton Dynamics Llc Calculation of roll period for a vessel
WO2022006629A1 (en) * 2020-07-07 2022-01-13 Amlab Pty Ltd Mapping of a crane spreader and a crane spreader target

Also Published As

Publication number Publication date
AU2017274080B2 (en) 2022-03-24
AU2017274080A1 (en) 2019-01-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17805396; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2017274080; Country of ref document: AU; Date of ref document: 20170530; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 17805396; Country of ref document: EP; Kind code of ref document: A1)