AU2020217371A1 - A method of surveying a target - Google Patents

A method of surveying a target

Info

Publication number
AU2020217371A1
Authority
AU
Australia
Prior art keywords
images
target
nodes
surveying
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
AU2020217371A
Inventor
Huiju Wi
Arnie Wolff
James Xu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Magnetic South Pty Ltd
Original Assignee
Magnetic South Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2019902920A0
Application filed by Magnetic South Pty Ltd filed Critical Magnetic South Pty Ltd
Publication of AU2020217371A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • B64C39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0011 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots associated with a remote control arrangement
    • G05D1/0044 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots associated with a remote control arrangement by providing the operator with a computer generated representation of the environment of the vehicle, e.g. virtual reality, maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 UAVs specially adapted for particular uses or applications
    • B64U2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

A method of surveying a target, the method comprising dividing the target into a plurality of nodes; obtaining a plurality of images at each of the plurality of nodes using one or more cameras; combining the plurality of images to form a composite 360° image about each of the nodes; combining each of the composite 360° images to form a VR image representation of the target; and displaying the VR image representation of the target on a user device.

Description

[Figure 3, drawing sheet 3/4]
A METHOD OF SURVEYING A TARGET
TECHNICAL FIELD
[0001] The present invention relates to a method of surveying a target. More particularly, the present invention relates to a method of surveying a terrain surface, a feature in a terrain surface, or a man-made object or structure.
BACKGROUND ART
[0002] Traditionally, aerial surveys of large areas, such as farmland, national parks, forests, plantations, mine sites, quarries or construction sites, have been performed in fixed wing aircraft by human observers taking photos of the area using a camera. Consequently, these surveys can be expensive and can often produce poor quality results.
[0003] In recent times, it has become more common to conduct aerial surveys using a camera mounted to an unmanned aerial vehicle (UAV). The UAV may be a fixed wing UAV or may be the type more commonly referred to as a "drone".
[0004] When the area to be surveyed is in a remote location, or is difficult to access (for instance, due to rough terrain), decision makers may not have a complete view or appreciation of the area and how it is currently being utilised. Therefore, the use of a UAV to conduct aerial surveys in these situations may be advantageous.
[0005] However, even in aerial surveys conducted by a UAV, the quality of images captured is often low, meaning that the images typically do not provide sufficient detail for certain actions, such as being able to conduct population counts in fauna or livestock surveys, or being able to identify maintenance or safety issues in (for example) mines or quarries.
[0006] Thus, there would be an advantage if it were possible to provide an improved method and system of surveying a target that ameliorates the aforementioned problems.
[0007] It will be clearly understood that, if a prior art publication is referred to herein, this reference does not constitute an admission that the publication forms part of the common general knowledge in the art in Australia or in any other country.
SUMMARY OF INVENTION
[0008] The present invention is directed to a method of surveying a target, which may at least partially overcome at least one of the abovementioned disadvantages or provide the consumer with a useful or commercial choice.
[0009] With the foregoing in view, the present invention in one form, resides broadly in a method of surveying a target, the method comprising: dividing the target into a plurality of nodes; obtaining a plurality of images at each of the plurality of nodes using one or more cameras; combining the plurality of images to form a composite 360° image about each of the nodes; combining each of the composite 360° images to form a VR image representation of the target; and displaying the VR image representation of the target on a user device.
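By way of illustration only, the following Python sketch outlines the claimed sequence of steps end to end. All names (Node, survey_target, and the capture, stitch and display callables) are hypothetical placeholders and form no part of the claimed method.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    x: float          # easting (m) relative to a survey datum
    y: float          # northing (m)
    z: float          # altitude (m)
    images: List = field(default_factory=list)

def survey_target(nodes: List[Node],
                  capture: Callable,
                  stitch_360: Callable,
                  build_vr: Callable,
                  display: Callable) -> None:
    """Skeleton of the claimed method: capture a plurality of images
    at each node, stitch one 360-degree composite per node, combine
    the composites into a VR representation, then display it."""
    composites = []
    for node in nodes:
        node.images = capture(node)                # plurality of images at the node
        composites.append(stitch_360(node.images)) # composite 360-degree image
    vr = build_vr(composites)                      # VR image representation of the target
    display(vr)                                    # render on the user device
```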
[0010] In a preferred embodiment of the invention, the step of obtaining a plurality of images to form a composite 360° image about each of the nodes may be performed as an aerial survey. Thus, in this embodiment of the invention, the one or more cameras may be associated with a UAV. It will be understood that the term "UAV" may include any unmanned aerial vehicle, including fixed wing vehicles. More preferably, however, the one or more cameras may be associated with a type of UAV commonly referred to as a drone. It will be understood that one of the differences between a fixed wing UAV and a drone is the ability of a drone to hover at a particular point in space. Thus, in this specification, the term "drone" is used to refer to a UAV that is capable of hovering at a point in space. In a preferred embodiment of the invention, the one or more cameras may be associated with a drone. One or more drones may be used to survey the target.
[0011] The target may be any suitable type. For instance, the target may be a terrain surface (such as farmland, national park, forest, plantations, mine site, construction site, or the like), a feature in the terrain surface (such as land formations, water courses and bodies, vegetation, glaciers, caves, or the like), or a man-made object or structure (such as a bridge, a building, a road, property boundaries, a dam wall, an open-pit, a quarry, or the like). The target may be surveyed to gather information about the target and/or to create a map of the target. For instance, the information about the target may be used to establish a base-line condition of the target, to monitor the condition of the target or to plan future development of the target. The target may be surveyed to undertake an inventory of the target, such as for instance, surveying vegetation, vegetation structure, habitat structure, water reserves, land usage or degradation, or animal populations in the area. The information may be used to access a remote target, to access a target that may be too dangerous to physically access, or would normally require extensive training before it could be accessed, or for training purposes.
[0012] The method of surveying a target comprises dividing the target into a plurality of nodes. The target may be divided into a plurality of nodes by any suitable means. Preferably however, the target may be divided into a plurality of nodes in order to facilitate the surveying of the target. For instance, the plurality of nodes may be substantially located about the periphery of a building or structure to be surveyed. Alternatively, the plurality of nodes may be located about a three-dimensional space formed by an open-pit mine, quarry, gully, canyon or the like. However, it will be understood that the manner in which the target may be divided into a plurality of nodes may vary depending on a number of factors, such as the type of target to be surveyed, the size and dimensions of the target, the purpose of the survey and the type of camera to be used.
[0013] The target may be divided into a plurality of nodes by any suitable means. The target may be divided into a plurality of nodes automatically using a computer or similar computing device. More preferably, however, the target may be divided into a plurality of nodes manually by a user. Preferably, the manual task of dividing a target into a plurality of nodes may be performed by a user using a computer or similar computing device. The computer or similar computing device may transfer information regarding the plurality of nodes to a drone. Alternatively, the information regarding the plurality of nodes may be transferred to the computer or similar computing device responsible for operation of the drone.
[0014] The plurality of nodes may be any suitable form. For instance, the plurality of nodes may have a physical form (such as a cell tower) or a virtual form (such as a set of coordinates), or combinations thereof. Preferably, however, each of the plurality of nodes may be defined by x-, y- and z-axes or coordinates. Thus, the plurality of nodes may comprise points in space located at any suitable point from the boundaries of the target, as well as any suitable height above the target (or at least above a ground surface). In an embodiment of the invention, the plurality of nodes may be spaced apart. Preferably, the plurality of nodes may be arranged such that they define a multi-dimensional space about the target. The plurality of nodes may be spaced equidistantly from one another, or may be spaced apart at different distances from one another.
[0015] The plurality of nodes may serve as waypoints in a flight plan for a drone to optimise flight time of the drone and to obtain sufficient images to adequately cover the three-dimensional space about the target. Alternatively, the plurality of nodes may serve as waypoints in the VR image representation to allow the user to navigate to specific features about the target. For instance, a user may select a node to navigate to the selected node and observe the three dimensional space about the selected node or to interact with features located at the selected node, such as associated photos, annotations, charts, or the like. The plurality of nodes may serve as markers having known coordinates that can be used to increase the accuracy of the VR image representation of the target relative to its true position on the Earth. In a preferred embodiment, at least a portion of the nodes represent a point in space at which the one or more cameras take a plurality of images. More preferably, the one or more cameras take a plurality of images at each of the nodes.
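As an illustrative sketch only, one simple way to divide a rectangular target into equidistant nodes at a fixed altitude is shown below; the grid_nodes helper, its spacing and the example dimensions are assumptions, not anything prescribed by the specification.

```python
def grid_nodes(x_min, x_max, y_min, y_max, z, spacing):
    """Equidistant grid of (x, y, z) waypoint nodes over a rectangular
    target at a fixed altitude z (all units in metres)."""
    nx = int((x_max - x_min) // spacing) + 1
    ny = int((y_max - y_min) // spacing) + 1
    return [(x_min + i * spacing, y_min + j * spacing, z)
            for i in range(nx) for j in range(ny)]

# Example: a 200 m x 100 m pit, one node every 50 m, at 40 m altitude.
waypoints = grid_nodes(0, 200, 0, 100, z=40, spacing=50)
print(len(waypoints))  # -> 15 nodes (a 5 x 3 grid)
```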
[0016] The method for surveying a target may comprise obtaining one or more pieces of data from at least one node of the plurality of nodes. Data obtained from the plurality of nodes may be associated with the plurality of images. Any suitable data may be obtained. For instance, the data may include location or position data (such as data from a GPS position sensor, geospatial attitude sensors, and the like), attitude data (such as data from a gyroscope, an accelerometer, a magnetometer, and the like), contact time with a drone, or combinations thereof. However, it will be understood that the type of data obtained from the plurality of nodes may vary depending on the type of node. In a preferred embodiment, the data may be associated with the plurality of images as metadata.
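A minimal sketch of associating such data with an image as metadata follows; the tag_image helper and its field names are illustrative assumptions only (paragraph [0016] does not prescribe a format), and in practice the fields might be written into the image's EXIF tags instead.

```python
from datetime import datetime, timezone

def tag_image(filename, lat, lon, alt_m, yaw_deg, pitch_deg, roll_deg):
    """Bundle position and attitude readings with an image as a
    metadata record keyed by the image file."""
    return {
        "file": filename,
        "position": {"lat": lat, "lon": lon, "alt_m": alt_m},
        "attitude": {"yaw_deg": yaw_deg, "pitch_deg": pitch_deg, "roll_deg": roll_deg},
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
```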
[0017] The method of surveying a target comprises obtaining a plurality of images at each of the plurality of nodes using one or more cameras. The one or more cameras may be of any suitable type. For instance, the one or more cameras may be a still camera, a video camera, a 360° camera, a stereo camera, an image capture technology (such as a charge coupled device), or combinations thereof. The one or more cameras may comprise any suitable lens (such as a wide angle lens) or filter (such as a UV filter, neutral density filter, polarising filter, or the like). The one or more cameras may be the same type of camera or, alternatively, the one or more cameras may be of different types. However, it will be understood that the type of camera may vary depending on a number of factors such as the type of target to be surveyed, the purpose of the survey and the type of images and resolution required. In addition, the type of camera may be dependent on the type of drone being used. By this it is meant that, for instance, the camera cannot be so large and/or heavy that it exceeds the payload capacity of the drone.
[0018] The one or more cameras may be tasked with surveying the same aspect of the target or, alternatively, the one or more cameras may be tasked with surveying different aspects of the target. For instance, when two or more cameras are used, the cameras may be mounted so as to create a horizontal offset such that the plurality of images may be obtained with a perspective offset. One or more cameras may be tasked with identifying and obtaining images of moving objects (such as vehicles, livestock, people, etc.) in the target while other cameras may be tasked with obtaining images of structures (buildings, land formations, terrain etc.) in the target.
[0019] The one or more cameras may be mounted to any suitable object. For instance, the one or more cameras may be located on the body of a user or mounted on a vehicle, an unmanned aerial vehicle (UAV) or drone, a manipulator mounted to a vehicle or drone, a surveyor's tripod, or the like. The camera may be mounted on any suitable portion of the object. Preferably, the one or more cameras may be mounted on a portion of the object so as to have a substantially unimpeded view of the target.
[0020] The drone may comprise one or more sensors. The one or more additional sensors may be of any suitable type. For instance, the sensor may be a distance sensor (such as RADAR, LiDAR, SONAR, laser range finders, or the like), a proximity sensor (such as infrared sensors, imaged-based sensors, sound or ultrasonic sensors, laser scanners, etc.), imaging technology (such as multispectral imaging sensors, thermal sensor, etc.), a navigation or orientation sensor (such as a satellite navigation system, a time-of-flight sensor, an accelerometer, an inertial navigation system, a microelectromechanical system, etc.), or the like. It is envisaged that in use, data from the one or more sensors may assist the drone by improving navigation, avoiding collisions, or the like. Alternatively, the data from the one or more sensors may provide additional information about the target.
[0021] The method for surveying a target may comprise obtaining one or more pieces of data from at least one camera of the one or more cameras. Data obtained from the one or more cameras may be associated with the plurality of images. The data may include data from the one or more cameras and/or the object to which the camera may be mounted. For instance, the data may include location or position data, attitude data, range data, time of image capture, multispectral data, or combinations thereof. In a preferred embodiment, the data may be in the form of metadata associated with the plurality of images.
[0022] The one or more cameras and/or the object to which the camera may be mounted may contact the computing device at regular or irregular periods of time to transmit images and data associated with the survey. More preferably, however, the cameras and/or the objects to which the cameras may be mounted may be adapted to continuously contact the computing device such that a user of the computing device receives essentially a live feed of the survey. Alternatively, the computing device may contact the cameras and/or the objects to which the cameras may be mounted to trigger them to transmit images and data associated with the survey. In a yet further embodiment, the images and data associated with the survey may be physically transferred when the cameras and/or the objects to which the cameras may be mounted return to the launch site. In an embodiment of the invention, the one or more pieces of data may be transferred to a computing device and, based on the data received, the computing device and/or a user of the computing device may transfer one or more new survey tasks and/or one or more updated survey tasks to the at least one camera of the one or more cameras and/or the object to which the camera is mounted.
[0023] As previously stated, a plurality of images may be obtained at each of the plurality of nodes. The plurality of images obtained at each of the plurality of nodes may be of any suitable number. The number of images obtained at each of the plurality of nodes may differ depending on the node. Preferably, however, sufficient images may be obtained at each of the plurality of nodes such that, when the plurality of images is combined, they represent a 360° view about each of the nodes. However, it will be understood that the number of images obtained may vary depending on a number of factors, such as the type of images and final resolution required, the type of target to be surveyed, the purpose of the survey and the type of camera and lens used.
[0024] Preferably, sufficient images may be taken at each of the plurality of nodes such that adjacent images may overlap. In an embodiment of the invention, each image of the plurality of images at least partially overlaps with one or more adjacent images. In a preferred embodiment, each image of the plurality of images has at least about 10% overlap with one or more adjacent images, more preferably between approximately 15% and 75% overlap, more preferably, between approximately 25% and 65% overlap. Preferably, each image of the plurality of images has between approximately 30% and 60% overlap with one or more adjacent images. However, it will be understood that the amount of overlap required between the plurality of images may vary depending on the quality of the images and the final resolution required, the type of target to be surveyed, and the type of camera and lens used.
[0025] The number of images required to achieve the desired degree of overlap between adjacent images will depend on a number of factors including, for instance, the field of vision of the one or more cameras, the distance from the target at which the one or more cameras are located when the images are taken and so on. However, it is envisaged that at least 10 images may be obtained by the one or more cameras at each of the nodes. More preferably, at least 15 images may be obtained by the one or more cameras at each of the nodes. Yet more preferably, at least 20 images may be obtained by the one or more cameras at each of the nodes. Still more preferably, at least 25 images may be obtained by the one or more cameras at each of the nodes. In a specific embodiment, 28 images may be obtained by the one or more cameras at each of the nodes.
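The per-ring arithmetic implied by paragraphs [0024] and [0025] can be made concrete with a short sketch. The 84° field of view and 40% overlap used here are assumed values, chosen only to show that a count in the region of the 28 images mentioned above falls out naturally.

```python
import math

def images_per_node(hfov_deg, overlap, pitch_rows=3, zenith_nadir=2):
    """Rough shot count for a full sphere about a node: each ring of
    yaw positions needs 360 / (hfov * (1 - overlap)) frames, repeated
    for several camera pitch angles plus zenith and nadir shots."""
    per_ring = math.ceil(360.0 / (hfov_deg * (1.0 - overlap)))
    return per_ring * pitch_rows + zenith_nadir

# An (assumed) 84-degree lens with 40% overlap:
# ceil(360 / (84 * 0.6)) = 8 frames per ring; three pitch rings plus
# zenith and nadir shots gives 26 images, the same order as the
# 28-image example in paragraph [0025].
print(images_per_node(84, 0.40))  # -> 26
```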
[0026] Preferably, each of the images taken at a node are captured at the same altitude as one another. However, it is envisaged that at least one image at a node may be taken at a different altitude to the other images.
[0027] Similarly, it is preferred that the images taken at each of the nodes are captured at the same altitude as the images taken at each of the other nodes. In this situation, the combining of the composite 360° images from each node will be relatively straightforward, given that each of the composite 360° images will have been taken at the same altitude as one another. However, there will be times when at least one of the composite 360° images may have been taken at a different altitude to the other composite 360° images. This may occur when, for instance, images must be captured at a different height due to the presence of a landform or object (such as a tree, hill or mountain, building, a gully or ravine, a mine or quarry or the like). Thus, images may be captured at higher or lower altitudes depending on the nature of the landform or object.
[0028] In other embodiments of the invention, the altitude at which images are captured at a node may be varied depending on the environmental conditions experienced. For instance, at certain locations, and at certain altitudes, updrafts or downdrafts may be experienced. When images are taken during updraft or downdraft conditions, the quality of the image may be reduced. For instance, the image may be blurred, out of focus, out of alignment and so on.
[0029] In some embodiments of the invention, the drone may be provided with one or more stabilising members (such as, but not limited to, one or more stabilisers, gimbals or the like) adapted to stabilise the drone so that high quality images may be taken even in updraft or downdraft conditions.
[0030] In an alternative embodiment of the invention, the altitude of the drone may be changed when updraft or downdraft conditions are experienced. In this embodiment, it is envisaged that the drone may be moved to a higher or lower altitude in order to avoid the updraft or downdraft conditions. Images may then be captured at the new altitude. In some embodiments, the changing of the altitude of the drone may be automatic. In this embodiment of the invention, one or more sensors associated with the drone may sense the presence of an updraft or downdraft and may adjust the altitude of the drone so as to avoid the updraft or downdraft. Preferably, the one or more sensors may detect that the stability of the drone in the updraft or downdraft falls outside a predetermined level of stability required to produce high quality images. Thus, the altitude of the drone may be altered in response to the stability of the drone detected by the one or more sensors. In this embodiment of the invention, it is envisaged that the change in altitude of the drone may be performed during the surveying of the target.
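A minimal sketch of the automatic altitude adjustment described above follows, assuming accelerometer variance as the stability measure; the threshold and step size are illustrative values only, and a real controller might equally descend to escape disturbed air.

```python
import statistics

def adjust_altitude(current_alt_m, accel_samples, variance_limit=0.8, step_m=5.0):
    """If accelerometer variance (used here as a crude proxy for
    updraft/downdraft buffeting) exceeds the stability limit, step the
    drone to a new altitude and retry the capture there."""
    if statistics.pvariance(accel_samples) > variance_limit:
        return current_alt_m + step_m   # climb out of the disturbed air
    return current_alt_m                # stable enough to image here
```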
[0031] In other embodiments of the invention, prior to surveying the target, the drone may be directed to at least one of the plurality of nodes in order to determine the environmental conditions experienced at each node. More preferably, as well as determining the environmental conditions, the altitude at which adverse environmental conditions may be avoided may also be determined. The determination of environmental conditions may be conducted by manually directing the drone to the at least one node, or may be conducted automatically by programming a flight plan for the drone. Once the environmental conditions have been determined, the drone may, during the survey of the target, be directed towards an altitude that avoids the adverse environmental conditions determined at a particular node.
[0032] The method of surveying a target comprises combining the plurality of images to form a composite 360° image about each of the nodes. In a preferred embodiment of the invention, the composite 360° image may be substantially spherical. The plurality of images may be combined using any suitable technique drawn from image processing, computer vision and multimedia. For instance, the plurality of images may be combined to form a composite 360° image by aligning the plurality of images, applying the aligned plurality of images to a compositing surface and stitching the aligned plurality of images together. The plurality of images may be rectified to correct for defects, distortions, or perspective differences.
[0033] The plurality of images may be aligned by aligning one or more nodes of the plurality of nodes, one or more visual markers in the target having known coordinates, one or more visual features in the target (such as a man-made object or structure, a feature in the terrain surface, etc.), or combinations thereof, in adjacent images. Alternatively, the plurality of images may be aligned by using pixel-to-pixel matching to search for image alignments that minimise pixel-to-pixel dissimilarities.
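As an illustration of the pixel-to-pixel matching approach just described, the following sketch scores candidate overlaps between two neighbouring frames by mean squared difference; it assumes equal-height NumPy image arrays and a purely horizontal offset, which real alignment would generalise.

```python
import numpy as np

def best_horizontal_shift(left, right, max_shift=64):
    """Brute-force pixel-to-pixel matching: choose the overlap width
    that minimises the mean squared difference between the right edge
    of one image and the left edge of its neighbour."""
    best_s, best_err = 1, float("inf")
    for s in range(1, max_shift + 1):
        a = left[:, -s:].astype(np.float64)   # right strip of left image
        b = right[:, :s].astype(np.float64)   # left strip of right image
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```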
[0034] The aligned plurality of images may be applied to a compositing surface and stitched together to form a composite 360° image. Any suitable compositing surface may be used. For instance, the compositing surface may be the coordinate space of a reference image, a cylindrical compositing surface, or a spherical compositing surface. Preferably, however, the compositing surface enables the aligned plurality of images to be formed into a composite 360° image.
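By way of example only, an off-the-shelf stitcher such as OpenCV's can perform the alignment, warping onto a compositing surface and seam blending in one call, as sketched below; a fully spherical 360° composite would additionally require a spherical (e.g. equirectangular) projection, which this sketch does not attempt.

```python
import cv2  # OpenCV

def stitch_node_images(paths):
    """Align, warp and blend a node's overlapping frames into one
    composite using OpenCV's high-level stitching pipeline."""
    images = [cv2.imread(p) for p in paths]
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, composite = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed (status {status})")
    return composite
```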
[0035] The combining of the aligned plurality of images to create the composite 360° image may be performed automatically using a computer or similar computing device. More preferably, however, the combining of the plurality of images to create the composite 360° image may be performed manually by a user. Preferably, the manual combination of the plurality of images is performed by a user using a computer or similar computing device. The computer or similar computing device may be the same computer or similar computing device which divides a target into a plurality of nodes or it may be a different computer or computing device.
[0036] The plurality of images may be rectified before and/or after the plurality of images are combined. For instance, the plurality of images may be corrected for optical defects (such as lens distortions, parallax, exposure differences, and the like), the images may be transformed so that the view point of adjacent images are aligned, etc., before the images are combined. In this instance, it is envisaged that the corrections may assist in aligning the plurality of images by providing better quality images. Alternatively, the plurality of images may be rectified after the plurality of images are aligned, for instance, colours may be adjusted to compensate for exposure differences, seam lines between adjacent images may be masked, moving objects masked, etc.
[0037] The plurality of images may be rectified to correct for image quality or perspective differences by any suitable means. For instance, the camera, or the object to which the camera may be mounted, may be adapted to correct for the effects of motion or vibration, by using devices such as gimbals and custom lenses. Alternatively, the camera may be provided with hardware to correct for distortions, such as distortion correction engines, electronic image stabilisation solutions, stabilisation filters and correction grids, and the like. Image correction algorithms may be used for colour correction, colour balancing, contrast correction, blending of overlap areas, aligning the viewpoints of adjacent images, transforming images to correct for altitude differences, and the like.
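A minimal sketch of one such rectification step, lens distortion correction with an assumed pre-calibrated pinhole camera model, is shown below; the intrinsic matrix and distortion coefficients are illustrative values only, and real values come from calibrating the actual survey camera.

```python
import numpy as np
import cv2

# Illustrative pinhole intrinsics and radial/tangential distortion
# coefficients; not the parameters of any particular survey camera.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])

def rectify(image):
    """Remove lens distortion before alignment so that straight
    features stay straight across stitched seams."""
    return cv2.undistort(image, K, dist)
```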
[0038] The method of surveying a target comprises combining each of the composite 360° images to form a Virtual Reality (VR) image representation of the target. The composite 360° images may be combined together by any suitable method. The combining of the composite 360° images may be performed automatically using a computer or similar computing device. More preferably, however, the combining of the composite 360° images may be performed manually by a user. Preferably, the manual combination of the composite 360° images is performed by a user using a computer or similar computing device. The computer or similar computing device may be the same computer or similar computing device which combines the plurality of images to create the composite 360° image, or it may be a different computer or computing device.
[0039] The VR image representation may comprise a plurality of composite 360° images, wherein each composite 360° image is located about an individual node of the plurality of nodes such that the composite 360° images do not substantially overlap. Alternatively, the VR image representation may comprise a plurality of composite 360° images, wherein each composite 360° image substantially overlaps with an adjacent composite 360° image, such that adjacent composite 360° images may be substantially seamlessly integrated.
[0040] In an embodiment of the invention, the VR image representation comprises an interface. Any suitable interface may be used. Preferably however, the interface enables the user to interact with one or more elements in the VR image representation to navigate the target and control their experience with the target. The user may interact with the interface by any suitable means, for instance the user may interact with the interface by clicking or hovering over an interactive object, scrolling through a menu, or any combination thereof. In an embodiment of the invention, a user may move between one or more nodes of the plurality of nodes to view the target from a different location. In an alternative embodiment of the invention, a user may enter a set of coordinates to move to a new location about the target. In a further embodiment of the invention, the user may click or hover over an interactive object to access information about the target or a feature in the target. A user may interact with the interface by adding an interactive object comprising photos, annotations, charts, or the like. In a preferred embodiment of the invention, the VR image representation is a substantially live feed of the survey. In an embodiment of the invention, a user may use the user device to transfer one or more new survey tasks and/or one or more updated survey tasks to the at least one camera of the one or more cameras and/or the object to which the camera is mounted. In an embodiment of the invention, a user may use the user device to transfer one or more new survey tasks or flight plans for a UAV and/or one or more updated survey tasks or flight plans for a UAV to the UAV.
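The node-based navigation described in this paragraph can be sketched as a small state machine; the VRSurvey class below is a hypothetical model only, with rendering and input handling left to the user device.

```python
class VRSurvey:
    """Minimal node-based navigation model for the VR interface: the
    user teleports between nodes and reads annotations attached to the
    current node (class and method names are illustrative)."""

    def __init__(self, composites, annotations=None):
        self.composites = composites          # node id -> 360-degree composite
        self.annotations = annotations or {}  # node id -> notes, photos, charts
        self.current = next(iter(composites))

    def goto(self, node_id):
        """Move the user to another node of the plurality of nodes."""
        if node_id not in self.composites:
            raise KeyError(f"unknown node {node_id!r}")
        self.current = node_id
        return self.composites[node_id]       # image the headset should render

    def info(self):
        """Interactive objects associated with the current node."""
        return self.annotations.get(self.current, [])
```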
[0041] The method of surveying a target comprises displaying the VR image representation of the target on a user device. Any suitable user device may be used. Preferably, however, the user device may be capable of displaying a VR image representation. For instance, the user device may be a laptop computer, desktop computer, tablet computer, smart phone, head mounted display, or other terminal type. In a preferred embodiment, the user device may be capable of displaying high resolution VR images. However, it will be understood that the quality of the VR image representation may vary depending on a number of factors, such as the screen resolution of the user device, graphics and supporting hardware and inputs into the VR image representation.
[0042] In a preferred embodiment of the invention, the user device may be a head-mounted display. For instance, the head-mounted display may be a virtual reality headset, an augmented reality headset, or the like.
[0043] In an embodiment of the invention, a user may interact with the VR image representation. The user may interact with the VR image representation in any suitable way. For instance, the user may interact with the VR image representation actively, the user may view the VR image representation passively, or the VR image representation may respond to the user's movement, or any suitable combination thereof. The user device may comprise one or more interactive devices which enable a user to interact with the VR image representation. The one or more interactive devices may be located on the user device (such as eye tracking, head tracking, or the like) or may be external to the user device (such as sensors located about the user environment, conventional peripheral devices, haptic controllers, or the like). It is envisaged that an interactive device may provide sensory feedback to the user, adjust the visual output of the user device to the user's point of view, alter the position of the user in the VR image representation, and combinations thereof.
[0044] The head-mounted display and/or the one or more interactive devices may be capable of tracking rotational movement, translational movement and combinations thereof. In an embodiment of the invention, the head-mounted display and/or the one or more interactive devices may have at least three degrees of freedom, more preferably six degrees of freedom.
[0045] A user may transition between composite 360° images within the VR image representation and interface by any suitable means. For instance, the transition type may be teleport, linear or Möbius, or any suitable combination thereof.
[0046] In an embodiment of the invention, a computer or similar computing device may be used to prepare a target for surveying, operate the drone, process the plurality of images obtained by the drone, combine the plurality of images to form a composite 360° image and combine each of the composite 360° images to form a VR image representation of the target. In an alternative embodiment of the invention, different computers or computing devices may be used. Alternatively, a network of computers or similar computing devices may be used.
[0047] In a second aspect, the invention resides broadly in a method of surveying a target, the method comprising:
dividing the target into a plurality of nodes;
directing a UAV including one or more cameras to each of the plurality of nodes;
obtaining a plurality of images at each of the plurality of nodes using the one or more cameras;
combining the plurality of images to form a composite 360° image about each of the nodes;
combining each of the composite 360° images to form a VR image representation of the target; and
displaying the VR image representation of the target on a user device.
[0048] Preferably, the step of dividing the target into a plurality of nodes is the step of the first aspect of the invention.
[0049] The plurality of images may be obtained by directing an unmanned aerial vehicle (UAV) including one or more cameras to each of the plurality of nodes. The camera may be mounted on any suitable portion of the UAV. Preferably, the one or more cameras may be mounted on a portion of the UAV so as to have an unimpeded view of the target.
[0050] Any suitable UAV may be used. Preferably, however, the UAV may be adapted to implement the various systems and methods described substantially herein. For instance, the UAV may be a fixed wing airplane, a helicopter, a multi-rotor vehicle (e.g. a quadcopter, in single-propeller or coaxial configurations), a vertical take-off and landing vehicle, a lighter-than-air aircraft, etc. Preferably, however, the UAV is a drone. It will be understood that one of the differences between a fixed wing UAV and a drone is the ability of a drone to hover at a particular point in space. Thus, in this specification, the term "drone" is used to refer to a UAV that is capable of hovering at a point in space. The drone may be a commercially available drone platform or, alternatively, the drone may be a commercially available drone platform that has been modified to implement the various systems and methods described substantially herein. Preferably, the drone may be capable of hover flight.
[0051] One or more drones may be used to survey a target. The one or more drones may be assigned one or more nodes of the plurality of nodes. The one or more drones may be assigned different nodes of the plurality of nodes, such that each node of the plurality of nodes may be visited by only one drone. Alternatively, each node of the plurality of nodes may be visited by more than one drone. The one or more drones may include the same type of cameras, or alternatively, the one or more drones may include different cameras.
[0052] In an embodiment of the invention, the drone may be operated autonomously. In a further embodiment of the invention, the drone may operate semi-autonomously. In this embodiment, it is envisaged that the drone may be adapted to be controlled remotely by an operator. The operator may control the drone using any suitable technique. Preferably, the operator may control the drone remotely. In this embodiment of the invention, it is envisaged that a wireless connection (i.e. a wireless transmitter) may be provided between the operator and the drone. Thus, the operator may be located remotely to the drone. In some embodiments, a remote operator may be provided with an interface (such as a screen or other display device) that allows the operator to view and monitor the operation of the drone. In these embodiments, the remote operator may be capable of intervening in the operation of the drone at any time if, for instance, the drone is in danger of colliding with an object, is not operating optimally, is needed for a different task or has completed its task, or in order to provide an updated flight plan or updated coordinate information for the plurality of nodes. It is envisaged that the remote operator may be able to switch the drone between being operator controlled and being operated autonomously.
[0053] In some embodiments of the invention, the drone may be operated entirely autonomously. In this embodiment, a control unit of the drone may be provided with one or more predetermined rules relating to the coordinates of the plurality of nodes, one or more features in the target (such as a man-made object or structure, a feature in the terrain surface, etc.) and the task to be performed or the like, or any suitable combination thereof. Thus, it is envisaged that the drone may be operated solely by the control unit of the drone (and the rules contained therein). In this embodiment, it is envisaged that no human operator may be required.
[0054] Data obtained from the drone may be associated with the plurality of images. Any suitable data may be obtained from the drone. For instance, the data may include location or position data (such as data from a GPS position sensor, geospatial attitude sensors, and the like), attitude data (such as data from a gyroscope, an accelerometer, a magnetometer, barometric sensor, and the like), drone speed, wind speed, time of contact with each of the plurality of nodes, time of image capture, or combinations thereof. In a preferred embodiment, the data obtained from the drone may be metadata.
[0055] The drone may contact the computing device at regular or irregular periods of time to transmit images and data associated with the survey. More preferably, however, the drone may be adapted to continuously contact the computing device such that a user of the computing device is receiving a substantially live feed of the survey. Alternatively, the computing device may contact the drone to trigger the drone to transmit images and data associated with the survey. In a yet further embodiment, the images and data associated with the survey may be physically transferred when the drone returns to its launch site. In an embodiment of the invention, the one or more pieces of data may be transferred to a computing device and, based on the data received, the computing device and/or a user of the computing device may transfer one or more new survey tasks or flight plans and/or one or more updated survey tasks or flight plans to the drone.
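As an illustration of the continuous-contact embodiment, the sketch below pushes survey records from the drone to the ground-station computing device over a plain TCP socket; the transport, the newline-delimited JSON message format and the one-second interval are all assumptions, not anything the specification mandates.

```python
import json
import socket
import time

def stream_survey(host, port, next_record, interval_s=1.0):
    """Continuously push survey records to the ground station. Each
    record is a metadata dict (image bytes would travel on a separate
    channel or be base64-encoded before being placed in the record)."""
    with socket.create_connection((host, port)) as sock:
        while True:
            sock.sendall(json.dumps(next_record()).encode("utf-8") + b"\n")
            time.sleep(interval_s)
```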
[0056] Preferably, the step of obtaining a plurality of images at each of the plurality of nodes using the one or more cameras is the step of the first aspect of the invention.
[0057] Preferably, the step of combining the plurality of images to form a composite 360 image about each of the nodes is the step of the first aspect of the invention.
[0058] Preferably, the step of combining each of the composite 360° images to form a VR image representation of the target is the step of the first aspect of the invention.
[0059] Preferably, the step of displaying the VR image representation of the target on a user device is the step of the first aspect of the invention.
[0060] The present invention provides a number of advantages over the prior art. For instance, the present invention allows a more realistic view of the target to be provided at a location remote therefrom (such as an office or the like) and overcomes the need to have people located at the target conducting physical surveys, thus eliminating the need for safety inductions and improving safety in general. Further, this may avoid the cost and time associated with people needing to travel to a remote location. In addition, the present invention provides high resolution images of the target, allowing for more accurate mapping and surveying and the identification of safety hazards or maintenance requirements. The high resolution images may also assist in the more accurate identification of features within the target, such as flora and fauna, the contours of the terrain, natural structures or the like.
[0061] Any of the features described herein can be combined in any combination with any one or more of the other features described herein within the scope of the invention.
[0062] The reference to any prior art in this specification is not, and should not be taken as an acknowledgement or any form of suggestion that the prior art forms part of the common general knowledge.
BRIEF DESCRIPTION OF DRAWINGS
[0063] Preferred features, embodiments and variations of the invention may be discerned from the following Detailed Description which provides sufficient information for those skilled in the art to perform the invention. The Detailed Description is not to be regarded as limiting the scope of the preceding Summary of the Invention in any way. The Detailed Description will make reference to a number of drawings as follows:
[0064] Figure 1 illustrates a two-dimensional map of a target illustrative of the prior art;
[0065] Figure 2 illustrates a representation of a method of surveying a target according to an embodiment of the invention;
[0066] Figure 3 illustrates a view of a target showing the aligned plurality of images applied to a compositing surface; and
[0067] Figure 4 illustrates a screenshot from the VR image representation according to an embodiment of the invention.
DESCRIPTION OF EMBODIMENTS
[0068] In Figure 1, a two-dimensional map of a terrain surface 100 illustrative of the prior art is illustrated. It will be noted that conventional two-dimensional images such as that illustrated in Figure 1 show limited detail and/or low resolution, and are largely unsuitable for identifying safety hazards, maintenance requirements, terrain contours, or for conducting surveys of flora and fauna types and populations.
[0069] In Figure 2, a representation of a method of surveying a target 200 according to an embodiment of the invention is illustrated. A mine site 10 to be surveyed is divided into a plurality of nodes 12 such that the plurality of nodes 12 define a multi-dimensional space about the mine site 10. A drone 14 including one or more cameras 16 (such as an RGB camera) and a GPS receiver (not shown) is directed, according to a predetermined flight plan, to one or more nodes of the plurality of nodes 12. At least a portion of the plurality of nodes 12 represent a point in space at which the one or more cameras 16 of the drone 14 take a plurality of images of the mine site 10. The drone obtains a plurality of images from each node of the plurality of nodes 12, wherein each of the plurality of images is associated with one or more pieces of data obtained from the GPS receiver of the drone 14 and/or the plurality of nodes 12.
[0070] The drone 14 transmits the plurality of images and one or more pieces of data obtained from the GPS receiver and/or the plurality of nodes 12 to a computing device 18 comprising software for processing. It is envisaged that, in use, the drone 14 may transmit images and data continuously to a computing device 18. Thus, the drone 14 may also include a transmitter, such as a wireless transmitter (not shown). A user may interact with the drone 14 where necessary to alter the flight plan of the drone, adjust the altitude of the drone or provide the drone with new tasks.
[0071] A user may use the computing device 18 to combine the plurality of images and one or more pieces of data to form a composite 360° image of the target 300. Each composite 360° image of the target 300 may be combined with an interface to form a VR image representation of the target 400. A user may interact with the VR image representation of the target 400 displayed on a user device such as a VR headset.
[0072] In Figure 3, a view of a target showing the aligned plurality of images applied to a compositing surface is illustrated. The computing device 18 combines the plurality of images by aligning the plurality of images, applying the aligned plurality of images 32 to a compositing surface 30 and stitching the aligned plurality of images together.
[0073] In Figure 4, a screenshot from the VR image representation 400 according to an embodiment of the invention is illustrated. The VR image representation 400 is displayed on a user device such as a VR headset (not shown). A user (not shown) can interact with the VR image representation 400 by clicking on an interactive object to move to a new location, selecting a menu option 42, or clicking on an object (such as a photo) associated with the interface 44.
[0074] In the present specification and claims (if any), the word 'comprising' and its derivatives, including 'comprises' and 'comprise', include each of the stated integers but do not exclude the inclusion of one or more further integers.
[0075] Reference throughout this specification to 'one embodiment' or 'an embodiment' means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases 'in one embodiment' or 'in an embodiment' in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more combinations.
[0076] In compliance with the statute, the invention has been described in language more or less specific to structural or methodical features. It is to be understood that the invention is not limited to specific features shown or described since the means herein described comprises preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims (if any) appropriately interpreted by those skilled in the art.

Claims (16)

1. A method for surveying a target, the method comprising:
dividing the target into a plurality of nodes;
obtaining a plurality of images at each of the plurality of nodes using one or more cameras;
combining the plurality of images to form a composite 360° image about each of the nodes;
combining each of the composite 360° images to form a VR image representation of the target; and
displaying the VR image representation of the target on a user device.
2. A method for surveying a target, the method comprising:
dividing the target into a plurality of nodes;
directing a UAV including one or more cameras to each of the plurality of nodes;
obtaining a plurality of images at each of the plurality of nodes using the one or more cameras;
combining the plurality of images to form a composite 360° image about each of the nodes;
combining each of the composite 360° images to form a VR image representation of the target; and
displaying the VR image representation of the target on a user device.
3. A method for surveying a target according to claim 1 or claim 2, further comprising: obtaining one or more pieces of data from at least one node of the plurality of nodes and/or at least one camera of the one or more cameras and/or an object to which the camera is mounted.
4. A method for surveying a target according to claim 3, wherein the one or more pieces of data is associated with the plurality of images obtained at the at least one node and/or by the at least one camera and/or the object to which the camera is mounted.
5. A method for surveying a target according to claim 3 or claim 4, wherein the one or more pieces of data is transferred to a computing device and wherein based on the data received, the computing device and/or a user of the computing device transfers one or more new survey tasks and/or one or more updated survey tasks to the at least one camera of the one or more cameras and/or the object to which the camera is mounted.
6. A method for surveying a target according to claim 3 or claim 4, wherein the one or more pieces of data is transferred to a computing device and wherein based on the data received, the computing device and/or a user of the computing device transfers one or more new survey tasks or flight plans for a UAV and/or one or more updated survey tasks or flight plans for a UAV to the UAV.
7. A method for surveying a target according to any one of the preceding claims, wherein each image of the plurality of images at least partially overlaps with one or more adjacent images.
8. A method for surveying a target according to any one of the preceding claims, wherein each image of the plurality of images has at least about 30% overlap with one or more adjacent images.
9. A method for surveying a target according to any one of the preceding claims, wherein the plurality of images is combined by aligning the plurality of images and applying the aligned plurality of images to a compositing surface to form a composite 360° image.
10. A method for surveying a target according to any one of the preceding claims, wherein the plurality of images is rectified before combining the plurality of images.
11. A method for surveying a target according to any one of the preceding claims, wherein the VR image representation comprises an interface, wherein the interface enables a user to interact with one or more elements in the VR image representation.
12. A method for surveying a target according to claim 11, wherein a user of the user device may interact with the interface to move between one or more of the plurality of nodes and/or access one or more pieces of information associated with the node, the one or more cameras and/or an object to which the camera is mounted, one or more of the plurality of images, or the composite 360° image.
13. A method for surveying a target according to any one of the preceding claims, wherein the VR image representation is a substantially live feed of the survey.
14. A method for surveying a target according to any one of the preceding claims, further comprising:
transferring using the user device one or more new survey tasks and/or one or more updated survey tasks to the at least one camera of the one or more cameras and/or the object to which the camera is mounted.
15. A method for surveying a target according to any one of the preceding claims, further comprising:
transferring using the user device one or more new survey tasks or flight plans for a UAV and/or one or more updated survey tasks or flight plans for a UAV to the UAV.
16. A method for surveying a target according to any one of the preceding claims, wherein one or more UAVs are used to survey the target.
AU2020217371A 2019-08-13 2020-08-12 A method of surveying a target Pending AU2020217371A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2019902920A AU2019902920A0 (en) 2019-08-13 A method of surveying a target
AU2019902920 2019-08-13

Publications (1)

Publication Number Publication Date
AU2020217371A1 true AU2020217371A1 (en) 2021-03-04

Family

ID=74716010

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020217371A Pending AU2020217371A1 (en) 2019-08-13 2020-08-12 A method of surveying a target

Country Status (1)

Country Link
AU (1) AU2020217371A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022241680A1 (en) * 2021-05-19 2022-11-24 深圳市大疆创新科技有限公司 Method and apparatus for dividing operation area, and storage medium


Similar Documents

Publication Publication Date Title
US20210072745A1 (en) Systems and methods for uav flight control
US11794890B2 (en) Unmanned aerial vehicle inspection system
US11635775B2 (en) Systems and methods for UAV interactive instructions and control
US20210358315A1 (en) Unmanned aerial vehicle visual point cloud navigation
US12007761B2 (en) Unmanned aerial vehicle inspection system
US11361665B2 (en) Unmanned aerial vehicle privacy controls
KR102001728B1 (en) Method and system for acquiring three dimentional position coordinates in non-control points using stereo camera drone
EP3783454B1 (en) Systems and methods for adjusting uav trajectory
US9513635B1 (en) Unmanned aerial vehicle inspection system
US11644839B2 (en) Systems and methods for generating a real-time map using a movable object
Thurrowgood et al. A biologically inspired, vision‐based guidance system for automatic landing of a fixed‐wing aircraft
JP6138326B1 (en) MOBILE BODY, MOBILE BODY CONTROL METHOD, PROGRAM FOR CONTROLLING MOBILE BODY, CONTROL SYSTEM, AND INFORMATION PROCESSING DEVICE
CN105391988A (en) Multi-view unmanned aerial vehicle and multi-view display method thereof
WO2017139282A1 (en) Unmanned aerial vehicle privacy controls
AU2020217371A1 (en) A method of surveying a target
WO2020225979A1 (en) Information processing device, information processing method, program, and information processing system
US10424105B2 (en) Efficient airborne oblique image collection
CN111665870A (en) Trajectory tracking method and unmanned aerial vehicle
KR102467855B1 (en) A method for setting an autonomous navigation map, a method for an unmanned aerial vehicle to fly autonomously based on an autonomous navigation map, and a system for implementing the same
US11409280B1 (en) Apparatus, method and software for assisting human operator in flying drone using remote controller
CN112154389A (en) Terminal device and data processing method thereof, unmanned aerial vehicle and control method thereof
Sumetheeprasit Flexible Configuration Stereo Vision using Aerial Robots
Lee Military Application of Aerial Photogrammetry Mapping Assisted by Small Unmanned Air Vehicles
WO2022175385A1 (en) Apparatus, method and software for assisting human operator in flying drone using remote controller
Ekpa et al. Application of UAV Technology in Mapping Part of University of Uyo, Akwa Ibom, Nigeria