CA2912432C - Mapping of mining excavations - Google Patents

Mapping of mining excavations

Info

Publication number
CA2912432C
Authority
CA
Canada
Prior art keywords
camera
vehicle
digital
data processor
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA2912432A
Other languages
French (fr)
Other versions
CA2912432A1 (en)
Inventor
Roderick Mark STEELE
Current Assignee
TESMAN Inc
Original Assignee
TESMAN Inc
Priority date
Filing date
Publication date
Application filed by TESMAN Inc
Publication of CA2912432A1
Application granted
Publication of CA2912432C


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • E: FIXED CONSTRUCTIONS
    • E02: HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F: DREDGING; SOIL-SHIFTING
    • E02F9/00: Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/26: Indicating devices
    • E02F9/261: Surveying the work-site to be treated
    • E: FIXED CONSTRUCTIONS
    • E21: EARTH OR ROCK DRILLING; MINING
    • E21F: SAFETY DEVICES, TRANSPORT, FILLING-UP, RESCUE, VENTILATION, OR DRAINING IN OR OF MINES OR TUNNELS
    • E21F13/00: Transport specially adapted to underground conditions
    • E21F13/02: Transport of mined mineral in galleries
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast


Abstract

An apparatus for installation on a vehicle suitable for mining excavation including at least a first and second camera configured to capture digital images of at least a portion of the mining excavation; a data processor in communication with the first camera and the second camera; the data processor being operative for generating a digital three-dimensional (3D) representation of the portion of the mining excavation. The apparatus may further control at least one operation of the vehicle.

Description

MAPPING OF MINING EXCAVATIONS
TECHNICAL FIELD
[0001] The disclosure relates generally to underground mining operations, and more particularly to mapping of mining excavations.
BACKGROUND OF THE ART
[0002] The creation of photo-realistic three-dimensional (3D) models of observed scenes has been an active research topic for years. Such 3D models can be useful for both visualization and measurements in various applications.
Existing methods typically require specialized equipment including high-resolution cameras, camera mounts and customized lighting that must be deployed and used on site by trained personnel. Existing methods can also require significant computing time and power. Accordingly, existing methods used to create such models are typically conducted under controlled environmental conditions (e.g., lighting) and can be relatively difficult and expensive to conduct in underground environments.
[0003] Improvement is therefore desirable.
SUMMARY
[0004] The disclosure describes apparatus and methods for mapping mining excavations. In some examples, the apparatus disclosed herein may be suitable for installation on a vehicle and some of the methods disclosed herein may be conducted onboard such vehicle. For example, the apparatus and methods disclosed herein may be suitable for generating three-dimensional (3D) digital representations of mining excavations (including tunnels) and may be integrated in mining vehicles including those suitable for underground operations such as drilling machines (e.g., jumbo drills).
[0005] In one aspect, the disclosure describes an apparatus for installation on a vehicle where the apparatus may be useful for mapping a mining excavation and also controlling at least one operation of the vehicle. The apparatus comprises:
a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;

a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of the digital images captured by the first camera and the second camera;
generate signals representative of a digital 3D representation of the portion of the mining excavation based on the captured digital images; and generate signals useful in the operation of the vehicle based on at least one of the captured digital images.
[0006] In another aspect, the disclosure describes an apparatus for installation on a vehicle where the apparatus may be useful for mapping a mining excavation.
The apparatus comprises:
a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion, at least one of the first field of view and the second field of view being configured to include a portion of the vehicle; and a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of the digital images captured by the first camera and the second camera; and generate signals representative of a digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle included in the at least one first field of view and the second field of view.
[0007] In another aspect, the disclosure describes an apparatus for installation on a vehicle where the apparatus is useful for mapping a mining excavation.
The apparatus comprises:
a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;
a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of low-resolution digital images captured by the first camera and the second camera;
generate signals representative of a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images;
receive signals representative of a high-resolution image captured by at least one of the first camera, the second camera and a third camera, the high-resolution image being of the common portion of the first field of view and the second field of view; and transform the high-resolution digital image according to the 3D mesh.
[0008] In another aspect, the disclosure describes a vehicle for conducting drilling in an underground environment. The vehicle comprises:
a drilling implement;
a first camera and a second camera configured to capture digital images of at least a portion of the underground environment, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;
a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of images captured by the first camera and the second camera; and generate signals representative of a digital 3D representation of the portion of the underground environment based on the captured digital images.
[0009] In another aspect, the disclosure describes a method for mapping a mining excavation and also controlling at least one operation of a vehicle.
The method may be performed by a data processor and comprises:
receiving signals representative of at least two digital images of at least a common portion of the mining excavation;
generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images;
and generating signals useful in the at least one operation of the vehicle based on signals representative of at least one of the digital images.
[0010] In another aspect, the disclosure describes a method for mapping a mining excavation. The method may be performed by a data processor mounted to a vehicle. The method comprises:
receiving signals representative of at least two digital images of at least a common portion of the mining excavation, at least one of the digital images including a portion of the vehicle; and generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images, the digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle.
[0011] In another aspect, the disclosure describes a method for mapping a mining excavation. The method may be performed by a data processor mounted to a vehicle. The method comprises:
receiving signals representative of at least two low-resolution digital images of at least a common portion of the mining excavation;
generating signals representative of a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images;
receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and transforming the high-resolution digital image according to the 3D mesh.
[0012] In another aspect, the disclosure describes vehicles including drilling machines comprising apparatus disclosed herein. In a further aspect, the disclosure describes such vehicles onboard which methods disclosed herein may be conducted.
[0013] Further details of these and other aspects of the subject matter of this application will be apparent from the detailed description and drawings included below.
DESCRIPTION OF THE DRAWINGS
Reference is now made to the accompanying drawings, in which:
[0014] FIG. 1 shows a schematic representation of an apparatus for mapping of a mining excavation according to one embodiment;
[0015] FIG. 2 shows a schematic representation of the apparatus of FIG. 1 incorporated in a vehicle;
[0016] FIG. 3 shows a more detailed schematic representation of the apparatus of FIG. 1;
[0017] FIG. 4 shows a schematic side elevation view of a vehicle to which the apparatus of FIG. 1 may be mounted;
[0018] FIG. 5 shows a photograph of the vehicle of FIG. 4;
[0019] FIG. 6 shows a linear image taken from a camera of the apparatus of FIG. 1;
[0020] FIG. 7 shows a visual representation of a 3D mesh generated using the apparatus of FIG. 1;
[0021] FIG. 8 shows a visual representation of a transformed image generated using the linear image of FIG. 6, the 3D mesh of FIG. 7 and the apparatus of FIG. 1;
[0022] FIG. 9 shows a flow chart illustrating a method for mapping mining excavations;
[0023] FIG. 10 shows a flow chart illustrating a method for associating camera calibration parameters to a linear image;
[0024] FIG. 11 shows a flow chart illustrating a method for deskewing a linear image based on camera calibration parameters;
[0025] FIG. 12 shows a flow chart illustrating a method for applying a Gaussian blur to an image;
[0026] FIG. 13 shows a flow chart illustrating a method for producing a merged disparity image;
[0027] FIG. 14 shows a flow chart illustrating a method for producing a mesh and transforming an image based on the 3D mesh;
[0028] FIG. 15 shows a flow chart illustrating a method for generating a digital 3D representation of a mining excavation and generating signals useful in the operation of a vehicle;
[0029] FIG. 16 shows a flow chart illustrating a method for generating a digital 3D representation of a mining excavation based on digital images and excluding a portion of a vehicle captured in the digital images; and
[0030] FIG. 17 shows a flow chart illustrating a method for generating a digital 3D representation of a mining excavation based on low-resolution and high-resolution digital images.
DETAILED DESCRIPTION
[0031] Aspects of various embodiments are described through reference to the drawings.
[0032] Although terms such as "maximize", "minimize" and "optimize" may be used in the present disclosure, it should be understood that such terms may be used to refer to improvements, tuning and refinements which may not be strictly limited to maximal, minimal or optimal.
[0033] In some example embodiments, the present disclosure describes apparatus and methods for image/motion capture in an underground environment, or other harsh environments, such as where there may be poor lighting, high vibrations, dust and mud, limited power, limited space and rough handling of equipment. In particular, the disclosed apparatus and methods may involve the extrapolation, from video, still images or other electronic representations, of data regarding the position of one or more subjects in the images. Through image capture of subjects from two or more known vantage points, this data may be extrapolated into three-dimensional (3D) data (e.g., x, y and z co-ordinates).
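The patent does not give the geometry it uses to recover 3D co-ordinates from two vantage points; the idea can be illustrated with a textbook rectified-stereo sketch, in which all camera constants (focal length in pixels, baseline in metres) are assumed values, not taken from the disclosure:

```python
def triangulate(x_left, x_right, y, focal_px, baseline_m):
    """Recover (X, Y, Z) of a point seen by two horizontally offset
    cameras (rectified stereo pair).

    x_left, x_right: horizontal pixel positions of the same subject in
    the left and right images, measured from the optical centre.
    y: vertical pixel position (same in both images after rectification).
    focal_px: focal length expressed in pixels.
    baseline_m: distance between the two camera centres, in metres.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    z = focal_px * baseline_m / disparity   # depth from disparity
    x = x_left * z / focal_px               # lateral offset
    y3d = y * z / focal_px                  # vertical offset
    return (x, y3d, z)
```

A subject seen 10 pixels apart by two cameras half a metre apart with a 500-pixel focal length would thus sit 25 m ahead; points farther away produce smaller disparities.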
[0034] FIG. 1 is a schematic representation of an exemplary apparatus 10 that may be used for mapping of mining excavations such as underground environments including tunnels, for example. As explained further below, apparatus 10 may also be useful in controlling at least one operation of a vehicle to which apparatus 10 may be mounted.
[0035] Apparatus 10 may comprise one or more digital cameras 12 and data processing device(s) 14. For example, two or more cameras 12 (including multiple pairs of cameras 12) may be required so that two or more digital images of a portion of a mining excavation to be mapped may be acquired from different locations (vantage points) and stereo matching may be performed (e.g., stereophotogrammetry). Digital camera(s) 12 and data processing device(s) 14 may be coupled to permit digital images captured by camera(s) 12 to be received, stored and/or processed by data processing device(s) 14 in accordance with methods described herein. Digital camera(s) 12 and data processing device(s) 14 may also be configured to provide a live view of the portion of mining excavation.
Apparatus 10 may be configured to generate output(s) 16 which may, for example, be useful in generating digital 3D representations of existing mining excavations (e.g., tunnels) for mining operations. For example, apparatus 10 may be useful in generating three-dimensional (3D) geometric models of underground tunnels and/or 3D transformed images (e.g., 3D textured maps) useful in geological exploration and monitoring.
[0036] FIG. 2 shows that apparatus 10 may be mounted to or incorporated in a stationary or mobile piece of equipment such as, for example, vehicle 18.
Vehicle 18 may be suitable for traveling in a tunnel of a mine and may be configured to perform one or more mining-related tasks such as drilling. For example, vehicle 18 may comprise one or more drilling or other type(s) of implements related to mining operations. Vehicle 18 may be configured for use in vertical and/or horizontal excavations and/or tunnels, under shaft sinking galloways for example.
[0037] Data processing device(s) 14 may, for example, include a relatively low-power, portable and low footprint computer such as a MacTM Mini. The use of a low-power and portable system may be suitable for an underground environment, because of the limited space and power available. The use of a low-power and portable system may also be suitable for incorporation into vehicle 18.
Other conventional or other types of data processing device(s) 14 may also be suitable.
[0038] Apparatus 10 may comprise one or more input devices 17 such as a keyboard, mouse, touchpad, touch screen, switches, buttons and/or other type of input device(s) suitable for permitting data processing device(s) 14 to receive input from an operator. Apparatus 10 may comprise one or more display(s) 19 for displaying a graphic user interface with responsive objects for receiving input from an operator of apparatus 10. Display(s) 19 may also display information about the status/operation of apparatus 10 and/or the status/operation of vehicle 18.
For example, display(s) 19 may comprise a touch screen for receiving input from the operator. The graphic user interface shown on display(s) 19 may be used to start and/or control one or more operations of apparatus 10 and/or one or more operations of vehicle 18. For example, the graphic user interface may be used to set appropriate settings for camera(s) 12 such as, for example, exposure settings, shutter timing(s), gain(s) and alignment settings.
[0039] Camera(s) 12 may comprise relatively low-power YUV (i.e., black and white) or color (RGB) digital cameras. Camera(s) 12 may have a relatively low pixel density and small size (e.g., about 1 cubic inch) however it should be understood that other types of camera(s) 12 may be suitable. Camera(s) 12 may have relatively low pixel resolution, as a trade-off for lower processing times. For example, camera(s) 12 may have a resolution of 640x480 pixels or lower, and may have a power consumption of about 2W at 12VDC. For example, camera(s) 12 may include one or more Bonsai TM FireiTM digital cameras sold under the trade name UnibrainTM. Other suitable cameras may be used, and the power consumption and pixel resolution of the camera(s) 12 may be different for different applications and requirements. For example, the resolution described above may be suitable for motion capture and photography of a subject at a range of up to 30 feet.
Higher resolutions, such as up to 2448x2048, may be used for motion capture and/or still photography of a subject at a farther range, such as up to 140 feet. For example, camera(s) 12 may be configured to capture relatively low-resolution images (e.g., 640x480 pixels or lower) and/or high-resolution images (e.g., 1024x600 pixels or higher). Alternatively, one or more of camera(s) 12 may be configured to capture low-resolution digital images and one or more of camera(s) 12 may be configured to capture digital images of higher resolution. Outputs 16 may be stored within data processing device(s) 14 onboard vehicle 18 and/or exported from vehicle 18.
[0040] FIG. 3 shows a more detailed schematic representation of apparatus 10.
For example, data processing device 14 may comprise one or more data processors 20. Data processor 20 may comprise one or more digital computer(s) or other data processors. Data processing device(s) 14 may also comprise memory(ies) 22 and memory data devices or register(s) 24. Memory(ies) 22 may comprise any storage means (e.g. devices) suitable for retrievably storing machine-readable instructions executable by processor(s) 20. Memory(ies) 22 may be non-volatile. For example, memory(ies) 22 may include erasable programmable read only memory (EPROM) and/or flash memory. Such machine-readable instructions may cause processor(s) 20 to: receive signals 26 representative of digital images captured by camera(s) 12;
generate signals 16a representative of a digital 3D representation of the portion of a mining excavation based on the captured digital images; and generate signals 16b useful in the operation of vehicle 18 based on at least one of the captured digital images. Memory(ies) 22 may also comprise any data storage devices suitable for storing data received and/or generated by processor(s) 20, preferably retrievably.
For example, memory(ies) 22 may comprise one or more of any or all of erasable programmable read only memory(ies) (EPROM), flash memory(ies) or other electromagnetic media suitable for storing electronic data signals in volatile or non-volatile, non-transient form.
[0041] Data processing device(s) 14 may be configured to perform two or more functions. For example, while data processing device(s) 14 may be configured to:
(1) generate signals 16a representative of a digital 3D representation of the portion of a mining excavation based on the captured digital images; and (2) generate signals 16b useful in the operation of vehicle 18 based on at least one of the captured digital images, the generation of signals 16a and 16b may be conducted simultaneously or individually (i.e., separately). For example, in order to reduce the amount of processing time required from data processing device(s) 14, it may be desired to generate signals 16a and 16b individually instead of simultaneously. For example, apparatus 10 may be configured to receive input from an operator that is indicative of which of signals 16a and 16b are to be generated at a particular time.
For example, such input may be provided by the operator via display(s) 19 or any other suitable input device(s) 17 that may be coupled to data processing device(s) 14. The reduced processing time required from data processing device(s) 14 may facilitate the integration of data processing device(s) 14 on vehicle 18 and may permit the generation of signals 16a and 16b onboard of vehicle 18 and substantially in real-time. Accordingly, outputs representative of signals 16a and/or 16b may be presented to an operator of vehicle 18 via display(s) 19.
[0042] FIG. 4 shows a schematic side elevation view of an exemplary vehicle 18 to which apparatus 10 may be mounted. Vehicle 18 may be configured to conduct specific mining-related operations within a mining excavation and accordingly may comprise one or more implements (e.g., tools) for conducting such operations. For example, vehicle 18 may be configured to conduct drilling and may comprise one or more movable drill booms 28 comprising respective drills that may extend in front of and/or hang below vehicle 18. Such vehicle 18 may also be referred to as a mobile drilling machine also known as a "jumbo drill".
Boom(s) 28 may be maneuvered to position and orient the drills for creating blast holes in the rock in a tunnel or other type of mining excavation. The positioning and orientation of booms 28 may directly affect the accuracy of the positioning of the blast holes.
Most conventional jumbo drills typically rely on the judgment of the operator to position/orient boom(s) 28 and drill the blast holes based on visual inspection of boom(s) 28 from cab 30 of vehicle 18. This may result in inconsistent placement and orientation of blast holes. As explained further below, apparatus 10 may assist an operator of vehicle 18 in the movement and positioning of boom(s) 28. For example, apparatus 10 may be useful in assisting an operator of vehicle 18 in accordance with the teachings of PCT application No. PCT/CA2011/001105, filed September 30, 2011 and titled SYSTEMS AND METHODS FOR MOTION
CAPTURE IN AN UNDERGROUND ENVIRONMENT.
[0043] Vehicle 18 may comprise one or more housings 32 inside which data processing device(s) 14 and power source(s) 34 may be housed. Power source(s) 34 may serve to power data processing device(s) 14, camera(s) 12 and/or display(s) 19. Camera(s) 12 may be mounted to a front side of vehicle 18 and may be positioned and configured such that at least a portion of vehicle 18, such as drilling boom(s) 28 for example, may be within the field(s) of view of camera(s) 12. In case where multiple cameras 12 are used, the fields of view of two or more of such cameras 12 may have a common portion which may be used for stereo matching.
Camera(s) 12 may each have a wide-angle lens providing a field of view of, for example, 180 degrees and may provide wide coverage and adequate view of light reflections. For example, a common portion of a mining excavation and/or a common portion of boom(s) 28 may be within the field of view of a plurality of cameras 12. Camera(s) 12 may be disposed in suitable camera housing(s) 35 in order to protect camera(s) from hazards such as falling rock. In other examples, camera(s) 12 themselves may be relatively robust and resistant to damage, and camera housing(s) 35 may not be necessary. Display(s) 19 may be disposed so as to be visible to an operator inside cab 30. One or more light targets 36 may be disposed on boom(s) 28 and may be used to track the position/movement of boom(s) 28 by apparatus 10.
[0044] FIG. 5 shows a photograph of a front side of the jumbo drill (vehicle 18) of FIG. 4, in which apparatus 10 may be integrated. On the front side of vehicle 18, for example, three (or more) cameras 12 may be disposed and oriented towards a portion of the mining excavation (e.g., tunnel) ahead of vehicle 18.
Alternatively, cameras 12 may be positioned on one or more other sides of vehicle 18 to provide visibility of portions of the mining excavation in various directions relative to vehicle 18. As shown, cameras 12 may be positioned along a leading edge of cab 30 of vehicle 18, just under the roof. In other configurations, for example depending on the layout of cab 30, cameras 12 may be positioned at other locations on vehicle 18 such as, for example, above or in front of cab 30.
Cameras 12 may also be positioned at other suitable locations on vehicle 18. Cameras 12 may communicate with data processing device(s) 14 through wired or wireless communications. The cameras 12 may have at least partially overlapping fields of view. One or more lights 37 may be used to illuminate the portion of the mining excavation to be mapped during the acquisition of images. The lights may comprise one or more lights 37 provided on vehicle 18, one or more lights provided inside the mining cavity and/or ambient lights.
[0045] Lights 37 may comprise standard lights (e.g., headlights, work lights) that are typically (i.e., by default) provided on vehicle 18. In some examples, apparatus 10 may not require any additional custom/special lighting for operation.
For example, lights 37 and camera(s) 12 may generally face the same direction and the illumination provided by lights 37 may in some applications be sufficient for the operation of apparatus 10. In other examples additional lighting may be used to supplement lights 37 if required or desired.
[0046] During operation, apparatus 10 may be used for the generation of signals that may be useful in the operation of at least one aspect of vehicle 18. For example, apparatus 10 may be useful in control of the movement and position/orientation of drill boom(s) 28 of vehicle 18 shown in FIG. 4.
Apparatus 10 may be used to track movement(s) of drill boom(s) 28 via light targets 36. Two or more cameras 12 may capture digital images including light targets 36 and display(s) 19 may provide feedback to an operator of vehicle 18. For example, two or more cameras 12 may capture digital images of targets 36 from different positions (i.e., vantage points) and stereo matching may then be conducted by data processing device(s) 14 to determine the position of targets 36 and thereby determine the position and orientation of drill boom(s) 28.
[0047] An exemplary method for assisting an operator in the operation of at least one aspect of vehicle 18 may include: capturing one or more digital images from at least two cameras 12 where the cameras 12 are positioned at different reference locations to capture image data of at least two light targets 36;
determining any spots corresponding to light targets 36 in the digital images by blurring the image data to remove background noise and applying a set of criteria based on predetermined characteristics of light targets 36; calculating three-dimensional (3D) locations of the light targets 36 using stereo matching; and providing feedback to an operator relating to the position of light targets 36 (and consequently drill boom(s) 28).
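The blur-then-detect step described above can be sketched in pure Python. This is not the patent's implementation: the 3x3 kernel, the single intensity-threshold criterion and the threshold value are all illustrative stand-ins for the "set of criteria based on predetermined characteristics of light targets 36":

```python
def blur3x3(img):
    """3x3 Gaussian blur (kernel 1-2-1 / 16) on a 2D list of pixel
    intensities; borders are handled by clamping co-ordinates."""
    h, w = len(img), len(img[0])
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += k[dy + 1][dx + 1] * img[yy][xx]
            out[y][x] = acc / 16.0
    return out

def find_targets(img, threshold=250.0):
    """Blur to suppress background noise, then return the (x, y)
    pixels whose blurred intensity still exceeds the threshold;
    isolated noise pixels are attenuated by the blur and rejected."""
    blurred = blur3x3(img)
    return [(x, y)
            for y, row in enumerate(blurred)
            for x, v in enumerate(row)
            if v >= threshold]
```

Each detected spot from one camera would then be matched against the corresponding spot from a second camera for the stereo-matching step.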
[0048] Apparatus 10 may also be used for the generation of a digital 3D
representation (e.g., digital map) of at least a portion of a mining excavation such as an underground tunnel. As mentioned above, apparatus 10 may comprise at least one pair of cameras 12 positioned at known positions relative to each other and each having a field of view that is at least partially common (i.e., at least partially overlapping) so that the pair of cameras 12 can be used to acquire images that can be used as a basis for stereo matching. For example, as shown in FIG. 5, apparatus 10 may comprise three cameras 12. The use of more than two cameras 12 may provide redundancy in avoiding blind spots. For example, any two of the three cameras 12 may be used as a pair to acquire stereo images. Accordingly, for a total of three cameras 12 (e.g., left, right and center), three separate pairs of cameras 12 may be available. A blind spot may, for example, include any portion of vehicle 18 or any other object that does not form part of the mining cavity to be mapped.
For example, boom(s) 28 may be disposed within the field of view of one or more cameras 12 and obstruct the view of the mining excavation by one or more of cameras 12. Accordingly, depending on the position of boom(s) 28, different combinations of cameras 12 may be used to acquire stereo images suitable for mapping the tunnel ahead of vehicle 18 and with minimal obstruction. A pair of cameras 12 may be selected to minimize obstruction from boom(s) 28.
Alternatively, one or more additional pairs of cameras 12 may be used to capture portions of the mining cavity that may have been obstructed when photographed using a first pair of cameras 12. Accordingly, different pairs of cameras from different vantage points may be used to capture digital images of portions of the mining cavity and reduce blind spots caused by obstructions.
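The pair-selection idea can be sketched as follows. The patent does not specify a selection rule; this sketch assumes a hypothetical per-camera obstruction score (e.g., the fraction of the view blocked by the boom, as detected in each image) and simply picks the stereo pair with the least combined obstruction:

```python
from itertools import combinations

def best_stereo_pair(obstruction):
    """obstruction: dict mapping a camera name (e.g. "left", "center",
    "right") to the fraction of its view blocked, in [0, 1].
    Returns the pair of cameras whose combined obstruction is smallest,
    i.e. the pair most likely to see the excavation unoccluded."""
    pairs = combinations(sorted(obstruction), 2)
    return min(pairs, key=lambda p: obstruction[p[0]] + obstruction[p[1]])
```

With three cameras this evaluates the three available pairs; as the boom moves, re-evaluating the scores would switch the active pair automatically.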
[0049] One or more pairs of cameras 12 may be used to capture digital images of the mining excavation (e.g. underground tunnel) in stereo and apparatus 10 may use such images to generate a 3D mesh that may be useful in the geometric modeling of the mining excavation. In addition or alternatively, apparatus 10 may be used to generate digital images that have been transformed according to such mesh to provide a 3D textured map and assist in geological exploration and monitoring and geotechnical ground support design.
[0050] In one example, a first camera 12 and a second camera 12 may be configured to capture digital images of at least a portion of the mining excavation.
First camera 12 may have a first field of view and second camera 12 may have a second field of view. The first field of view and the second field of view may have a common portion. At least one of the first field of view and the second field of view may be configured to include a portion of vehicle 18 such as boom(s) 28. Data processor(s) 20 may be in communication with first camera 12 and second camera 12. Data processor(s) 20 may be responsive to machine-readable instructions causing data processor(s) 20 to: (1) receive signals 26 representative of digital images captured by the first camera 12 and the second camera 12; and (2) generate signals 16a representative of a digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle included in at least one of the first field of view and the second field of view.
[0051] FIG. 6 shows an example of a linear (e.g., 2D) digital image 38 taken using one of cameras 12 on vehicle 18. Linear image 38 shows a portion of a tunnel ahead of vehicle 18. Boom(s) 28 and/or other portion(s) of vehicle 18 may also be visible in linear image(s) 38.
[0052] FIG. 7 shows an example of 3D mesh(es) 40 (e.g., depth map) of the portion of tunnel shown in linear image(s) 38. 3D mesh(es) 40 may be generated based on 3D information extracted based on at least two linear images 38 of the same portion of tunnel taken from two different cameras 12 at different locations (i.e. from different vantage points) according to the methods described below.

3D mesh(es) 40 may be in a format (e.g., DXF) suitable for importing into a computer aided design (CAD) system and suitable for geometric modeling of the portion(s) of tunnel.
[0053] FIG. 8 shows an example of a transformed image 42 (e.g., 3D
textured map) generated based on 3D information extracted from at least two linear images 38 taken in stereo. Transformed image(s) 42 may comprise a re-positioning of individual pixels of linear image(s) 38 based on the 3D information extracted from the two linear images 38 taken in stereo. Specifically, pixels of linear image(s) 38 may have been repositioned at their respective 3D positions in a digital 3D
environment to produce transformed image(s) 42, also known as texturised images. Accordingly, transformed image(s) 42 may comprise features shown in linear image(s) 38 positioned in a 3D environment that is representative of their actual positions inside the portion of tunnel. Such transformed image(s) 42 may be useful for geological exploration and monitoring and geotechnical ground support design. For example, based on the colors, shades, differences in brightness, and/or other features of transformed image(s) 42, the ore (e.g., geological structure) visible on the internal surface of the portion of tunnel mapped may also be visible at corresponding 3D locations in the transformed image(s) 42 and, similarly, rock structures existing on the internal surface of the portion of the tunnel mapped may also be visible at corresponding 3D locations in the transformed image(s) 42.
[0054] FIG. 9 is a flowchart illustrating exemplary method(s) 800 that may be performed using apparatus 10 to generate a digital 3D representation of a portion of mining excavation. For example, method 800 may comprise: acquiring at least one digital linear image 38 of the portion of mining excavation to be mapped from at least two cameras 12 at different locations (i.e. taken in stereo from different vantage points) on vehicle 18 (see block 802); based on the two linear images 38, extracting 3D information of the portion of mining excavation using stereo matching (see block 804); based on the 3D information, generating 3D mesh(es) 40 representative of the geometry(ies) of the portion of mining excavation shown in the linear images 38 (see block 806); and repositioning the pixels of at least one of the linear image(s) 38 according to the 3D information and/or the 3D mesh(es) 40 (e.g.
projecting the pixels of the linear image(s) 38 onto 3D mesh(es) 40) to produce transformed image(s) 42 (see block 808). The linear images 38 used may be of the same (e.g., low or high) resolution. Alternatively, linear images 38 used to generate 3D mesh(es) 40 may be of low-resolution and linear image(s) 38 used in block 808 may be different linear image(s) 38 and may be of higher resolution. In the interest of reducing computing time required from data processing device(s) 14, it may be desired in some applications to generate 3D mesh(es) 40 using image(s) 38 of lower resolution.
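The core of blocks 804 and 808 — converting a pixel's stereo disparity into a 3D location — can be sketched as follows (an illustrative example only; the baseline and focal-length parameters are assumed calibration values, not figures from this disclosure):

```cpp
#include <cassert>

struct Point3D { double x, y, z; };

// Hedged sketch of disparity-to-depth conversion: depth is inversely
// proportional to disparity, scaled by the camera baseline and focal
// length; the lateral offsets follow from the pinhole camera model.
// "cx"/"cy" are the assumed principal-point coordinates in pixels.
Point3D pixelTo3D(double col, double row, double disparityPx,
                  double cx, double cy, double baselineMm, double focalPx)
{
    const double z = baselineMm * focalPx / disparityPx; // depth from disparity
    const double x = (col - cx) * z / focalPx;           // horizontal offset
    const double y = (row - cy) * z / focalPx;           // vertical offset
    return {x, y, z};
}
```

Repositioning every pixel of linear image(s) 38 through such a mapping is what produces transformed image(s) 42.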
[0055] Depending on the types of cameras 12 and geometry of the mining excavation, apparatus 10 may be used to map a portion of mining excavation of up to about 140 feet ahead of vehicle 18. Different portions of mining excavation may be mapped separately and sequentially as vehicle 18 advances through the mining excavation and the separately acquired maps may later be assembled in software designed for viewing and editing the generated 3D sections to produce a map of the entire mining excavation or at least of a larger portion of the mining excavation that is of interest. A suitable viewer may be used to allow a user to digitally navigate through the mining excavation and view the inside of the mining excavation in any direction using display 19 inside cab 30 or on another display (not shown) remote from vehicle 18. The viewer may also allow for the examination of transformed image(s) 42 against the walls of the portion of mining excavation that is mapped.
The mapping of portions of mining excavation may be conducted in relation to one or more known reference points to permit the digital assembly of the portions of mining excavation that are mapped. For example, one or more pre-established survey points located in the tunnel and/or mine may serve as one or more common reference points for the purpose of assembling the 3D information, mesh(es) 40 and/or transformed image(s) 42 relative to each other in a digital 3D
environment.
The viewer may also be used to extrapolate features such as geology outlines or geotechnical rock structures and to connect these features across adjacent 3D sections of the tunnel for later inclusion in CAD mine models.
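The alignment of separately mapped sections to a common survey point may be sketched as follows (a minimal translation-only illustration; structure and function names are assumptions, and rotation alignment, which would be needed if the vehicle heading differs between captures, is omitted):

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// Hedged sketch: each mapped section records where a known survey marker
// appears in its local coordinates. Translating every vertex by the
// difference between the global and local marker positions places all
// sections consistently in one shared digital 3D environment.
void alignToSurveyPoint(std::vector<Vec3>& sectionVertices,
                        const Vec3& markerLocal, const Vec3& markerGlobal)
{
    const Vec3 d{markerGlobal.x - markerLocal.x,
                 markerGlobal.y - markerLocal.y,
                 markerGlobal.z - markerLocal.z};
    for (Vec3& v : sectionVertices) {
        v.x += d.x;
        v.y += d.y;
        v.z += d.z;
    }
}
```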
[0056] The acquisition of at least two linear digital images 38 may be started by first making the proper settings to cameras 12 as explained above and calibrating cameras 12. For example, it may be necessary or desired that cameras 12 be properly aligned (e.g., at least two of cameras 12 pointing in the same direction for the purpose of stereo imaging) prior to the image capture. Aperture settings for cameras 12 may, in some applications, be selected to maximize light while allowing a shutter speed that reduces the effects of vibration on the image(s) captured. The adjustment of camera settings may permit cameras 12 to pick up the reflective lighting of objects in the camera's view (face, walls, back, floor, equipment) and, using the known positions of cameras 12 and calibration settings, pixel locations for all surfaces may be triangulated (e.g., stereo matched) as described further below. The wide view allows for shadows to be cast to the different cameras, which can then be used to determine structural and geological features.
One or more of camera(s) 12 may be sensitive to visible light and/or to light in the infrared and/or gamma range depending on the application. Additional cameras may be included to provide additional coverage of the mining excavation (e.g., tunnel) to avoid or reduce the number and/or size of blind spots.
[0057] Once at least two linear images 38 of a portion of mining excavation of interest have been captured from different cameras 12 at different known locations on vehicle 18, the linear images 38 (i.e. stereo images) are processed to extract 3D information representative of the actual geometry of the mining excavation.
For example, 3D information may be extracted by using a disparity map generated in a process of stereo matching of linear images 38. It may also be necessary or desirable that a deskewing operation be performed on the linear images 38 prior to stereo matching. The deskewing operation may use stored calibration settings/parameters of cameras 12.
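The stereo-matching idea behind the disparity map may be sketched as follows (an illustrative example, not the code of this disclosure; the function name, window size and scoring are assumptions — for a pixel in one image, a small window is slid along the same row of the other rectified image and the horizontal shift with the lowest sum of absolute differences is taken as the disparity):

```cpp
#include <cstddef>
#include <vector>

// Hedged sketch of 1D block matching on a pair of rectified scanlines.
// "window" is the half-width of the comparison window; "maxDisparity"
// bounds the search range.
int bestDisparity(const std::vector<int>& leftRow,
                  const std::vector<int>& rightRow,
                  std::size_t col, int window, int maxDisparity)
{
    int best = 0;
    long bestCost = -1;
    for (int d = 0; d <= maxDisparity; ++d) {
        if (col < static_cast<std::size_t>(d + window))
            break; // window would run off the left edge
        long cost = 0;
        for (int k = -window; k <= window; ++k) {
            const long diff = static_cast<long>(leftRow[col + k]) -
                              static_cast<long>(rightRow[col + k - d]);
            cost += diff < 0 ? -diff : diff; // sum of absolute differences
        }
        if (bestCost < 0 || cost < bestCost) {
            bestCost = cost;
            best = d;
        }
    }
    return best;
}
```

Repeating this for every pixel yields a disparity image of the kind merged and triangulated in the excerpts below.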
[0058] Using the 3D information extracted based on linear images 38 and raw data from the linear images 38, one or more transformed images 42 may be generated by shifting the pixels of the linear images 38 to their corresponding digital locations in space (corresponding to their actual locations along the walls of the tunnel). As mentioned above, the transformed images 42 may be useful in geological exploration and monitoring.
[0059] Alternatively or in addition, the 3D information extracted from linear images 38 may be used to generate 3D mesh(es) 40 representative of the geometry of the portion of mining excavation. 3D mesh(es) 40 may, for example, be in a digital format such as Drawing Exchange Format (DXF) suitable for importing into a Computer Aided Design (CAD) system.
[0060] Apparatus 10 may also be configured to remove or omit portions of linear images 38 that are of no interest from being included into mesh 40 and/or transformed image(s) 42. For example, vehicle 18 may comprise booms 28 or other implement or equipment related to mining operations that may be captured in linear images 38 but that may not be part of the geometry of the mining excavation and/or that may be of no geological relevance. Accordingly, suitable filtering may be applied during processing in order to omit such features from 3D mesh(es) 40 and/or transformed image(s) 42. Consequently, mesh(es) 40 and transformed image(s) 42 may include one or more holes 44 (e.g. blind spots) where booms 28 and/or other features may have been omitted. The omission or filtering out of known and irrelevant features captured by cameras 12 may be done by simply omitting information and/or pixels that are at distances or regions corresponding to those of the known and irrelevant features. For example, if the position of each boom 28 is known, the corresponding area(s) of linear images 38 may then be ignored for the purpose of generating mesh(es) 40 and/or transformed image(s) 42 and hence form holes 44 as shown in FIGS. 5 and 6.
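The filtering just described may be sketched as follows (a minimal illustration under assumed names; the boom region is represented here as a hypothetical bounding rectangle, whereas a real implementation might use the boom's reported joint positions):

```cpp
#include <vector>

struct Rect { int x0, y0, x1, y1; }; // half-open pixel bounds

// Hedged sketch: zero out disparity values inside a known obstruction
// region so those pixels are ignored when mesh(es) 40 and transformed
// image(s) 42 are generated, leaving holes 44 in the output.
void maskKnownObstruction(std::vector<unsigned char>& disparity,
                          int width, const Rect& boom)
{
    const int height = static_cast<int>(disparity.size()) / width;
    for (int j = 0; j < height; ++j)
        for (int i = 0; i < width; ++i)
            if (i >= boom.x0 && i < boom.x1 && j >= boom.y0 && j < boom.y1)
                disparity[i + width * j] = 0; // zero disparity = omitted pixel
}
```

This mirrors the convention in the merging code below, where a zero (black) disparity value marks a blank pixel.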
[0061] The relevant information that may be missing due to holes 44 may subsequently be obtained by repeating the above process(es) using a different combination of cameras 12 positioned at different locations on vehicle 18 and having visibility of portion(s) of the mining excavation that were hidden by booms 28 in the first pair of linear images 38. Alternatively, the missing information could also subsequently be obtained using the same two cameras 12 but after having moved (e.g., advanced) vehicle 18 and/or repositioned booms 28 in order to provide visibility of the missing portions of the mining excavation.
[0062] The exemplary portions of computer program code below represent detailed embodiments of various steps of processing that may be executed either by data processing device(s) 14 or by some other data processing means external to apparatus 10. The portions of computer program code are presented for illustrative purposes only and are written in a combination of C++ and Objective-C. The portions of computer program code below may be suitable for execution on a Mac Mini. One of ordinary skill in the art will appreciate that other suitable programming language(s) and/or other algorithms may also be suitable.
[0063] FIG. 10 shows a flow chart representative of a function performed based on the exemplary portion of code below. The exemplary portion of code below may be used for reading calibration data of cameras 12 and populating variables that may be used later to extract 3D information from a disparity image (e.g., stereo matching) created from the at least two stereo linear images 38. For example, the portion of code below may associate the calibration data of cameras 12 with respective linear images 38 that have been acquired. Accordingly, inputs to the function may include an image container (e.g., data structure) and camera calibration parameters.
An output of the function may include a rectified image container.
-(TMBMPFile*)rectifiedForCalibration:(TMCalibrationParameters*)params
                            inWindow:(CGRect)window
                         outputWidth:(NSInteger)outputWidth
{
    const NSInteger outputHeight = (CGFloat)outputWidth *
        window.size.height / window.size.width;
    TMBMPFile *ret = [[TMBMPFile alloc] initWithWidth:outputWidth
                                               height:outputHeight];
    bmppixel *tdata = (bmppixel*)ret.data;
    [self calibratedImageInto:tdata width:outputWidth height:outputHeight
               forCalibration:params window:window];
    return ret.autorelease;
}
[0064] FIG. 11 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may represent a function used for deskewing the at least two linear images 38 acquired. For example, it may be desirable to deskew each linear image 38 acquired in order to counteract any "fish-eye" effect introduced by the optics (e.g. wide-angle lens(es)) of cameras 12 in order to produce "flat" images that can be rectified and compared on a common plane. Deskewing may also compensate for a difference between actual and optical centres of the acquired linear images 38. The function below may take in a calibrated image (all images captured from cameras 12 are calibrated) and then apply the calibration values to deskew the image to give a true image (fish-eye effect removed). Accordingly, inputs to the function may include a container to hold the image information, width and height of a block to be calibrated, calibration parameters and a "world" container holding image information. An output of the function may include a filled image container (of a deskewed image).
-(void)calibratedImageInto:(bmppixel*)imageDestination
                     width:(NSInteger)blockWidth
                    height:(NSInteger)blockHeight
            forCalibration:(TMCalibrationParameters*)params
                    window:(CGRect)viewWindow
{
    const bmppixel black = {255,20,20,20}, white = {255,235,235,235};
    const CGFloat fwidth = (CGFloat)self.width, fheight = (CGFloat)self.height,
        fblockWidth = (CGFloat)blockWidth, fblockHeight = (CGFloat)blockHeight;
    for (int i=0; i<blockWidth; ++i)
        for (int j=0; j<blockHeight; ++j) {
            // the half is because we actually want to consider the centre
            // of the pixel, not the minx, miny corner
            const CGFloat tanAlpha = viewWindow.origin.x +
                    viewWindow.size.width * ((CGFloat)i+0.5)/fblockWidth,
                tanGamma = viewWindow.origin.y +
                    viewWindow.size.height * ((CGFloat)j+0.5)/fblockHeight;
            const NSPoint pixelPoint =
                [params convertTanToPixel:NSMakePoint(tanAlpha, tanGamma)];
            if (pixelPoint.x>=0 && pixelPoint.x <= fwidth &&
                pixelPoint.y>=0 && pixelPoint.y <= fheight) {
                // user is still on the original frame
                imageDestination[i + blockWidth * j] =
                    [self generalPixelAtX:pixelPoint.x Y:pixelPoint.y];
            } else {
                // checkerboard to let the user know that they have strayed
                // off the original frame
                if (i % 20 >= 10)
                    if (j % 20 >= 10)
                        imageDestination[i + blockWidth * j] = black;
                    else
                        imageDestination[i + blockWidth * j] = white;
                else if (j % 20 >= 10)
                    imageDestination[i + blockWidth * j] = white;
                else
                    imageDestination[i + blockWidth * j] = black;
            }
        }
}
[0065] FIG. 12 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may be used for applying a Gaussian blur to the deskewed images and saving the blurred images as rectified images in preparation for stereo matching. The application of the Gaussian blur may effectively remove jagged edges in the images and provide a smoothing effect. The application of the Gaussian blur may be desirable and beneficial to subsequent stereo matching. Accordingly, an input to the function may include a radius of the blur to apply and an output of the function may include image data that has been blurred (smoothed).
-(void)gaussianBlur:(const NSInteger)radius
{
    const NSInteger kernelR = radius * 2, kernelW = 2 * kernelR + 1;
    CGFloat fradius = (CGFloat)radius,
        *kernel = malloc(kernelW*kernelW*sizeof(CGFloat));
    double sum = 0;
    for (int i=0; i<kernelW; ++i)
        for (int j=0; j<kernelW; ++j) {
            const int dx = i - (int)kernelR, dy = j - (int)kernelR,
                r2 = dx*dx + dy*dy;
            const double val = (CGFloat)r2/fradius/fradius, eval = exp(-val);
            kernel[i+j*kernelW] = eval;
            sum += eval;
        }
    for (int i=0; i<kernelW; ++i)
        for (int j=0; j<kernelW; ++j)
            kernel[i+j*kernelW] /= sum;
    for (NSInteger i=kernelR; i<width-kernelR-1; ++i)
        for (NSInteger j=kernelR; j<height-kernelR-1; ++j) {
            CGFloat asum = 0;
            for (NSInteger ii=-kernelR; ii<=kernelR; ++ii)
                for (NSInteger jj=-kernelR; jj<=kernelR; ++jj)
                    asum += (CGFloat)data[ii+i + width * (jj + j)] *
                        kernel[ii+kernelR + (jj+kernelR) * kernelW];
            data[i + width * j] = (unsigned char)asum;
        }
    free(kernel);
}
[0066] Then, the rectified images may be subjected to a stereo matching process to create a disparity image from the rectified images.
[0067] FIG. 13 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may be used for stereo matching of the rectified images. This process may take the two rectified (e.g. deskewed and blurred) images and compare areas of the two images to find commonalities in them. Once the commonalities are identified, the stereo matching process may create a new (i.e. disparity) image out of the two rectified images by shifting data based on the identified commonalities and merge them into the new disparity image.
[0068] In the event where more than one pair (e.g. three pairs) of cameras 12 are used, a separate disparity image may be created for each pair of cameras and then the separate disparity images may also be compared to each other. For example, the separate disparity images obtained based on the multiple pairs of cameras 12 may subsequently be merged into a single disparity image. The process of merging the separate disparity images may include looping through the disparity images and matching pixels between the disparity images on a one-by-one basis. For example, the pixel values from a first disparity image may be compared to those of a second disparity image and, in the event where there is a value in the first disparity image and not in the second disparity image, the value of the pixel in the first disparity image may be used, or vice versa. However, if a pixel value exists in both disparity images but there is a discrepancy between the two values, then the average value may be used in the merged disparity image.
Accordingly, an input to the function may include two or more disparity images and an output to the function may include a merged disparity image.

if ([stereoParameters.disparityUse isEqualToString:@"merger"])
{   //merges disparity files
    int leftVal = 0, rightVal = 0, index = 0;
    for (int i = 0; i < dispfileLC.width; ++i)
        for (int j = 0; j < dispfileLC.height; ++j) {
            leftVal = (int)dispfileLC.data[i + dispfileLC.width * j];
            rightVal = (int)dispfileCR.data[i + dispfileCR.width * j];
            index = i + dispfileLC.width * j;
            //checking the pixel values for blank (black) and a value.
            //if 1 disp image is blank and the other is not then it uses the 1 that is not blank
            //otherwise if both are blank the left most image value is used (blank)
            //and finally if both contain a value then the average value of the pixels is used
            if (leftVal == 0 && rightVal != 0)
                pgmFile.data[index] = dispfileCR.data[index];
            else if (leftVal != 0 && rightVal == 0)
                pgmFile.data[index] = dispfileLC.data[index];
            else if (leftVal == 0 && rightVal == 0)
                pgmFile.data[index] = dispfileLC.data[index];
            else
                pgmFile.data[index] =
                    (unsigned char)((int)((leftVal + rightVal)/2));
        }
}
[0069] FIG. 14 shows a flow chart representative of a function performed based on the exemplary portion of code below. The portion of code below may be used for extracting 3D information from the disparity image(s) created above. This may be done by looping through the disparity image(s) and using the calibration information to calculate the 3D location of each pixel. Once the 3D location of each pixel has been determined, mesh(es) 40 may be created according to a desired tolerance.
For example, mesh(es) 40 may comprise a triangular mesh produced based on the newly calculated 3D points of each pixel. A suitable tolerance may be specified to provide a relatively smooth representation of the internal surface of the mining excavation. For example, a grid interval (e.g. longitudinal slice of tunnel) of around 50 inches and 8 segments (e.g. triangular elements) per interval may be suitable but a finer or coarser tolerance may be used as needed depending on the application and the technical capabilities of the equipment used (e.g. resolution of cameras 12). Based on the 3D information of each pixel of the disparity image, transformed image(s) 42 (e.g., 3D textured map) may then be produced by repositioning each pixel of linear image(s) 38 in a digital 3D environment at its correct position (i.e.
digitally repositioning each pixel at its correct position against the inside wall of the tunnel).
[0070] Accordingly, inputs to the function below may include a disparity image (PGM file), a digital image (BMP file) (e.g., low-resolution or high-resolution, color) of the mining excavation and calibration results for the disparity image. An output of the function may include a 3D mesh 40 with a colored texture mapped onto 3D
mesh 40 (PLY file).
-(id)initWithPGMFile:(TMPGMFile*)pgmFile
         calibration:(TMRectifiedCalibration*)calib
         meshColours:(TMBMPFile*)colourfile
{
    self = [self init];
    if (self) {
        const NSInteger width = pgmFile.width, height = pgmFile.height;
        const unsigned char *data = pgmFile.data;
        bmppixel *colourData = colourfile.data;
        if (colourfile)
            colouredVertices = YES;
        else
            colouredVertices = NO;
        // construct array of vertices
        for (NSInteger j=0; j<height; ++j) {
            for (NSInteger i=0; i<width; ++i) {
                NSInteger disparity = data[i + width * j];
                if (disparity) {
                    const CGFloat tanAlpha = calib.viewport.size.width *
                            (CGFloat)i / (CGFloat)width +
                            calib.viewport.origin.x,
                        tanGamma = calib.viewport.size.height *
                            (CGFloat)j / (CGFloat)height +
                            calib.viewport.origin.y,
                        dtanAlpha = calib.viewport.size.width *
                            (CGFloat)disparity / (CGFloat)width,
                        z = 28.75 / dtanAlpha,
                        x = tanAlpha * z,
                        y = tanGamma * z;
                    TMPLYVertex *vert = nil;
                    if (colouredVertices) {
                        const bmppixel pix = colourData[i + width*j];
                        vert = [[TMPLYVertex alloc] initWithX:x y:y z:z
                                    r:pix.r g:pix.g b:pix.b];
                    } else {
                        vert = [[TMPLYVertex alloc] initWithX:x y:y z:z];
                    }
                    [vertices addObject:vert];
                    [vert release];
                } else {
                    [vertices addObject:[NSNull null]];
                }
            }
        }
        // construct triangles
        for (NSInteger i=0; i<width-1; ++i)
            for (NSInteger j=0; j<height-1; ++j) {
                NSInteger bl = i + j*width, br = i+1 + j*width,
                    tr = i+1 + (j+1)*width, tl = i + (j+1)*width;
                TMPLYVertex *vbl = [vertices objectAtIndex:bl],
                    *vbr = [vertices objectAtIndex:br],
                    *vtr = [vertices objectAtIndex:tr],
                    *vtl = [vertices objectAtIndex:tl];
                // clockwise oriented triangles
                if (![vbl isEqual:[NSNull null]]
                    && ![vbr isEqual:[NSNull null]]
                    && ![vtl isEqual:[NSNull null]]) {
                    TMPLYTriangle *tri = [[TMPLYTriangle alloc]
                        initWithVertexA:br B:bl C:tl];
                    [triangles addObject:tri];
                    [tri release];
                }
                if (![vtr isEqual:[NSNull null]]
                    && ![vbr isEqual:[NSNull null]]
                    && ![vtl isEqual:[NSNull null]]) {
                    TMPLYTriangle *tri = [[TMPLYTriangle alloc]
                        initWithVertexA:tl B:tr C:br];
                    [triangles addObject:tri];
                    [tri release];
                }
            }
    }
    return self;
}
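The PLY output mentioned above may be sketched as a minimal ASCII PLY writer (an illustrative example only, not the code of this disclosure; the uncoloured variant is shown and all structure names are assumptions):

```cpp
#include <sstream>
#include <string>
#include <vector>

struct PlyVertex { float x, y, z; };
struct PlyTriangle { int a, b, c; }; // vertex indices, clockwise

// Hedged sketch: serialize vertices and triangles, such as those built by
// a function like the one above, into the ASCII PLY format (a header
// declaring the element counts and properties, followed by one vertex or
// face per line).
std::string plyText(const std::vector<PlyVertex>& verts,
                    const std::vector<PlyTriangle>& tris)
{
    std::ostringstream out;
    out << "ply\nformat ascii 1.0\n"
        << "element vertex " << verts.size() << "\n"
        << "property float x\nproperty float y\nproperty float z\n"
        << "element face " << tris.size() << "\n"
        << "property list uchar int vertex_indices\nend_header\n";
    for (const PlyVertex& v : verts)
        out << v.x << " " << v.y << " " << v.z << "\n";
    for (const PlyTriangle& t : tris)
        out << "3 " << t.a << " " << t.b << " " << t.c << "\n"; // 3 = triangle
    return out.str();
}
```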
[0071] FIG. 15 shows a flow chart illustrating method 1500 for mapping a mining excavation and also controlling at least one operation of vehicle 18.
Method 1500 may be conducted by apparatus 10 and may, for example, include:
receiving signals representative of at least two digital images of at least a common portion of the mining excavation (see block 1502);
generating signals 16a (see FIG. 3) representative of a digital 3D
representation of the portion of the mining excavation based on the signals representative of the digital images (see block 1504); and generating signals 16b (see FIG. 3) useful in the at least one operation of vehicle 18 based on signals representative of at least one of the digital images (see block 1506).
[0072] As explained above, the generation of signals representative of the digital 3D representation of the portion of the mining excavation (block 1504) and the generation of signals useful in the at least one operation of vehicle 18 (block 1506) may be conducted individually. At least one of the digital images may include a portion of vehicle 18, such as boom(s) 28, and the digital 3D representation of the portion of the mining excavation may exclude the portion of vehicle 18. Also, the signals 16b useful in the at least one operation of vehicle 18 may be useful in controlling boom(s) 28. In method 1500, the digital 3D representation of the portion of the mining excavation may comprise 3D mesh(es) 40. Alternatively or in addition, the digital 3D representation of the portion of the mining excavation may comprise at least one of the digital images transformed according to 3D mesh(es) 40 to form transformed image 42.
[0073] As mentioned above, the at least two digital images may be low-resolution digital images and the digital 3D representation of the portion of the mining excavation may comprise 3D mesh(es) 40 based on the low-resolution digital images. The image used to produce transformed image 42 may be one of the two low-resolution images or may be a separate high-resolution image obtained from one of cameras 12. In any case, the digital 3D representation may be generated based on stereo matching of at least two digital images as described above.
[0074] FIG. 16 shows a flow chart illustrating method 1600 for generating a digital 3D representation of a mining excavation based on digital images and excluding a portion of vehicle 18 captured in the digital images. Method 1600 may, for example, include:
receiving signals representative of at least two digital images of at least a common portion of the mining excavation, at least one of the digital images including a portion of vehicle 18 (see block 1602); and generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images, the digital 3D representation of the portion of the mining excavation excluding the portion of vehicle 18 (see block 1604).
[0075] FIG. 17 shows a flow chart illustrating method 1700 for generating a digital 3D representation of a mining excavation based on low-resolution and high-resolution digital images. Method 1700 may, for example, include:
receiving signals representative of at least two low-resolution digital images of at least a common portion of the mining excavation (see block 1702);

generating signals representative of digital 3D mesh(es) 40 of at least a portion of the mining excavation based on the low-resolution digital images (see block 1704);
receiving signals representative of a high-resolution digital image of the common portion of the mining excavation (see block 1706); and transforming the high-resolution digital image according to the 3D mesh (see block 1708).
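The combination of a coarse mesh with a higher-resolution texture in method 1700 may be sketched as follows (an illustrative nearest-neighbour lookup only; the names and the sampling scheme are assumptions — a real implementation might interpolate across the mesh instead):

```cpp
#include <vector>

// Hedged sketch of block 1708: for each pixel of the high-resolution
// image, sample the depth from the corresponding cell of the coarse
// (low-resolution) depth grid derived from the 3D mesh, so the
// high-resolution pixel can be placed in the digital 3D environment.
double depthForHiResPixel(const std::vector<double>& loResDepth,
                          int loW, int loH, int hiW, int hiH,
                          int hiX, int hiY)
{
    int loX = hiX * loW / hiW; // nearest-neighbour mapping of columns
    int loY = hiY * loH / hiH; // nearest-neighbour mapping of rows
    if (loX >= loW) loX = loW - 1;
    if (loY >= loH) loY = loH - 1;
    return loResDepth[loX + loW * loY];
}
```

Generating the mesh from low-resolution images while texturing with a high-resolution image keeps the expensive stereo matching cheap, as noted earlier in the disclosure.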
[0076] Methods 1500, 1600 and 1700 may be performed by apparatus 10 in accordance with and in combination with the various aspects of the present disclosure.
[0077] The above apparatus and methods may be used in underground mining or other applications. For example, the disclosed apparatus and methods may be used for: orientating and setting up of production drills to help improve accuracy and consistency of target achievement of the planned drill layout; tracking haulage trucks and/or loaders (also known as scoops and scoop trams), and/or their components (e.g., dump boxes, load and dump buckets), during transit and/or operation; tracking position and movement of components such as chutes, skips and/or load and dump pockets in the shaft process; tracking of robotic machinery to provide positional data useable by the machinery to move itself to specific locations and/or orientations; tracking the motion of the booms of the jumbo drilling unit for purposes other than drilling accuracy, for example to help ensure the non-conflict of the booms or to help optimize the utilization of the location of the booms for the work being completed (e.g., bolting, shotcreting, screening, or material handling).
[0078] The above description is meant to be exemplary only, and one skilled in the relevant arts will recognize that changes may be made to the embodiments described without departing from the scope of the invention disclosed. For example, the blocks and/or operations in the flowcharts and drawings described herein are for purposes of example only. There may be many variations to these blocks and/or operations without departing from the teachings of the present disclosure. For instance, the blocks may be performed in a differing order, or blocks may be added, deleted, or modified. The present disclosure may be embodied in other specific forms without departing from the subject matter of the claims. Also, one skilled in the relevant arts will appreciate that while the apparatus and devices disclosed and shown herein may comprise a specific number of elements/components, the apparatus and devices could be modified to include additional or fewer of such elements/components. For example, while any of the elements/components disclosed may be referenced as being singular, it is understood that the embodiments disclosed herein could be modified to include a plurality of such elements/components. The present disclosure is also intended to cover and embrace all suitable changes in technology. Modifications which fall within the scope of the present invention will be apparent to those skilled in the art, in light of a review of this disclosure, and such modifications are intended to fall within the appended claims.

Claims (55)

WHAT IS CLAIMED IS:
1. An apparatus for installation on a vehicle, mapping a mining excavation and also controlling at least one operation of the vehicle, the apparatus comprising:
a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;
a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of the digital images captured by the first camera and the second camera;
generate signals representative of a digital 3D representation of the portion of the mining excavation based on the captured digital images, the digital 3D representation of the portion of the mining excavation including a 3D mesh; and generate signals for the operation of the vehicle based on at least one of the captured digital images.
2. The apparatus as defined in claim 1, wherein at least one of the first field of view and the second field of view includes a portion of the vehicle and the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the vehicle from the digital 3D representation of the portion of the mining excavation.
3. The apparatus as defined in claim 2, wherein the portion of the vehicle includes a movable implement and the data processor is responsive to machine-readable instructions causing the data processor to generate signals for the operation of the movable implement.
4. The apparatus as defined in claim 3, wherein the movable implement includes a drill boom.
5. The apparatus as defined in any one of claims 1 to 4, wherein the digital 3D representation of the portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.
6. The apparatus as defined in any one of claims 1-5, wherein the generation of signals representative of the 3D digital representation of the portion of the mining excavation and the generation of signals for the operation of the vehicle are conducted individually by the data processor.
7. The apparatus as defined in claim 1, wherein:
the first camera and the second camera are configured to capture low-resolution digital images; and the data processor is responsive to machine-readable instructions causing the data processor to generate the 3D mesh based on the low-resolution digital images.
8. The apparatus as defined in claim 7, wherein:
at least one of the first camera, the second camera and a third camera is configured to capture a high-resolution digital image of the common portion of the first field of view and the second field of view; and the data processor is responsive to machine-readable instructions causing the data processor to transform the high-resolution digital image according to the 3D mesh.
9. The apparatus as defined in any one of claims 1-8, wherein the signals representative of the digital 3D representation are generated based on stereo matching of a digital image captured by the first camera and a digital image captured by the second camera.
10. An apparatus for installation on a vehicle and mapping a mining excavation, the apparatus comprising:
a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion, at least one of the first field of view and the second field of view being configured to include a portion of the vehicle; and a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of the digital images captured by the first camera and the second camera; and generate signals representative of a digital 3D representation of the portion of the mining excavation excluding the portion of the vehicle included in the at least one first field of view and the second field of view.
11. The apparatus as defined in claim 10, wherein:
the first camera and the second camera are configured to capture low-resolution digital images; and the data processor is responsive to machine-readable instructions causing the data processor to generate a digital 3D mesh of at least a portion of the mining excavation based on the low-resolution digital images.
12. The apparatus as defined in claim 11, wherein:
at least one of the first camera, the second camera and a third camera is configured to capture a high-resolution digital image of the common portion of the first field of view and the second field of view; and the data processor is responsive to machine-readable instructions causing the data processor to transform the high-resolution digital image according to the 3D mesh.
13. The apparatus as defined in any one of claims 10-12, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals for the operation of the vehicle based on at least one of the captured digital images.
14. The apparatus as defined in any one of claims 10-13, wherein the digital 3D representation of the portion of the mining excavation comprises a 3D mesh.
15. The apparatus as defined in claim 14, wherein the digital 3D representation of the portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.
16. An apparatus for installation on a vehicle and mapping a mining excavation, the apparatus comprising:
a first camera and a second camera, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;
a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of low-resolution digital images of at least a portion of the mining excavation captured by the first camera and the second camera;
generate signals representative of a digital 3D mesh of at least the portion of the mining excavation based on the low-resolution digital images;
receive signals representative of a high-resolution image captured by at least one of the first camera, the second camera and a third camera, the high-resolution image being of the common portion of the first field of view and the second field of view; and transform the high-resolution digital image according to the 3D mesh.
17. The apparatus as defined in claim 16, wherein at least one of the first field of view and the second field of view is configured to include a portion of the vehicle and the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the vehicle from the 3D mesh.
18. The apparatus as defined in any one of claims 16 and 17, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals for the operation of the vehicle based on at least one of the captured digital images.
19. The apparatus as defined in any one of claims 16 and 17, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals for the operation of a drilling implement of the vehicle based on at least one of the captured digital images.
20. The apparatus as defined in any one of claims 16-19, wherein the signals representative of the 3D mesh are generated based on stereo matching of a digital image captured by the first camera and a digital image captured by the second camera.
21. A vehicle comprising the apparatus as defined in any one of claims 1-20.
22. A vehicle for conducting drilling in an underground environment, the vehicle comprising the apparatus as defined in any one of claims 1-20.
23. A vehicle for conducting drilling in an underground environment, the vehicle comprising:
a drilling implement;
a first camera and a second camera configured to capture digital images of at least a portion of the underground environment, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;
a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of images captured by the first camera and the second camera; and generate signals representative of a digital 3D representation of the portion of the underground environment based on the captured digital images, the digital 3D representation of the portion of the underground environment including a 3D mesh.
24. The vehicle as defined in claim 23, wherein at least one of the first field of view and the second field of view includes at least a portion of the drilling implement.
25. The vehicle as defined in claim 24, wherein the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the movable drilling implement from the digital representation of the portion of the underground environment.
26. The vehicle as defined in any one of claims 23-25, wherein the data processor is responsive to machine-readable instructions causing the data processor to generate signals for the operation of the drilling implement based on the captured digital images.
27. The vehicle as defined in claim 26, wherein the generation of signals representative of a digital 3D representation of the portion of the underground environment and the generation of signals for the operation of the drilling implement are conducted individually by the data processor.
28. The vehicle as defined in any one of claims 23-27, wherein:
the first camera and the second camera are configured to capture low-resolution digital images;
the data processor is responsive to machine-readable instructions causing the data processor to generate the 3D mesh based on the low-resolution digital images;
at least one of the first camera, the second camera and a third camera is configured to capture a high-resolution image of the common portion of the first field of view and the second field of view; and the data processor is responsive to machine-readable instructions causing the data processor to transform the high-resolution digital image according to the 3D mesh.
29. The vehicle as defined in any one of claims 23-28, wherein the signals representative of the digital 3D representation are generated based on stereo matching of a digital image captured by the first camera and a digital image captured by the second camera.
30. The vehicle as defined in any one of claims 23-29, comprising one or more standard operating lights and the cameras are configured so that the common portion of the first field of view and the second field of view is illuminated by the one or more standard operating lights.
31. A method for mapping a mining excavation and also controlling at least one operation of a vehicle, the method performed by a data processor and comprising:
receiving signals representative of at least two digital images of at least a common portion of the mining excavation;
generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images, the digital 3D representation of the portion of the mining excavation including a 3D mesh; and generating signals for the at least one operation of the vehicle based on signals representative of at least one of the digital images.
32. The method as defined in claim 31, wherein at least one of the digital images includes a portion of the vehicle and the digital 3D representation of the portion of the mining excavation excludes the portion of the vehicle.
33. The method as defined in claim 32, wherein the portion of the vehicle includes a drill boom and the signals for the at least one operation of the vehicle are for controlling the drill boom.
34. The method as defined in any one of claims 31 to 33, wherein the digital 3D representation of the portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.
35. The method as defined in any one of claims 31-34, wherein the generation of signals representative of the 3D digital representation of the portion of the mining excavation and the generation of signals for the at least one operation of the vehicle are conducted individually.
36. The method as defined in claim 31, wherein:
the at least two digital images are low-resolution digital images; and the 3D mesh is based on the low-resolution digital images.
37. The method as defined in claim 36, comprising:
receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and transforming the high-resolution digital image according to the 3D mesh.
38. The method as defined in any one of claims 31-37, wherein the signals representative of the digital 3D representation are generated based on stereo matching of the at least two digital images.
39. A method for mapping a mining excavation, the method performed by a data processor mounted to a vehicle, the method comprising:
receiving signals representative of at least two digital images of at least a common portion of the mining excavation, at least one of the digital images including a portion of the vehicle; and generating signals representative of a digital 3D representation of the common portion of the mining excavation based on the signals representative of the digital images, the digital 3D representation of the common portion of the mining excavation excluding the portion of the vehicle.
40. The method as defined in claim 39, wherein:
the at least two digital images are low-resolution digital images; and the digital 3D representation of the common portion of the mining excavation comprises a 3D mesh based on the low-resolution digital images.
41. The method as defined in claim 40, comprising:
receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and transforming the high-resolution digital image according to the 3D mesh.
42. The method as defined in any one of claims 39-41, comprising generating signals for the at least one operation of the vehicle based on signals representative of at least one of the digital images.
43. The method as defined in claim 39, wherein the digital 3D representation of the common portion of the mining excavation comprises a 3D mesh.
44. The method as defined in claim 43, wherein the digital 3D representation of the common portion of the mining excavation comprises at least one of the digital images transformed according to the 3D mesh.
45. A method for mapping a mining excavation, the method performed by a data processor mounted to a vehicle, the method comprising:
receiving signals representative of at least two low-resolution digital images of at least a common portion of the mining excavation;
generating signals representative of a digital 3D mesh of the common portion of the mining excavation based on the low-resolution digital images;
receiving signals representative of a high-resolution digital image of the common portion of the mining excavation; and transforming the high-resolution digital image according to the 3D mesh.
46. The method as defined in claim 45, wherein at least one of the digital images includes a portion of the vehicle and the 3D mesh excludes the portion of the vehicle.
47. The method as defined in any one of claims 45 and 46, comprising generating signals for the operation of the vehicle based on at least one of the captured digital images.
48. The method as defined in any one of claims 45-47, comprising generating signals for the operation of a drilling implement of the vehicle based on at least one of the captured digital images.
49. The method as defined in any one of claims 45-48, wherein the signals representative of the 3D mesh are generated based on stereo matching of the at least two low-resolution digital images.
50. An apparatus for installation on a vehicle, mapping a mining excavation and also controlling at least one operation of the vehicle, the apparatus comprising:
a first camera and a second camera configured to capture digital images of at least a portion of the mining excavation, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;
a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of the digital images captured by the first camera and the second camera;
generate signals representative of a digital 3D representation of the portion of the mining excavation based on the captured digital images; and generate signals for the operation of the vehicle based on at least one of the captured digital images;
wherein at least one of the first field of view and the second field of view includes a portion of the vehicle and the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the vehicle from the digital 3D representation of the portion of the mining excavation.
51. The apparatus as defined in claim 50, wherein the portion of the vehicle includes a movable implement and the data processor is responsive to machine-readable instructions causing the data processor to generate signals for the operation of the movable implement.
52. The apparatus as defined in claim 51, wherein the movable implement includes a drill boom.
53. A vehicle for conducting drilling in an underground environment, the vehicle comprising:
a drilling implement;
a first camera and a second camera configured to capture digital images of at least a portion of the underground environment, the first camera having a first field of view and the second camera having a second field of view, the first field of view and the second field of view having a common portion;
a data processor in communication with the first camera and the second camera, the data processor being responsive to machine-readable instructions causing the data processor to:
receive signals representative of images captured by the first camera and the second camera; and generate signals representative of a digital 3D representation of the portion of the underground environment based on the captured digital images;
wherein:
at least one of the first field of view and the second field of view includes at least a portion of the drilling implement; and the data processor is responsive to machine-readable instructions causing the data processor to exclude the portion of the movable drilling implement from the digital representation of the portion of the underground environment.
54. A method for mapping a mining excavation and also controlling at least one operation of a vehicle, the method performed by a data processor and comprising:
receiving signals representative of at least two digital images of at least a common portion of the mining excavation;
generating signals representative of a digital 3D representation of the portion of the mining excavation based on the signals representative of the digital images;
and generating signals for the at least one operation of the vehicle based on signals representative of at least one of the digital images;
wherein at least one of the digital images includes a portion of the vehicle and the digital 3D representation of the portion of the mining excavation excludes the portion of the vehicle.
55. The method as defined in claim 54, wherein the portion of the vehicle includes a drill boom and the signals for the at least one operation of the vehicle are for controlling the drill boom.
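The claims above repeatedly recite stereo matching of digital images from two cameras with overlapping fields of view, and exclusion of the portion of the vehicle (e.g. a drill boom) from the resulting 3D representation. As a hedged illustration only — not the patented implementation, and with `focal_px`, `baseline_m`, and the dict-based pixel maps chosen purely for this sketch — the standard stereo-triangulation relation Z = f·B/d behind such matching, together with vehicle masking, can be outlined as:

```python
# Illustrative sketch, NOT the patented method: depth from stereo disparity
# for rectified cameras (Z = f * B / d), skipping pixels that show the
# vehicle so they are excluded from the 3D representation.

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth in metres from disparity in pixels; None where there is no match."""
    if disparity_px <= 0:
        return None  # no stereo correspondence for this pixel
    return focal_px * baseline_m / disparity_px

def excavation_depths(disparities, vehicle_mask, focal_px, baseline_m):
    """Map of (u, v) pixel -> depth, excluding vehicle-masked pixels.

    disparities:  {(u, v): disparity_px} from matching the common portion
                  of the two fields of view.
    vehicle_mask: {(u, v): True} for pixels covered by the vehicle itself.
    """
    depths = {}
    for pixel, d in disparities.items():
        if vehicle_mask.get(pixel):
            continue  # exclude the portion of the vehicle, as claimed
        z = disparity_to_depth(d, focal_px, baseline_m)
        if z is not None:
            depths[pixel] = z
    return depths

# Example with illustrative values: 700 px focal length, 0.12 m baseline.
depths = excavation_depths(
    {(0, 0): 10.0, (1, 0): 14.0, (2, 0): 0.0},  # per-pixel disparities
    {(1, 0): True},                              # (1, 0) shows the drill boom
    focal_px=700.0,
    baseline_m=0.12,
)
```

In a full pipeline the surviving depth points would then be triangulated into the claimed 3D mesh, onto which a high-resolution image of the common field of view could be texture-mapped; those steps are omitted here.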
CA2912432A 2012-05-15 2013-03-28 Mapping of mining excavations Active CA2912432C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201261647337P 2012-05-15 2012-05-15
US61/647,337 2012-05-15
PCT/CA2013/000307 WO2013170348A1 (en) 2012-05-15 2013-03-28 Mapping of mining excavations

Publications (2)

Publication Number Publication Date
CA2912432A1 CA2912432A1 (en) 2013-11-21
CA2912432C true CA2912432C (en) 2020-09-15

Family

ID=49582925

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2912432A Active CA2912432C (en) 2012-05-15 2013-03-28 Mapping of mining excavations

Country Status (2)

Country Link
CA (1) CA2912432C (en)
WO (1) WO2013170348A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3094806B1 (en) * 2014-01-14 2019-07-24 Sandvik Mining and Construction Oy Mine vehicle and method of initiating mine work task
WO2015106799A1 (en) * 2014-01-14 2015-07-23 Sandvik Mining And Construction Oy Mine vehicle, mine control system and mapping method
US10822773B2 (en) * 2015-10-05 2020-11-03 Komatsu Ltd. Construction machine and construction management system
JP6887229B2 (en) * 2016-08-05 2021-06-16 株式会社小松製作所 Construction management system
US11280192B2 (en) 2016-12-02 2022-03-22 1854081 Ontario Ltd. Apparatus and method for preparing a blast hole in a rock face during a mining operation
WO2018191602A1 (en) 2017-04-13 2018-10-18 Joy Global Underground Mining Llc System and method for measuring and aligning roof bolts
DE112018001447T5 (en) * 2017-07-14 2019-12-12 Komatsu Ltd. TOPOGRAPHIC INFORMATION TRANSFER DEVICE, BUILDING MANAGEMENT SYSTEM AND TOPOGRAPHICAL INFORMATION TRANSFER METHOD
FR3074214B1 (en) * 2017-11-24 2021-12-17 Dodin Campenon Bernard CONSTRUCTION PROCESS OF A DIGITAL MODEL RELATING TO A CONSTRUCTION SITE OF AN UNDERGROUND WORK
CN109372503A (en) * 2018-11-21 2019-02-22 宁夏广天夏电子科技有限公司 Coalcutter video acquisition connection damping trolley
KR20210000593A (en) * 2019-06-25 2021-01-05 두산인프라코어 주식회사 Apparatus for generating environment data neighboring construction equipment and construction equipment including the same
CN113743206B (en) * 2021-07-30 2024-04-23 洛伦兹(宁波)科技有限公司 Mine car charging control method, device, system and computer readable medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6296317B1 (en) * 1999-10-29 2001-10-02 Carnegie Mellon University Vision-based motion sensor for mining machine control
AU2009200859B2 (en) * 2008-03-04 2014-08-07 Technological Resources Pty. Limited Scanning system for 3D mineralogy modelling

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4230837A1 (en) * 2022-02-18 2023-08-23 Sandvik Mining and Construction Lyon SAS Apparatus for position detection, mine vehicle and method
WO2023156213A1 (en) * 2022-02-18 2023-08-24 Sandvik Mining And Construction Lyon Sas Apparatus for position detection, mine vehicle and method
EP4343479A1 (en) * 2022-09-20 2024-03-27 Sandvik Mining and Construction Oy Environment related data management for a mobile mining vehicle
WO2024061996A1 (en) * 2022-09-20 2024-03-28 Sandvik Mining And Construction Oy Environment related data management for a mobile mining vehicle

Also Published As

Publication number Publication date
CA2912432A1 (en) 2013-11-21
WO2013170348A1 (en) 2013-11-21

Similar Documents

Publication Publication Date Title
CA2912432C (en) Mapping of mining excavations
US20220101552A1 (en) Image processing system, image processing method, learned model generation method, and data set for learning
EP2913796B1 (en) Method of generating panorama views on a mobile mapping system
JP6807781B2 (en) Display system, display method, and remote control system
AU2015234395A1 (en) Real-time range map generation
US20190093320A1 (en) Work Tool Vision System
US11846091B2 (en) System and method for controlling an implement on a work machine using machine vision
CA2967174A1 (en) Localising portable apparatus
CN113874586A (en) Ground engaging tool monitoring system
JP7023813B2 (en) Work machine
JP2019164136A (en) Information processing device, image capturing device, mobile body, image processing system, and information processing method
KR101875047B1 (en) System and method for 3d modelling using photogrammetry
JP2014228941A (en) Measurement device for three-dimensional surface shape of ground surface, runnable region detection device and construction machine mounted with the same, and runnable region detection method
CN115423958A (en) Mining area travelable area boundary updating method based on visual three-dimensional reconstruction
Thoeni et al. Use of low-cost terrestrial and aerial imaging sensors for geotechnical applications
Jing et al. 3D reconstruction of underground tunnel using Kinect camera
CN115984491A (en) Geological model construction method for detecting stress of limonite
CN114821496A (en) Visual covering for providing depth perception
Whitehorn et al. Stereo vision in LHD automation
JP7107792B2 (en) construction machinery
Bauer et al. Tunnel surface 3d reconstruction from unoriented image sequences
Alshawabkeh et al. Laser scanning and photogrammetry: A hybrid approach for heritage documentation
Paar et al. Texture-based fusion between laser scanner and camera for tunnel surface documentation
Kochi et al. 3D-Measuring-Modeling-System based on Digital Camera and PC to be applied to the wide area of Industrial Measurement
US20230340759A1 (en) Work vehicle having controlled transitions between different display modes for a moveable area of interest

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20180301