Introduction
The invention relates to monitoring of a subsea structure. More specifically, the invention is defined by a method for detecting position and orientation of a subsea structure.
Background
When performing lifting or lowering operations of different types of subsea structures, it may be essential to know the exact position and orientation of the structure, e.g. whether it is level or out of level. This is especially the case for big and heavy structures and for installation operations where one subsea structure is to be connected to another structure.
It may also be important to keep already installed subsea equipment and structures under surveillance after installation to see if they are moving or offset from a set position.
A subsea structure can be any type of structure installed or being operated subsea. Lifting or lowering operations of such structures are typically performed by a vessel with lifting arrangements, e.g. winch, crane or hoist. The subsea structure may for instance be a structure that is to be installed on the seabed or removed from the seabed. Such structures may be large and heavy.
Large structures being handled under water are typically related to subsea oil and gas installations. Examples of such are BOP, riser and pipelines, frame structures, anchors and suction piles. Large structures may however also be related to the fish farm industry where large frames are positioned offshore or wind turbines with large substructures. When handling such large and heavy structures in a lifting or lowering operation it may be vital to know the exact position and orientation during the operation.
WO 02086288 A1 describes a method and apparatus for monitoring the position of parts of a subsurface tool. Position is monitored by detecting acoustic emission generated in the tool. The method depends on acoustic emission, and will not work if the subsea structure being operated does not generate any acoustic emission. The method is thus not suited for detecting position and rotation of a stand-alone structure being lifted or lowered subsea.
ISHIDA, M et al, "Marker based camera pose estimation for underwater robots", published in the 2012 IEEE/SICE International Symposium on System Integration (SII), Kyushu University, Fukuoka, Japan, 16-18 December 2012, pages 629-634, INSPEC Accession Number: 13286782, DOI: 10.1109/SII.2012.6427353, describes marker based pose estimation for underwater robots using cameras. A method for estimating position and orientation of an underwater robot performing complicated tasks is described. It does not, however, describe a simple way of tracking a marker or structure and using this for controlling the position of a camera-equipped robot relative to the marker or structure to be tracked.
There is a need for a simple and accurate method for monitoring a subsea structure while it is lifted or lowered subsea as well as after it has been installed.
The present invention provides a cost-efficient solution where a standard remotely operated vehicle, ROV, equipped with at least one camera is used in a novel method for monitoring subsea operations.
Short description of the invention
The present invention comprises a method for monitoring a subsea structure for detecting position and orientation of the structure. The method is defined by: providing a ROV with at least a first camera generating a video stream of pictures of the structure to be monitored; controlling the ROV such that the camera is always directed at the structure; locking the focus of the camera for tracking a specific visual feature of the structure and video recording said feature over time; interpreting the video stream in a video processor by calculating position and rotation of said visual feature for detecting position and orientation of the structure relative to the position of the ROV; and linking software controlling said at least one camera, tracking the specific visual feature, to software controlling the ROV such that the ROV is kept still at the distance the camera is locked to, for performing accurate and continuous tracking of position and orientation of said feature.
Further features of the method are defined in the dependent claims.
The invention is further defined by a computer program having instructions which, when executed by a computing device or system, cause the computing device or system to perform the method.
Detailed description of the invention
The invention will now be described in detail with reference to the drawings illustrating different embodiments: Figure 1 is a sketch of a ROV with cameras and a structure to be monitored; Figure 2 illustrates the concept where a tracking program is locking a camera to a specific visual feature of a structure; Figure 3 shows a zoomed-in section of the visual feature that a program is instructed to track, and
Figure 4 shows a time series of the horizontal position of the tracked feature.
The invention is defined by a method for monitoring a subsea structure for detecting position and orientation of the structure. The method comprises several steps.
One step is providing a ROV with a first camera filming the structure to be monitored. The camera can be a camera which is already integrated in the ROV, or it may be retrofitted by attaching the camera to the body of the ROV. By video recording with one camera, in-plane (2D) measurements of the structure can be performed.
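The in-plane (2D) measurement with a single camera can be illustrated with a simple pinhole-camera calculation. The following sketch is illustrative only and not part of the claimed method; the focal length and distance values are assumptions chosen for the example:

```python
# In-plane (2D) displacement from a single camera, pinhole model.
# Illustrative sketch only; all numeric values are assumptions.

def pixel_to_metric(pixel_offset: float, distance_mm: float,
                    focal_length_px: float) -> float:
    """Convert a tracked feature's pixel offset in the image to a
    lateral displacement in mm, assuming a pinhole camera at a
    known distance from the structure."""
    return pixel_offset * distance_mm / focal_length_px

# A feature that moved 12 px, seen from 2 m with an 800 px focal
# length, corresponds to a 30 mm lateral movement of the structure.
print(pixel_to_metric(12, 2000, 800))
```

Note that a single camera only resolves motion in the image plane; depth changes require the second camera described below.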
In one embodiment of the invention, the ROV is provided with a second camera. By video recording the specific visual feature of the structure with said first and second cameras, spatial measurements (3D) of the structure can be provided.
This is possible by configuring the first and second camera to lock and focus on the same specific visual feature of the structure. In order to obtain 3D measurements, the optical axes of the two cameras must be separated. The cameras may for instance be mounted on each side of a ROV.
Video recorded from two or more cameras will be synchronised for providing the 3D measurements.
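With two synchronised cameras whose optical axes are separated, depth can be recovered from the disparity of the tracked feature between the two images. A minimal sketch, assuming rectified pinhole cameras (baseline and focal length values are illustrative assumptions, not part of the invention):

```python
# Depth (3D) from two synchronised cameras, rectified pinhole model.
# Illustrative sketch; baseline and focal length are assumptions.

def stereo_depth(disparity_px: float, baseline_mm: float,
                 focal_length_px: float) -> float:
    """Depth of a feature matched in synchronised left/right frames,
    from its horizontal disparity between the two images."""
    if disparity_px <= 0:
        raise ValueError("feature must be matched in both frames")
    return focal_length_px * baseline_mm / disparity_px

# Cameras mounted 400 mm apart on each side of the ROV, focal length
# 800 px, measured disparity 160 px: the feature is 2000 mm away.
print(stereo_depth(160, 400, 800))
```

Because depth is inversely proportional to disparity, a wider camera separation improves depth resolution at a given distance.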
Figure 1 is a sketch illustrating a set-up used when performing the method with a ROV 20, located above seabed 50, håving two cameras 30, 35 and a structure 10 to be monitored.
Cameras are always directed at a structure 10 to be monitored by controlling the ROV 20. This may be performed automatically by letting the camera 30 detect the moving structure 10 to be monitored and direct its field of view towards the moving structure 10 automatically. One way of controlling this operation is to let tracking software tracking said specific visual feature 40 also control the positioning of the ROV 20. This is further described below.
When a camera 30 is directed at the structure 10 to be monitored, the next step is locking the focus of the camera 30 to a specific visual feature 40 of the structure 10 and video recording the feature 40 of the structure 10. The specific visual feature 40 may be a feature 40 that is already a part of the structure 10 to be monitored, or it may be a reference mark provided in the form of a marker, label, magnetic sticker or paint that is attached to the structure 10 for the purpose of monitoring it while performing lifting or lowering operations. These will all act as a reference mark that can easily be detected and recognized in a video processor.
Figure 2 illustrates the concept where a tracking program in the video processor is locking a camera 30 to a specific visual feature 40 of a structure 10, while Figure 3 shows the feature to be tracked in Figure 2.
How the tracking software is operating for detecting movements of a specific visual feature 40 is regarded as known prior art within pattern recognition and will not be described in detail.
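For completeness, the idea behind such pattern-recognition tracking can be sketched with a brute-force template match: the processor stores the appearance of the feature and finds the position in each new frame where it fits best. This is a deliberately minimal stand-in for real tracking software, not the claimed implementation:

```python
# Minimal template tracker: locate a stored feature appearance in a
# frame by minimising the sum of squared differences (SSD).
# Frames and templates are 2-D lists of grayscale intensities.
# A stand-in sketch for real pattern-recognition tracking software.

def track_feature(frame, template):
    """Return (row, col) of the best match of `template` in `frame`."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            ssd = sum((frame[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Comparing the matched position from frame to frame yields the displacement time series of the kind shown in Figure 4.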
Figure 4 shows a time series of the horizontal position of the tracked feature 40. The x-axis shows time in seconds, while the y-axis shows the horizontal displacement in mm of the tracked feature 40.
Focusing on only one visual feature 40 or reference mark on a subsea structure 10 is sufficient to enable the invention, making it possible to monitor and detect position and rotation. In one embodiment two or three reference marks may be placed on the structure 10. These can then be spaced apart to a certain degree as long as they all are in the focal view of the one or more cameras 30 used.
Using more than one reference mark on the structure 10 may improve the sensitivity when detecting small movements. Multiple markers are also used to measure relative distances and rotations between markers.
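The benefit of two markers for detecting rotation can be illustrated with a simple in-plane calculation: the angle of the line between the tracked marker positions gives the tilt of the structure directly. An illustrative sketch only; the marker coordinates are assumptions:

```python
# In-plane rotation of a structure from two tracked markers.
# Illustrative sketch; marker positions are assumed example values.
import math

def in_plane_rotation(marker_a, marker_b):
    """Rotation (degrees) of the line between two tracked markers,
    relative to horizontal. Two markers thus yield rotation as well
    as translation, even from a single camera."""
    dx = marker_b[0] - marker_a[0]
    dy = marker_b[1] - marker_a[1]
    return math.degrees(math.atan2(dy, dx))

# Two markers 1 m apart and level: no tilt.
print(in_plane_rotation((0, 0), (1000, 0)))
# Right marker raised by the same amount as the spacing: 45 degree tilt.
print(in_plane_rotation((0, 0), (1000, 1000)))
```

The further apart the markers are, the smaller the rotation that can be resolved for a given tracking accuracy, which is why spacing them apart improves sensitivity.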
By placing reference marks on different sides of a structure 10, the structure 10 does not have to be in a specific position relative to a ROV 20 video filming it before locking the focus of a camera 30 and starting interpretation of the video stream.
A combination of using natural visual features 40 and attached reference marks for monitoring a subsea structure 10 is also feasible according to the invention.
The last step of the inventive method is interpreting the video stream in a video processor by calculating position and orientation of said visual features 40 for detecting position and orientation of the structure 10 relative to the ROV 20, including calculation of all 6 degrees of freedom, i.e. directions x, y, z and rotations 1, 2 and 3. In order to do this, the video processor comprises tracking software for monitoring and tracking the defined visual feature 40, and thereby position and orientation of the structure 10.
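A 6-degree-of-freedom result of this kind is commonly represented as a homogeneous transform combining the three translations and three rotations. The following is a generic sketch of that representation (the Z-Y-X rotation convention is an assumption for illustration, not a feature of the invention):

```python
# A 6-DOF pose (x, y, z plus three rotations) as a 4x4 homogeneous
# transform. Generic illustration; the Z-Y-X (yaw-pitch-roll)
# convention is an assumption chosen for the example.
import math

def pose_matrix(x, y, z, roll, pitch, yaw):
    """Build the 4x4 transform for a pose; angles in radians."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, y],
        [-sp,     cp * sr,                cp * cr,                z],
        [0, 0, 0, 1],
    ]

def apply(T, p):
    """Apply a homogeneous transform to a 3-D point."""
    px, py, pz = p
    return tuple(T[i][0] * px + T[i][1] * py + T[i][2] * pz + T[i][3]
                 for i in range(3))
```

Tracking the visual feature over time then amounts to estimating such a transform for every video frame.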
In one embodiment of the invention, the tracking software tracking the specific visual feature 40 is linked to software controlling positioning of the ROV 20. In this way a ROV 20 with a camera 30 locked to a defined visual feature 40 of a structure 10 can be kept still at a specific and optimal distance for performing accurate and continuous tracking of position and orientation of said feature 40.
When performing a lifting or lowering operation of a subsea structure 10, the position and orientation of the structure 10 relative to a ROV 20 may be irrelevant. The vital information for an operator of a lifting and lowering operation of a structure 10 may be the current position and orientation of the structure 10 relative to the sea floor. This information will be available if the ROV 20 is positioned still on the seabed 50. If however the ROV 20 is moving, further steps must be taken in order to find the current position and orientation of the structure 10 relative to the sea floor.
According to one embodiment of the invention, finding the position and orientation of the structure 10 relative to the sea floor is possible, even if the ROV 20 is moving around subsea, by using the ROV's 20 positioning sensors, such as a short baseline acoustic positioning system (SBL), depth sensor, accelerometer and gyroscope, for calculating position and orientation of the ROV 20 relative to the seabed 50, and then combining these calculations with the calculation of orientation of the visual feature 40 of the structure 10 for determining the position and orientation of the structure 10 relative to seabed 50.
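The combination of the two calculations amounts to transforming the camera-frame measurement of the feature into the seabed frame using the ROV's own pose. A 2-D sketch of that composition (the positions, heading and offset values are illustrative assumptions; the full method does the same in all six degrees of freedom):

```python
# Combine the ROV pose from its own sensors with the camera-frame
# measurement of the feature, giving the feature's position relative
# to the seabed. 2-D sketch with assumed example values.
import math

def structure_in_seabed_frame(rov_pos, rov_heading_deg, feature_offset):
    """Position of the tracked feature in the seabed frame, given the
    ROV's position/heading (e.g. from acoustic positioning and gyro)
    and the feature's offset measured in the ROV/camera frame."""
    h = math.radians(rov_heading_deg)
    ox, oy = feature_offset
    # Rotate the camera-frame offset into the seabed frame, then
    # translate by the ROV's own position.
    x = rov_pos[0] + ox * math.cos(h) - oy * math.sin(h)
    y = rov_pos[1] + ox * math.sin(h) + oy * math.cos(h)
    return (x, y)

# ROV at (10, 5) m heading 90 degrees; feature 2 m ahead of the camera.
print(structure_in_seabed_frame((10, 5), 90, (2, 0)))
```

Because the ROV pose is re-estimated continuously, the structure's seabed-relative pose stays valid even while the ROV moves around it.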
Irrespective of which type of specific visual feature 40 is used when performing the described method, the video processor must know the appearance of the feature 40 prior to calculating position and orientation.
There are several different ways of doing this. One way is letting an operator of the ROV 20 control the camera 30 for zooming in on a specific visual feature 40 prior to inputting an instruction telling a video processor that the zoomed-in feature 40 is the one to lock to and use in the calculations.
Another way is inputting, to the video processor, information defining the specific visual feature 40 to focus on prior to performing a monitoring operation. A ROV 20 system may then operate autonomously by first detecting a subsea structure 10 to be monitored, then focus and lock to the specific visual feature 40 before performing position and orientation calculations.
The above described method for monitoring a subsea structure 10 can be controlled and executed by a computer program having instructions which, when executed by a computing device or system, will cause the computing device or system to perform the method.
The program can be executed in a computer device installed in the ROV 20. It can further be linked to a video processor and a controlling device for controlling movements of the ROV 20. Resulting monitoring information will then be sent/streamed from the ROV 20 to be displayed at a remote location.
The camera 30 used for performing the inventive method can be connected to a video processor for processing and transmission of monitoring results. Recorded video may also be transmitted elsewhere for being processed in a remotely located video processor. When recorded video is being processed, the video processor recognises the defined visual features 40 in the video and calculates their orientation and position as a function of time. Signal processing is used to ensure stable performance and to remove vibrations from the ROV 20 holding the camera 30. Either way, position and orientation data of the structure 10 is presented in real time to an operator controlling lifting and lowering operations of the structure 10.
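The vibration removal mentioned above can be illustrated with a simple low-pass filter on the tracked position time series: high-frequency camera shake from the ROV is averaged out while the slow movement of the structure is preserved. A minimal stand-in, not the specific signal processing of the invention:

```python
# Suppress high-frequency camera vibration in the tracked position
# time series with a centred moving average. Minimal illustrative
# stand-in for the signal processing described in the text.

def smooth(samples, window=5):
    """Return the centred moving average of `samples` (mm values),
    shortening the window at the edges of the series."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        seg = samples[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

# Alternating +/-1 mm vibration around a stationary structure is
# averaged back towards zero.
print(smooth([1, -1, 1, -1, 1], window=5))
```

In practice the window length trades responsiveness against noise suppression and would be tuned to the ROV's vibration spectrum.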
The present invention provides a novel and efficient method for monitoring a subsea structure 10 for detecting position and orientation. It is well suited for lifting and lowering operations, but just as well suited for inspection and surveying purposes.