CN117788572A - Fused ship precise positioning method and system of optical image and laser point cloud

Info

Publication number: CN117788572A
Application number: CN202311652569.1A
Authority: CN (China)
Legal status: Pending
Prior art keywords: point cloud, optical image, coordinate system, positioning, laser point
Other languages: Chinese (zh)
Inventors: 孙棪伊, 樊田峥, 穆为民, 杜哲, 顾村峰, 司马珂
Assignee (original and current): Shanghai Institute of Electromechanical Engineering
Application filed by Shanghai Institute of Electromechanical Engineering; priority and filing date 2023-12-04
Publication date: 2024-03-29

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a ship precise positioning method and system fusing an optical image and a laser point cloud, comprising the following steps: step S1: constructing a three-dimensional scene model of the target area; step S2: acquiring and processing information about the target area, the information comprising sparse point clouds and dense point clouds; step S3: establishing and unifying coordinate systems to complete positioning. The method is suitable for ship target detection and identification: it fuses point clouds with multi-view optical imagery to identify and extract features in offshore field scenes and thereby locate targets. It is mainly used to identify and position ships known to the model in nearby sea areas, to assist the automatic guidance of fishing and cargo vessels entering and leaving port, and to enable rapid, remotely guided arrival and departure of large vessels.

Description

Fused ship precise positioning method and system of optical image and laser point cloud
Technical Field
The invention relates to the field of application methods, and in particular to a ship precise positioning method and system fusing an optical image and a laser point cloud.
Background
In recent years, demand for identifying and positioning ships in offshore areas has grown steadily, and with key breakthroughs in deep learning within the field of artificial intelligence, deep learning methods have been widely applied to ship identification. Compared with traditional ship detection and identification methods, ship target detection and identification based on deep learning has stronger nonlinear fitting capability and is better suited to offshore ship target detection and identification in complex scenes.
A well-generalized deep learning model can attain good detection and recognition performance on a target through training on a large amount of sample data, and it is easy to maintain: as further samples accumulate, detection and recognition can be improved by retraining the model rather than redesigning it.
Traditional ship target detection methods are mostly based on hand-crafted features. They detect ship targets inefficiently in relatively complex offshore scenes, and under the interference of sea waves, similarly shaped objects (such as shoreside facilities, containers and vehicles) and high-resolution imagery, ship target detection suffers a higher false alarm rate and lower accuracy.
Chinese patent document CN109584279A discloses a ship detection method, device and ship detection system based on SAR images, comprising: step 1, acquiring an SAR image data set A for training and testing a model, and SAR image data B for testing the robustness of the ship detection model; step 2, cropping regions containing ships from the data of SAR image data set A to obtain a number of grid images containing ships, and constructing a data set for training an SSD model; step 3, performing transfer learning on the constructed training and validation data sets for ship detection using an SSD model pre-trained on the PASCAL VOC data set; and step 4, using the learned SSD model to detect the test data of SAR image data set A and to detect ship positions in different background areas of SAR image data set B. Although that patent document adopts transfer learning to overcome the problem of insufficient training data, the gain in accuracy remains insufficient, so the problem is not solved.
Disclosure of Invention
In view of the defects in the prior art, the object of the invention is to provide a ship precise positioning method and system fusing an optical image and a laser point cloud.
The ship precise positioning method fusing an optical image and a laser point cloud provided by the invention comprises the following steps:
step S1: constructing a three-dimensional scene model of the target area;
step S2: acquiring and processing information about the target area; the information comprises sparse point clouds and dense point clouds;
step S3: establishing and unifying coordinate systems to complete positioning.
Preferably, the step S2 comprises the following substeps:
step S2.1: acquiring a sparse point cloud of the target area using visual SLAM, and reconstructing the three-dimensional scene;
the sparse point cloud consists of points with spatial three-dimensional coordinates;
step S2.2: performing dense point cloud feature description using laser SLAM;
step S2.3: extracting texture and spatial geometric information from the optical image;
the optical image is acquired by an optical camera mounted on an unmanned aerial vehicle.
Preferably, step S2.3 comprises applying an affine-invariant feature extraction operator; the extraction comprises growing feature point regions and calculating region shape parameters; performing affine deformation correction after removing redundant scales; extracting and describing features, and determining candidate regions when matching succeeds; and removing gross matching errors based on geometric consistency.
Preferably, the coordinate systems comprise a laser scanning device coordinate system, an optical camera coordinate system, a navigation device coordinate system and the body coordinate system of the unmanned aerial vehicle; the unification of the coordinate systems is completed by a pre-calibration method.
Preferably, in step S3 the image and the three-dimensional point cloud are combined to identify various static or dynamic targets in the scene for positioning, thereby completing the registration and joint solution of the optical image and the laser point cloud.
Preferably, the information about the target area is acquired by an unmanned aerial vehicle carrying a laser device and an optical camera; the three-dimensional scene model is constructed from satellite imagery and uses GNSS signals.
Preferably, if the GNSS signals are free of interference, GNSS or an IMU is used to navigate the unmanned aerial vehicle.
Preferably, if the GNSS signals are subject to interference, ground control points are obtained by automatically matching the images captured by the unmanned aerial vehicle against the existing scene model, and the precise pose of the captured image is solved.
The ship precise positioning system fusing an optical image and a laser point cloud provided by the invention comprises:
module M1: constructing a three-dimensional scene model of the target area;
module M2: acquiring and processing information about the target area; the information comprises sparse point clouds and dense point clouds;
module M3: establishing and unifying coordinate systems to complete positioning.
Preferably, the module M2 comprises the following submodules:
module M2.1: acquiring a sparse point cloud of the target area using visual SLAM, and reconstructing the three-dimensional scene;
the sparse point cloud consists of points with spatial three-dimensional coordinates;
module M2.2: performing dense point cloud feature description using laser SLAM;
module M2.3: extracting texture and spatial geometric information from the optical image;
the optical image is acquired by an optical camera mounted on an unmanned aerial vehicle.
Preferably, the module M2.3 applies an affine-invariant feature extraction operator; the extraction comprises growing feature point regions and calculating region shape parameters; performing affine deformation correction after removing redundant scales; extracting and describing features, and determining candidate regions when matching succeeds; and removing gross matching errors based on geometric consistency.
Preferably, the coordinate systems comprise a laser scanning device coordinate system, an optical camera coordinate system, a navigation device coordinate system and the body coordinate system of the unmanned aerial vehicle; the unification of the coordinate systems is completed by a pre-calibration method.
Preferably, the module M3 combines the image and the three-dimensional point cloud to identify various static or dynamic targets in the scene, thereby completing the registration and joint solution of the optical image and the laser point cloud.
Preferably, the information about the target area is acquired by an unmanned aerial vehicle carrying a laser device and an optical camera; the three-dimensional scene model is constructed from satellite imagery and uses GNSS signals.
Preferably, if the GNSS signals are free of interference, GNSS or an IMU is used to navigate the unmanned aerial vehicle.
Preferably, if the GNSS signals are subject to interference, ground control points are obtained by automatically matching the images captured by the unmanned aerial vehicle against the existing scene model, and the precise pose of the captured image is solved.
Compared with the prior art, the invention has the following beneficial effects:
1. The method is suitable for ship target detection and identification: it fuses point clouds with multi-view optical imagery to identify and extract features in offshore field scenes and thereby locate targets. It is mainly used to identify and position ships known to the model in nearby sea areas, to assist the automatic guidance of fishing and cargo vessels entering and leaving port, and to enable rapid, remotely guided arrival and departure of large vessels.
2. The method adapts quickly to offshore scenes: a three-dimensional scene model is constructed in real time from high-resolution satellite imagery. The scene model can be reused iteratively, which saves time and allows detection accuracy to improve gradually. Equipment carried on the unmanned aerial vehicle acquires a sparse point cloud of the scene; the unmanned aerial vehicle is then used to refine the three-dimensional scene model and recognize three-dimensional targets; finally, the unmanned aerial vehicle acquires optical images for target detection and identification, completing the guidance and positioning of the target.
3. The invention adopts laser SLAM, which localizes in real time while building a map. Because LIDAR combines laser ranging with navigation data to compute the three-dimensional coordinates of spatial points directly, it avoids the large amount of time and computation that dense image matching requires, making it a fast means of recovering a fine geometric model of a three-dimensional scene; it strongly complements the weaknesses of visual SLAM and saves cost. Ground control points are obtained by automatically matching images captured by the unmanned aerial vehicle against the existing scene model, from which the precise pose of the captured image is solved.
4. The invention can still navigate with GNSS/IMU when the GNSS signals are free of interference, and switches in time to an autonomous navigation and positioning method when the signal is lost or unstable, relying on image matching for real-time positioning; this gives higher flexibility and reliability.
5. When the laser device and the optical camera are mounted on the unmanned aerial vehicle, the laser scanning device coordinate system, the optical camera coordinate system, the navigation device coordinate system and the unmanned aerial vehicle body coordinate system can be unified into one coordinate system by a pre-calibration method; overlaying the optical image on the laser point cloud to identify texture and spatial geometric information captures more feature points and increases the accuracy of target detection and identification.
The invention may have other beneficial effects, which are described in the detailed description through specific technical features and technical solutions; those skilled in the art will understand the beneficial technical effects brought by these technical features and solutions from that description.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the detailed description of non-limiting embodiments, given with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a flow chart of a visual algorithm in an embodiment of the invention.
Fig. 3 is a sparse point cloud effect diagram in an embodiment of the present invention.
Fig. 4 is a diagram of a dense point cloud effect in an embodiment of the present invention.
Fig. 5 is a schematic diagram of the key point matching simulation for scene 1 in an embodiment of the present invention.
Fig. 6 is a schematic diagram of the key point matching simulation for scene 2 in an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
Referring to fig. 1 and 2, a ship precise positioning method fusing an optical image and a laser point cloud includes:
First, information about the target area is acquired by an unmanned aerial vehicle carrying a laser device and an optical camera, and a three-dimensional scene model of the target area is built from high-resolution satellite imagery. If the GNSS signals are free of interference, GNSS or the IMU is used to navigate the unmanned aerial vehicle; if the GNSS signals are subject to interference, ground control points are obtained by automatically matching the images captured by the unmanned aerial vehicle against the existing scene model, and the precise pose of the captured image is solved, which gives the system high flexibility and practicality.
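For illustration only, the following Python sketch shows one conventional way to realize this GNSS-denied step: ground control points obtained by matching the drone image against the georeferenced scene model are fed to a RANSAC-based PnP solver to recover the precise camera pose. OpenCV is assumed, and the names (gcps_3d, gcps_2d, K) are hypothetical; this is a minimal sketch, not the patent's prescribed implementation.

import cv2
import numpy as np

def solve_camera_pose(gcps_3d, gcps_2d, K, dist=None):
    # gcps_3d: (N, 3) ground control point coordinates in the scene-model frame
    # gcps_2d: (N, 2) matching pixel coordinates in the UAV image
    # K:       (3, 3) camera intrinsic matrix from pre-calibration
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        gcps_3d.astype(np.float64), gcps_2d.astype(np.float64),
        K, dist, reprojectionError=3.0, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed: not enough consistent GCP matches")
    R, _ = cv2.Rodrigues(rvec)            # rotation: scene frame -> camera frame
    cam_center = (-R.T @ tvec).ravel()    # camera position in the scene frame
    return R, tvec, cam_center, inliers

At least four well-distributed control points are needed; RANSAC additionally discards mismatched points, so a few wrong correspondences do not corrupt the recovered pose.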
referring to fig. 3 and 4, next, a sparse point cloud of points with three-dimensional coordinates of a target area can be acquired by using a visual SLAM technology (based on visual instant localization and mapping) in an offshore scene, and the three-dimensional scene is reconstructed; the three-dimensional coordinates of the space points are obtained directly by a laser ranging mode by using laser SLAM; and performing dense point cloud feature description by adopting SLAM laser point cloud technology.
Referring to fig. 5 and 6, texture and spatial geometric information are then extracted from the optical image. An affine-invariant feature extraction (AIFE) operator is adopted to solve the problem of feature matching under large-obliquity imaging. First, feature point regions are grown; second, region shape parameters are computed; next, redundant scales are removed and affine deformation correction is applied; features are then extracted and described, and the successfully matched candidate regions are determined; finally, gross matching errors are removed based on geometric consistency to ensure the robustness of the matching result.
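The description names the steps of the AIFE operator but not their implementation. The sketch below substitutes a standard stand-in with the same shape: local features (plain SIFT here for brevity, rather than a true affine-invariant operator), ratio-test matching, and a RANSAC homography as the geometric-consistency check that eliminates gross matching errors. OpenCV is assumed.

import cv2
import numpy as np

def match_with_geometric_check(img1, img2, ratio=0.75, ransac_px=3.0):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return []
    # Ratio test: keep only matches clearly better than their runner-up.
    pairs = [p for p in cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2) if len(p) == 2]
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    if len(good) < 4:
        return []
    # Geometric consistency: fit a homography with RANSAC; the outliers are the
    # "gross errors" that the text says are removed.
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_px)
    if H is None:
        return []
    return [m for m, keep in zip(good, mask.ravel()) if keep]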
Finally, a laser scanning device coordinate system, an optical camera coordinate system, a navigation device coordinate system and the unmanned aerial vehicle body coordinate system are established and unified into the same coordinate system by a pre-calibration method (the three-dimensional spatial coordinates of all key points in the unmanned aerial vehicle body coordinate system are solved through joint block adjustment). The image and the three-dimensional point cloud are then combined to identify various static or dynamic targets in the scene for positioning: that is, under field conditions, the accurate coordinates of ground control points are acquired by automatically matching the images captured by the unmanned aerial vehicle against the established scene model, and the registration and joint solution of the optical image and the laser point cloud are completed, which completes the positioning.
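A minimal sketch of the pre-calibration idea, under assumed frame names: with the LIDAR-to-body and body-to-camera extrinsics known from offline calibration, any laser point can be mapped through the body frame into the camera frame and projected into the optical image, so point-cloud geometry and image texture describe the same target. The 4x4 matrices and the function below are illustrative, not the patent's notation.

import numpy as np

def project_lidar_to_image(pts_lidar, T_body_lidar, T_cam_body, K):
    # pts_lidar: (N, 3) points in the LIDAR frame; K: (3, 3) camera intrinsics.
    # Returns (M, 2) pixel coordinates of the points lying in front of the camera.
    pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])  # homogeneous
    pts_cam = (T_cam_body @ T_body_lidar @ pts_h.T).T[:, :3]      # into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                          # keep points ahead of the camera
    uv = (K @ pts_cam.T).T                                        # pinhole projection
    return uv[:, :2] / uv[:, 2:3]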
The method is suitable for ship target detection and identification: it fuses point clouds with multi-view optical imagery to identify and extract features in offshore field scenes and thereby locate targets. It is mainly used to identify and position ships known to the model in nearby sea areas, to assist the automatic guidance of fishing and cargo vessels entering and leaving port, and to enable rapid, remotely guided arrival and departure of large vessels. The method adapts quickly to offshore scenes: a three-dimensional scene model is built in real time from high-resolution satellite imagery, and the scene model can be reused iteratively, which saves time and allows detection accuracy to improve gradually. Equipment carried on the unmanned aerial vehicle acquires a sparse point cloud of the scene, the unmanned aerial vehicle is further used to refine the three-dimensional scene model and recognize three-dimensional targets, and finally the unmanned aerial vehicle acquires optical images for target detection and identification, completing the guidance and positioning of the target.
The invention also provides a ship precise positioning system fusing an optical image and a laser point cloud, which can be realized by executing the flow steps of the ship precise positioning method fusing an optical image and a laser point cloud; that is, those skilled in the art can understand the method as a preferred embodiment of the system.
Specifically, a ship precise positioning system fusing an optical image and a laser point cloud comprises:
module M1: constructing a three-dimensional scene model of the target area;
module M2: acquiring and processing information about the target area; the information comprises sparse point clouds and dense point clouds;
module M3: establishing and unifying coordinate systems to complete positioning.
The module M2 comprises the following submodules:
module M2.1: acquiring a sparse point cloud of the target area using visual SLAM, and reconstructing the three-dimensional scene;
the sparse point cloud consists of points with spatial three-dimensional coordinates;
module M2.2: performing dense point cloud feature description using laser SLAM;
module M2.3: extracting texture and spatial geometric information from the optical image;
the optical image is acquired by the optical camera mounted on the unmanned aerial vehicle.
The module M2.3 applies an affine-invariant feature extraction operator; the extraction comprises growing feature point regions and calculating region shape parameters; performing affine deformation correction after removing redundant scales; extracting and describing features, and determining candidate regions when matching succeeds; and removing gross matching errors based on geometric consistency.
The coordinate systems comprise a laser scanning device coordinate system, an optical camera coordinate system, a navigation device coordinate system and the unmanned aerial vehicle body coordinate system; the unification of the coordinate systems is completed by a pre-calibration method.
The module M3 combines the image and the three-dimensional point cloud to identify various static or dynamic targets in the scene for positioning, thereby completing the registration and joint solution of the optical image and the laser point cloud.
The information about the target area is acquired by an unmanned aerial vehicle carrying a laser device and an optical camera; the three-dimensional scene model is constructed from satellite imagery and uses GNSS signals.
If the GNSS signals are free of interference, GNSS or the IMU is used to navigate the unmanned aerial vehicle.
If the GNSS signals are subject to interference, ground control points are obtained by automatically matching the images captured by the unmanned aerial vehicle against the existing scene model, and the precise pose of the captured image is solved.
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system and its devices, modules and units provided by the invention can be implemented entirely by logically programming the method steps, in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its devices, modules and units can be regarded as a hardware component, and the devices, modules and units that realize its various functions can be regarded as structures within that hardware component; they can equally be regarded as software modules that implement the method, or as structures within the hardware component.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and those skilled in the art may make various changes or modifications within the scope of the claims without affecting the essence of the invention. Where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily.

Claims (10)

1. A ship precise positioning method fusing an optical image and a laser point cloud, characterized by comprising the following steps:
step S1: constructing a three-dimensional scene model of the target area;
step S2: acquiring and processing information about the target area; the information comprising sparse point clouds and dense point clouds;
step S3: establishing and unifying coordinate systems to complete positioning.
2. The ship precise positioning method fusing an optical image and a laser point cloud according to claim 1, wherein the step S2 comprises the following substeps:
step S2.1: acquiring a sparse point cloud of the target area using visual SLAM, and reconstructing the three-dimensional scene;
the sparse point cloud consisting of points with spatial three-dimensional coordinates;
step S2.2: performing dense point cloud feature description using laser SLAM;
step S2.3: extracting texture and spatial geometric information from the optical image;
the optical image being acquired by an optical camera mounted on an unmanned aerial vehicle.
3. The ship precise positioning method fusing an optical image and a laser point cloud according to claim 2, wherein step S2.3 comprises applying an affine-invariant feature extraction operator; the extraction comprises growing feature point regions and calculating region shape parameters; performing affine deformation correction after removing redundant scales; extracting and describing features, and determining candidate regions when matching succeeds; and removing gross matching errors based on geometric consistency.
4. The ship precise positioning method fusing an optical image and a laser point cloud according to claim 1, wherein the coordinate systems comprise a laser scanning device coordinate system, an optical camera coordinate system, a navigation device coordinate system and an unmanned aerial vehicle body coordinate system; the unification of the coordinate systems being completed by a pre-calibration method.
5. The ship precise positioning method fusing an optical image and a laser point cloud according to claim 1, wherein in step S3 the image and the three-dimensional point cloud are combined to identify various static or dynamic targets in the scene for positioning, thereby completing the joint solution of the optical image and the laser point cloud.
6. The ship precise positioning method fusing an optical image and a laser point cloud according to claim 1, wherein the information about the target area is acquired by an unmanned aerial vehicle carrying a laser device and an optical camera; the three-dimensional scene model being constructed from satellite imagery and using GNSS signals.
7. The ship precise positioning method fusing an optical image and a laser point cloud according to claim 6, wherein if the GNSS signals are free of interference, GNSS or an IMU is used to navigate the unmanned aerial vehicle.
8. The ship precise positioning method fusing an optical image and a laser point cloud according to claim 6, wherein if the GNSS signals are subject to interference, ground control points are obtained by automatically matching the images captured by the unmanned aerial vehicle against the existing scene model, and the precise pose of the captured image is solved.
9. A ship precise positioning system fusing an optical image and a laser point cloud, characterized by comprising:
module M1: constructing a three-dimensional scene model of the target area;
module M2: acquiring and processing information about the target area; the information comprising sparse point clouds and dense point clouds;
module M3: establishing and unifying coordinate systems to complete positioning.
10. The ship precise positioning system fusing an optical image and a laser point cloud according to claim 9, wherein the module M2 comprises the following submodules:
module M2.1: acquiring a sparse point cloud of the target area using visual SLAM, and reconstructing the three-dimensional scene;
the sparse point cloud consisting of points with spatial three-dimensional coordinates;
module M2.2: performing dense point cloud feature description using laser SLAM;
module M2.3: extracting texture and spatial geometric information from the optical image;
the optical image being acquired by an optical camera mounted on the unmanned aerial vehicle.
CN202311652569.1A, filed 2023-12-04 by Shanghai Institute of Electromechanical Engineering: Fused ship precise positioning method and system of optical image and laser point cloud. Pending.

Priority Application (1)

CN202311652569.1A, priority and filing date 2023-12-04: Fused ship precise positioning method and system of optical image and laser point cloud.

Publication (1)

CN117788572A, published 2024-03-29. Country: CN. Family ID: 90388263.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118470580A (en) * 2024-07-15 2024-08-09 舟山中远海运重工有限公司 Ship part positioning method combining two-dimensional code and three-dimensional map


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination