CN112733753B - Bridge azimuth recognition method and system combining convolutional neural network and data fusion - Google Patents

Bridge azimuth recognition method and system combining convolutional neural network and data fusion

Info

Publication number
CN112733753B
Authority
CN
China
Prior art keywords
layer
bridge
image
coordinate system
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110050504.4A
Other languages
Chinese (zh)
Other versions
CN112733753A (en)
Inventor
徐中明
钟鸣
朱俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Everclear Traffic Science Information Technology Inc (Jiangsu)
Original Assignee
Everclear Traffic Science Information Technology Inc (Jiangsu)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Everclear Traffic Science Information Technology Inc (Jiangsu)
Priority to CN202110050504.4A
Publication of CN112733753A
Application granted
Publication of CN112733753B
Active legal status: Current
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a bridge azimuth recognition method and system combining a convolutional neural network with data fusion. The method comprises the following steps: acquiring camera image data; judging whether a bridge exists in the image by using a trained convolutional neural network; when a bridge exists, extracting the bridge image coordinates; converting the marine radar data to the image plane of the camera; matching the bridge image coordinates with the image coordinates of the marine radar data on the image plane; and calculating the azimuth of the bridge in the marine radar. By judging whether a bridge exists in the image with a convolutional neural network and converting the marine radar data to the image plane of the camera, the method and system fuse the radar data with the bridge image coordinates and obtain the azimuth of the bridge relative to the intelligent ship, with high accuracy and good reliability.

Description

Bridge azimuth recognition method and system combining convolutional neural network and data fusion
Technical Field
The invention relates to the technical field of intelligent ships, in particular to a bridge azimuth recognition method and system combining a convolutional neural network with data fusion.
Background
With the rise of the artificial intelligence industry, intelligent ships have attracted attention as one of its applications. When an intelligent ship sails on inland waterways, a river-crossing bridge over the channel is picked up by the marine radar and appears as an obstacle lying across the channel, so that, judging from the radar data alone, the channel seems impassable. To eliminate this influence, the target object on the water surface must be perceived, its characteristics identified, and its azimuth determined, so as to assist autonomous navigation in deciding whether the object must be avoided or can be ignored.
In the prior art, convolutional neural networks are used to identify objects such as navigation marks and overhead structures, but not river-crossing bridges, so the network structures differ. Moreover, the prior art only recognizes the classification of an object, for example that it is a navigation mark or an overhead structure, and cannot determine its azimuth, which limits its usefulness for assisting autonomous navigation of intelligent ships.
Disclosure of Invention
The invention aims to solve the technical problem of providing a bridge azimuth recognition method and system, combining a convolutional neural network and data fusion, that have high accuracy and good reliability.
In order to solve the above problems, the present invention provides a bridge azimuth recognition method combining a convolutional neural network and data fusion, which comprises:
acquiring camera image data;
judging whether a bridge exists in the image by using the trained convolutional neural network;
when a bridge exists, extracting the bridge image coordinates;
converting the marine radar data to the image plane of the camera;
matching the bridge image coordinates with the image coordinates of the marine radar data on the image plane;
and calculating the azimuth of the bridge in the marine radar.
As a further improvement of the present invention, the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer, wherein the Conv layer is a convolutional layer for performing a convolution operation on the image data; the ReLU layer is an activation function layer using a ReLU function; the Pool layer is a pooling layer; the Affine layer is a fully connected layer for forward and backward propagation calculation; the Dropout layer is used for randomly deleting a certain number of neurons; and the Softmax layer is an output layer that uses a Softmax function suitable for classification problems to output the result of whether a river-crossing bridge exists in the image.
As a further improvement of the invention, the image data is processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer.
As a further improvement of the present invention, converting the marine radar data to the image plane of the camera comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane.
As a further improvement of the present invention, the marine radar data is converted to the camera coordinate system using the following formula:

$$P = C\,P_r + r$$

wherein $P_r \in \mathbb{R}^3$ is a coordinate in the marine radar coordinate system; $P \in \mathbb{R}^3$ is the corresponding coordinate in the camera coordinate system; $C \in \mathbb{R}^{3 \times 3}$ is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and $r \in \mathbb{R}^3$ is the displacement of the marine radar coordinate system relative to the camera coordinate system.
As a further refinement of the invention, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

$$\begin{pmatrix} x_n \\ y_n \\ 1 \end{pmatrix} = \frac{1}{z}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

wherein $(x, y, z)^T$ is the coordinate in the camera coordinate system, i.e., $P$; $(x_n, y_n, 1)^T$ is the homogeneous form of the normalized image coordinates on the image plane; and $d_x$ and $d_y$ are the displacements of the image plane center point relative to the image plane origin, by which the normalized coordinates are shifted into the image coordinate frame.
In order to solve the above problems, the present invention further provides a bridge azimuth recognition system combining a convolutional neural network and data fusion, which comprises:
an image acquisition module for acquiring camera image data;
a judging module for judging whether a bridge exists in the image by using the trained convolutional neural network;
a coordinate extraction module for extracting the bridge image coordinates when a bridge exists;
a data conversion module for converting the marine radar data to the image plane of the camera;
a matching module for matching the bridge image coordinates with the image coordinates of the marine radar data on the image plane;
and an azimuth calculation module for calculating the azimuth of the bridge in the marine radar.
As a further improvement of the present invention, converting the marine radar data to the image plane of the camera comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane.
As a further improvement of the present invention, the marine radar data is converted to the camera coordinate system using the following formula:

$$P = C\,P_r + r$$

wherein $P_r \in \mathbb{R}^3$ is a coordinate in the marine radar coordinate system; $P \in \mathbb{R}^3$ is the corresponding coordinate in the camera coordinate system; $C \in \mathbb{R}^{3 \times 3}$ is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and $r \in \mathbb{R}^3$ is the displacement of the marine radar coordinate system relative to the camera coordinate system.
As a further refinement of the invention, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

$$\begin{pmatrix} x_n \\ y_n \\ 1 \end{pmatrix} = \frac{1}{z}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

wherein $(x, y, z)^T$ is the coordinate in the camera coordinate system, i.e., $P$; $(x_n, y_n, 1)^T$ is the homogeneous form of the normalized image coordinates on the image plane; and $d_x$ and $d_y$ are the displacements of the image plane center point relative to the image plane origin, by which the normalized coordinates are shifted into the image coordinate frame.
The invention has the beneficial effects that:
the bridge azimuth recognition method and system combining a convolutional neural network and data fusion judge whether a bridge exists in the image through the convolutional neural network and convert the marine radar data to the image plane of the camera, thereby realizing data fusion with the bridge image coordinates and obtaining the azimuth information of the bridge relative to the intelligent ship, with high accuracy and good reliability.
The foregoing is merely an overview of the technical solution of the present invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with the contents of the specification, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of a bridge azimuth recognition method combining a convolutional neural network and data fusion in a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the order in which image data is processed and passed between the layers in a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram showing the relationship between the coordinate systems in the front projection model of the camera according to a preferred embodiment of the present invention.
Detailed Description
The present invention will be further described below with reference to the accompanying drawings and specific embodiments, so that those skilled in the art can better understand and practice the invention; the embodiments are not intended to be limiting.
Referring to FIG. 1, a bridge azimuth recognition method combining a convolutional neural network and data fusion in a preferred embodiment of the invention comprises the following steps:
s10, acquiring camera image data.
S20, judging whether a bridge exists in the image by using the trained convolutional neural network. The bridge includes a river-crossing bridge, a sea-crossing bridge, and the like.
The structure of the convolutional neural network is suited to identifying a bridge in an image. Optionally, the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer, wherein the Conv layer is a convolutional layer for performing a convolution operation on the image data; the ReLU layer is an activation function layer using a ReLU function; the Pool layer is a pooling layer for simplifying the processing and reducing the required storage space; the Affine layer is a fully connected layer for forward and backward propagation calculation; the Dropout layer randomly deletes a certain number of neurons to prevent overfitting; and the Softmax layer is an output layer that uses a Softmax function suitable for classification problems to output the result of whether a river-crossing bridge exists in the image.
As shown in FIG. 2, the image data is optionally processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer.
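Purely as an illustration of this layer sequence, a minimal sketch in PyTorch follows. The input size (single-channel 64×64 images), filter counts, hidden width and dropout rate are assumptions of the sketch, not parameters disclosed by the invention.

```python
# Hypothetical sketch of the layer order of FIG. 2 (PyTorch).
# Channel counts, input size and dropout rate are assumptions, not patent data.
import torch
import torch.nn as nn

class BridgeNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # Conv
            nn.ReLU(),                                    # ReLU
            nn.MaxPool2d(2),                              # Pool
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # Conv
            nn.ReLU(),                                    # ReLU
            nn.MaxPool2d(2),                              # Pool
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 100),                 # Affine (assumes 64x64 input)
            nn.ReLU(),                                    # ReLU
            nn.Dropout(0.5),                              # Dropout
            nn.Linear(100, num_classes),                  # Affine
            nn.Dropout(0.5),                              # Dropout, matching the stated order
            nn.Softmax(dim=1),                            # Softmax output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)
```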
Optionally, the Adam method is used when optimizing the parameters of the convolutional neural network, which improves the search of the parameter space and allows the parameters to converge as quickly as possible.
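For illustration, a single training step with the Adam optimizer might look as follows; the learning rate, batch size and loss function are assumptions (the embodiment only names the Adam method), and BridgeNet is the sketch defined above.

```python
# Hypothetical training step with Adam; hyperparameters are assumptions.
import torch

model = BridgeNet()  # the sketch defined above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 1, 64, 64)   # dummy batch of camera images
labels = torch.randint(0, 2, (8,))   # 1 = bridge present, 0 = no bridge

optimizer.zero_grad()
probs = model(images)  # Softmax probabilities from the output layer
loss = torch.nn.functional.nll_loss(torch.log(probs + 1e-9), labels)
loss.backward()        # backpropagation
optimizer.step()       # Adam parameter update
```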
S30, when a bridge exists, extracting the bridge image coordinates; otherwise, the process returns to step S10.
S40, converting the marine radar data to the image plane of the camera. This comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane. Optionally, the front projection model of the camera is used for the conversion, see FIG. 3.
Optionally, the marine radar data is converted to the camera coordinate system using the following formula:

$$P = C\,P_r + r$$

wherein $P_r \in \mathbb{R}^3$ is a coordinate in the marine radar coordinate system; $P \in \mathbb{R}^3$ is the corresponding coordinate in the camera coordinate system; $C \in \mathbb{R}^{3 \times 3}$ is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and $r \in \mathbb{R}^3$ is the displacement of the marine radar coordinate system relative to the camera coordinate system.
Optionally, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

$$\begin{pmatrix} x_n \\ y_n \\ 1 \end{pmatrix} = \frac{1}{z}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

wherein $(x, y, z)^T$ is the coordinate in the camera coordinate system, i.e., $P$; $(x_n, y_n, 1)^T$ is the homogeneous form of the normalized image coordinates on the image plane; and $d_x$ and $d_y$ are the displacements of the image plane center point relative to the image plane origin, by which the normalized coordinates are shifted into the image coordinate frame.
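A numerical sketch of this two-step conversion follows. The rotation C, displacement r, focal scale f and centre offsets d_x, d_y are placeholder calibration values, and the focal scale f in particular is an assumption of the sketch (the embodiment only names the centre offsets).

```python
# Hypothetical sketch of S40: radar point -> camera frame -> image plane.
# C, r, f, d_x, d_y are placeholder calibration values, not data from the patent.
import numpy as np

C = np.eye(3)                    # rotation of the radar frame relative to the camera frame
r = np.array([0.0, -1.5, 0.3])   # displacement of the radar frame relative to the camera frame (m)
f = 800.0                        # focal scale in pixels (assumption of this sketch)
d_x, d_y = 320.0, 240.0          # displacement of the image-plane centre from its origin (pixels)

def radar_to_camera(P_r: np.ndarray) -> np.ndarray:
    """Rigid-body transform of a radar point into the camera coordinate system."""
    return C @ P_r + r

def camera_to_image(P: np.ndarray) -> tuple[float, float]:
    """Normalize by depth, then scale and shift onto the image plane."""
    x, y, z = P
    x_n, y_n = x / z, y / z              # normalized image coordinates
    return f * x_n + d_x, f * y_n + d_y  # image coordinates (u, v)

P_r = np.array([40.0, 3.0, 120.0])       # a radar return in radar coordinates (m)
u, v = camera_to_image(radar_to_camera(P_r))
```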
S50, matching the bridge image coordinates with the image coordinates of the marine radar data on the image plane. Optionally, the matching method is nearby matching: the marine radar data is searched around the image coordinates of the bridge.
S60, calculating the azimuth of the bridge in the marine radar, from which the azimuth of the bridge relative to the intelligent ship is obtained.
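To make steps S50 and S60 concrete, a sketch of the nearby matching and the azimuth read-out follows; the 20-pixel search gate and the record layout (projected pixel coordinates plus each return's range and azimuth) are assumptions of the sketch.

```python
# Hypothetical nearby matching (S50) and azimuth read-out (S60).
# The 20-pixel gate and the (u, v, range, azimuth) record layout are assumptions.
import numpy as np

def match_and_get_azimuths(bridge_uv, radar_records, gate_px=20.0):
    """For each bridge image coordinate, find the nearest projected radar
    return within gate_px pixels and collect that return's azimuth (degrees)."""
    azimuths = []
    for u, v in bridge_uv:
        d = np.hypot(radar_records[:, 0] - u, radar_records[:, 1] - v)
        i = int(np.argmin(d))
        if d[i] <= gate_px:
            azimuths.append(float(radar_records[i, 3]))
    return azimuths

bridge_uv = [(310.0, 200.0), (352.0, 198.0)]            # bridge pixels from the CNN stage
radar_records = np.array([[308.0, 201.0, 420.0, 12.5],  # u, v, range (m), azimuth (deg)
                          [355.0, 199.0, 430.0, 14.0]])
azimuths = match_and_get_azimuths(bridge_uv, radar_records)
# The matched azimuths give the bearing of the bridge relative to the intelligent ship.
```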
The preferred embodiment of the invention also discloses a bridge azimuth recognition system combining a convolutional neural network and data fusion, which comprises an image acquisition module, a judging module, a coordinate extraction module, a data conversion module, a matching module and an azimuth calculation module.
The image acquisition module is used for acquiring camera image data.
The judging module is used for judging whether a bridge exists in the image by using the trained convolutional neural network. The bridge includes a river-crossing bridge, a sea-crossing bridge, and the like.
The structure of the convolutional neural network is suited to identifying a bridge in an image. Optionally, the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer, wherein the Conv layer is a convolutional layer for performing a convolution operation on the image data; the ReLU layer is an activation function layer using a ReLU function; the Pool layer is a pooling layer for simplifying the processing and reducing the required storage space; the Affine layer is a fully connected layer for forward and backward propagation calculation; the Dropout layer randomly deletes a certain number of neurons to prevent overfitting; and the Softmax layer is an output layer that uses a Softmax function suitable for classification problems to output the result of whether a river-crossing bridge exists in the image.
As shown in FIG. 2, the image data is optionally processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer.
Optionally, the Adam method is used when optimizing the parameters of the convolutional neural network, which improves the search of the parameter space and allows the parameters to converge as quickly as possible.
The coordinate extraction module is used for extracting the bridge image coordinates when a bridge exists; otherwise, camera image data continues to be acquired through the image acquisition module.
The data conversion module is used for converting the marine radar data to the image plane of the camera. This comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane. Optionally, the front projection model of the camera is used for the conversion, see FIG. 3.
Optionally, the marine radar data is converted to the camera coordinate system using the following formula:

$$P = C\,P_r + r$$

wherein $P_r \in \mathbb{R}^3$ is a coordinate in the marine radar coordinate system; $P \in \mathbb{R}^3$ is the corresponding coordinate in the camera coordinate system; $C \in \mathbb{R}^{3 \times 3}$ is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and $r \in \mathbb{R}^3$ is the displacement of the marine radar coordinate system relative to the camera coordinate system.
Optionally, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

$$\begin{pmatrix} x_n \\ y_n \\ 1 \end{pmatrix} = \frac{1}{z}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

wherein $(x, y, z)^T$ is the coordinate in the camera coordinate system, i.e., $P$; $(x_n, y_n, 1)^T$ is the homogeneous form of the normalized image coordinates on the image plane; and $d_x$ and $d_y$ are the displacements of the image plane center point relative to the image plane origin, by which the normalized coordinates are shifted into the image coordinate frame.
The matching module is used for matching the bridge image coordinates with the image coordinates of the marine radar data on the image plane. Optionally, the matching method is nearby matching: the marine radar data is searched around the image coordinates of the bridge.
The azimuth calculation module is used for calculating the azimuth of the bridge in the marine radar, from which the azimuth of the bridge relative to the intelligent ship is obtained.
The bridge azimuth recognition method and system combining a convolutional neural network and data fusion judge whether a bridge exists in the image through the convolutional neural network and convert the marine radar data to the image plane of the camera, thereby realizing data fusion with the bridge image coordinates and obtaining the azimuth information of the bridge relative to the intelligent ship, with high accuracy and good reliability.
The above embodiments are merely preferred embodiments described to fully illustrate the present invention, and the scope of the invention is not limited thereto. Equivalent substitutions and modifications made by those skilled in the art on the basis of the present invention fall within the scope of the invention. The scope of protection of the invention is defined by the claims.

Claims (2)

1. A bridge azimuth recognition method combining a convolutional neural network and data fusion, characterized by comprising the following steps:
acquiring camera image data;
judging whether a bridge exists in the image by using the trained convolutional neural network;
when a bridge exists, extracting the bridge image coordinates;
converting the marine radar data to the image plane of the camera;
matching the bridge image coordinates with the image coordinates of the marine radar data on the image plane;
and calculating the azimuth of the bridge in the marine radar,
wherein the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer; the Conv layer is a convolutional layer for performing a convolution operation on the image data; the ReLU layer is an activation function layer using a ReLU function; the Pool layer is a pooling layer; the Affine layer is a fully connected layer for forward and backward propagation calculation; the Dropout layer is used for randomly deleting a certain number of neurons; the Softmax layer is an output layer that uses a Softmax function suitable for classification problems to output the result of whether a river-crossing bridge exists in the image; and the image data is processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer;
wherein converting the marine radar data to the image plane of the camera comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane;
the marine radar data is converted to the camera coordinate system using the following formula:

$$P = C\,P_r + r$$

wherein $P_r \in \mathbb{R}^3$ is a coordinate in the marine radar coordinate system; $P \in \mathbb{R}^3$ is the corresponding coordinate in the camera coordinate system; $C \in \mathbb{R}^{3 \times 3}$ is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and $r \in \mathbb{R}^3$ is the displacement of the marine radar coordinate system relative to the camera coordinate system;

and the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

$$\begin{pmatrix} x_n \\ y_n \\ 1 \end{pmatrix} = \frac{1}{z}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

wherein $(x, y, z)^T$ is the coordinate in the camera coordinate system, i.e., $P$; $(x_n, y_n, 1)^T$ is the homogeneous form of the normalized image coordinates on the image plane; and $d_x$ and $d_y$ are the displacements of the image plane center point relative to the image plane origin.
2. A bridge azimuth recognition system combining a convolutional neural network and data fusion, characterized by comprising:
an image acquisition module for acquiring camera image data;
a judging module for judging whether a bridge exists in the image by using a trained convolutional neural network, wherein the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer; the Conv layer is a convolutional layer for performing a convolution operation on the image data; the ReLU layer is an activation function layer using a ReLU function; the Pool layer is a pooling layer; the Affine layer is a fully connected layer for forward and backward propagation calculation; the Dropout layer is used for randomly deleting a certain number of neurons; the Softmax layer is an output layer that uses a Softmax function suitable for classification problems to output the result of whether a river-crossing bridge exists in the image; and the image data is processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer;
a coordinate extraction module for extracting the bridge image coordinates when a bridge exists;
a data conversion module for converting the marine radar data to the image plane of the camera;
a matching module for matching the bridge image coordinates with the image coordinates of the marine radar data on the image plane;
and an azimuth calculation module for calculating the azimuth of the bridge in the marine radar;
wherein converting the marine radar data to the image plane of the camera comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane;
the marine radar data is converted to the camera coordinate system using the following formula:

$$P = C\,P_r + r$$

wherein $P_r \in \mathbb{R}^3$ is a coordinate in the marine radar coordinate system; $P \in \mathbb{R}^3$ is the corresponding coordinate in the camera coordinate system; $C \in \mathbb{R}^{3 \times 3}$ is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and $r \in \mathbb{R}^3$ is the displacement of the marine radar coordinate system relative to the camera coordinate system;

and the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

$$\begin{pmatrix} x_n \\ y_n \\ 1 \end{pmatrix} = \frac{1}{z}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

wherein $(x, y, z)^T$ is the coordinate in the camera coordinate system, i.e., $P$; $(x_n, y_n, 1)^T$ is the homogeneous form of the normalized image coordinates on the image plane; and $d_x$ and $d_y$ are the displacements of the image plane center point relative to the image plane origin.
CN202110050504.4A 2021-01-14 2021-01-14 Bridge azimuth recognition method and system combining convolutional neural network and data fusion Active CN112733753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110050504.4A CN112733753B (en) 2021-01-14 2021-01-14 Bridge azimuth recognition method and system combining convolutional neural network and data fusion

Publications (2)

Publication Number Publication Date
CN112733753A CN112733753A (en) 2021-04-30
CN112733753B 2024-04-30

Family

ID=75593139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110050504.4A Active CN112733753B (en) 2021-01-14 2021-01-14 Bridge azimuth recognition method and system combining convolutional neural network and data fusion

Country Status (1)

Country Link
CN (1) CN112733753B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109444911A (en) * 2018-10-18 2019-03-08 哈尔滨工程大学 A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion
CN110188696A (en) * 2019-05-31 2019-08-30 华南理工大学 A kind of water surface is unmanned to equip multi-source cognitive method and system
US10408939B1 (en) * 2019-01-31 2019-09-10 StradVision, Inc. Learning method and learning device for integrating image acquired by camera and point-cloud map acquired by radar or LiDAR corresponding to image at each of convolution stages in neural network and testing method and testing device using the same

Also Published As

Publication number Publication date
CN112733753A (en) 2021-04-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant