CN112733753A - Bridge orientation identification method and system combining convolutional neural network and data fusion
- Publication number: CN112733753A (application CN202110050504.4A)
- Authority: CN (China)
- Prior art keywords: layer, bridge, image, coordinate system, data
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/176—Urban or other man-made structures (under G06V20/00 Scenes; scene-specific elements; G06V20/10 Terrestrial scenes)
- G06F18/22—Matching criteria, e.g. proximity measures (under G06F18/00 Pattern recognition; G06F18/20 Analysing)
- G06F18/25—Fusion techniques (under G06F18/00 Pattern recognition; G06F18/20 Analysing)
- G06N3/045—Combinations of networks (under G06N3/02 Neural networks; G06N3/04 Architecture)
- G06N3/084—Backpropagation, e.g. using gradient descent (under G06N3/02 Neural networks; G06N3/08 Learning methods)
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a bridge orientation identification method and system combining a convolutional neural network and data fusion. The method comprises the following steps: acquiring camera image data; judging whether a bridge exists in the image by using a trained convolutional neural network; when a bridge exists, extracting the image coordinates of the bridge; converting marine radar data to the image plane of the camera; matching the image coordinates of the bridge with the image coordinates of the marine radar data on the image plane; and calculating the orientation of the bridge in the marine radar. The method and system judge whether a bridge exists in the image with the convolutional neural network, convert the marine radar data to the image plane of the camera to fuse it with the image coordinates of the bridge, and thereby solve for the orientation of the bridge relative to the intelligent ship. The method achieves high accuracy and good reliability.
Description
Technical Field
The invention relates to the technical field of intelligent ships, and in particular to a bridge orientation identification method and system combining a convolutional neural network and data fusion.
Background
With the rise of the artificial intelligence industry, intelligent ships have attracted attention as part of that industry. When an intelligent ship sails on an inland river and a river-crossing bridge spans the channel, the marine radar acquires returns from the bridge that appear as a barrier across the channel; judging from the marine radar data alone, the channel would seem impassable. To eliminate this influence, the ship must perceive targets on the water surface, recognize their characteristics, and judge their orientation, so that autonomous navigation can decide whether an object must be avoided as an obstacle or can be ignored.
In the prior art, convolutional neural networks are used to identify objects such as navigation marks and elevated bridges, but not river-crossing bridges, so the network structures differ. Moreover, the prior art identifies only the classification of an object, for example that it is a navigation mark or an elevated structure, and cannot determine its orientation, which is of little help to the autonomous navigation of an intelligent ship.
Disclosure of Invention
The invention aims to provide a bridge orientation identification method and system combining a convolutional neural network and data fusion that are high in accuracy and good in reliability.
In order to solve the above problems, the present invention provides a bridge orientation identification method combining a convolutional neural network and data fusion, which includes:
acquiring camera image data;
judging whether a bridge exists in the image by using the trained convolutional neural network;
when a bridge exists, extracting the image coordinates of the bridge;
converting the marine radar data to an image plane of the camera;
matching the image coordinates of the bridge with the image coordinates of the marine radar data on the image plane;
calculating the orientation of the bridge in the marine radar.
As a further improvement of the present invention, the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer. The Conv layer is a convolution layer for performing convolution operations on the image data; the ReLU layer is an activation layer using the ReLU function; the Pool layer is a pooling layer; the Affine layer is a fully connected layer used in the forward and backward propagation calculations; the Dropout layer randomly deletes a certain number of neurons; and the Softmax layer is the output layer, which uses the Softmax function, suitable for classification problems, to output whether a river-crossing bridge exists in the image.
As a further improvement of the invention, the image data is processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer; a minimal sketch of this ordering follows.
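A minimal sketch of this layer ordering, written here in PyTorch. The input size (3×64×64), channel counts, hidden width, dropout rates and the two-class output are illustrative assumptions; the patent text specifies only the order of the layers.

```python
import torch.nn as nn

# Layer order from the patent: Conv, ReLU, Pool, Conv, ReLU, Pool,
# Affine, ReLU, Dropout, Affine, Dropout, Softmax.
# All sizes (3x64x64 input, 16/32 channels, 128 hidden units,
# 2 output classes) are assumptions for illustration only.
bridge_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # Conv layer: convolution on image data
    nn.ReLU(),                                    # ReLU activation layer
    nn.MaxPool2d(2),                              # Pool layer: downsampling
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # second Conv layer
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128),                 # Affine (fully connected) layer
    nn.ReLU(),
    nn.Dropout(p=0.5),                            # Dropout: randomly drops neurons
    nn.Linear(128, 2),                            # Affine layer: bridge / no bridge
    nn.Dropout(p=0.5),
    nn.Softmax(dim=1),                            # Softmax output layer
)
```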
As a further improvement of the invention, converting the marine radar data to the image plane of the camera comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane.
As a further improvement of the invention, the marine radar data is converted to the camera coordinate system using the following formula:

P = C·P_r + r

wherein P_r ∈ R^3 denotes the coordinates in the marine radar coordinate system; P ∈ R^3 denotes the coordinates in the camera coordinate system; C ∈ R^(3×3) is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and r ∈ R^3 is the displacement of the marine radar coordinate system relative to the camera coordinate system.
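A minimal numpy sketch of this conversion, assuming the rotation matrix C and the displacement r have already been obtained from an extrinsic calibration of the radar and camera (the calibration procedure itself is not given in the patent text):

```python
import numpy as np

def radar_to_camera(P_r: np.ndarray, C: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Map a marine-radar point P_r (shape (3,)) into the camera frame.

    Implements P = C @ P_r + r, with C the 3x3 rotation matrix and r the
    displacement of the radar coordinate system relative to the camera.
    """
    return C @ P_r + r
```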
As a further development of the invention, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

(x_n, y_n, 1)^T = (x/z + d_x, y/z + d_y, 1)^T

wherein (x, y, z)^T are the coordinates in the camera coordinate system, i.e. P; (x_n, y_n, 1)^T is the homogeneous form of the normalized image coordinates on the image plane; and d_x and d_y are the horizontal and vertical displacements of the center point of the image plane from the origin of the image plane.
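A companion sketch of the projection step under the front projection model; the unit focal length is an assumption made here because the patent text defines only the center-point offsets d_x and d_y:

```python
import numpy as np

def camera_to_image(P: np.ndarray, d_x: float, d_y: float) -> tuple[float, float]:
    """Project a camera-frame point P = (x, y, z) onto the image plane.

    Pinhole (front projection) model with an assumed unit focal length;
    d_x and d_y shift the normalized coordinates by the displacement of
    the image-plane center from the image-plane origin.
    """
    x, y, z = P
    return x / z + d_x, y / z + d_y  # normalized coordinates plus center offset
```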
In order to solve the above problem, the present invention further provides a bridge orientation identification system combining a convolutional neural network and data fusion, which includes:
the image acquisition module is used for acquiring camera image data;
the judging module is used for judging whether a bridge exists in the image by utilizing the trained convolutional neural network;
the coordinate extraction module is used for extracting the image coordinates of the bridge when the bridge exists;
the data conversion module is used for converting the marine radar data to the image plane of the camera;
the matching module is used for matching the image coordinates of the bridge and the image coordinates of the marine radar data on the image plane;
and the orientation calculation module is used for calculating the orientation of the bridge in the marine radar.
As a further improvement of the invention, converting the marine radar data to the image plane of the camera comprises: converting the marine radar data to the camera coordinate system, and then converting it from the camera coordinate system to the image plane.
As a further improvement of the invention, the marine radar data is converted to the camera coordinate system using the following formula:

P = C·P_r + r

wherein P_r ∈ R^3 denotes the coordinates in the marine radar coordinate system; P ∈ R^3 denotes the coordinates in the camera coordinate system; C ∈ R^(3×3) is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and r ∈ R^3 is the displacement of the marine radar coordinate system relative to the camera coordinate system.
As a further development of the invention, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

(x_n, y_n, 1)^T = (x/z + d_x, y/z + d_y, 1)^T

wherein (x, y, z)^T are the coordinates in the camera coordinate system, i.e. P; (x_n, y_n, 1)^T is the homogeneous form of the normalized image coordinates on the image plane; and d_x and d_y are the horizontal and vertical displacements of the center point of the image plane from the origin of the image plane.
The invention has the beneficial effects that:
the method and the system for identifying the orientation of the bridge, which are combined with the convolutional neural network and the data fusion, judge whether the bridge exists in the image or not through the convolutional neural network, convert the marine radar data into the image plane of the camera, realize the data fusion with the image coordinates of the bridge, and further solve the orientation information of the bridge relative to the intelligent ship. The method has the advantages of high accuracy and good reliability.
The foregoing description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the content of the description, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of a bridge orientation identification method incorporating a convolutional neural network and data fusion in a preferred embodiment of the present invention;
FIG. 2 is a schematic representation of the sequence of processing and transferring image data between each layer in a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of the relationship between the coordinate systems under the camera front projection model in a preferred embodiment of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying figures and specific examples, so that those skilled in the art can better understand and practice the invention; the examples, however, are not intended to limit the invention.
As shown in FIG. 1, the bridge orientation identification method combining a convolutional neural network and data fusion in a preferred embodiment of the present invention comprises the following steps:
and S10, acquiring camera image data.
And S20, judging whether the bridge exists in the image by using the trained convolutional neural network. Wherein, the bridge comprises a river-crossing bridge, a sea-crossing bridge and the like.
The structure of the convolutional neural network is suited to identifying a bridge in an image. Optionally, the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer. The Conv layer is a convolution layer for performing convolution operations on the image data; the ReLU layer is an activation layer using the ReLU function; the Pool layer simplifies processing and reduces the required storage space; the Affine layer is a fully connected layer used in the forward and backward propagation calculations; the Dropout layer randomly deletes a certain number of neurons to prevent overfitting; and the Softmax layer is the output layer, which uses the Softmax function, suitable for classification problems, to output whether a river-crossing bridge exists in the image.
As shown in FIG. 2, optionally, the image data is processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer.
Optionally, the Adam method is used when optimizing the parameters of the convolutional neural network; it improves the search of the parameter space so that the parameters converge as quickly as possible, as in the training-step sketch below.
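A sketch of one such Adam update for the classifier sketched earlier; `bridge_classifier` refers to that sketch, `images` and `labels` stand in for a labelled bridge/no-bridge batch, and the learning rate and loss choice are assumptions:

```python
import torch

optimizer = torch.optim.Adam(bridge_classifier.parameters(), lr=1e-3)
loss_fn = torch.nn.NLLLoss()  # negative log-likelihood over log-probabilities

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    probs = bridge_classifier(images)                # Softmax probabilities
    loss = loss_fn(torch.log(probs + 1e-9), labels)  # log-probs for NLLLoss
    loss.backward()                                  # backpropagation
    optimizer.step()                                 # Adam parameter update
    return loss.item()
```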
S30, when a bridge exists, extracting the image coordinates of the bridge; otherwise, returning to step S10.
S40, converting the marine radar data to the image plane of the camera. This comprises converting the marine radar data to the camera coordinate system and then converting it from the camera coordinate system to the image plane. Optionally, the front projection model of the camera is used in the conversion, see FIG. 3.
Optionally, the marine radar data is converted to the camera coordinate system using the following formula:

P = C·P_r + r

wherein P_r ∈ R^3 denotes the coordinates in the marine radar coordinate system; P ∈ R^3 denotes the coordinates in the camera coordinate system; C ∈ R^(3×3) is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and r ∈ R^3 is the displacement of the marine radar coordinate system relative to the camera coordinate system.
Optionally, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

(x_n, y_n, 1)^T = (x/z + d_x, y/z + d_y, 1)^T

wherein (x, y, z)^T are the coordinates in the camera coordinate system, i.e. P; (x_n, y_n, 1)^T is the homogeneous form of the normalized image coordinates on the image plane; and d_x and d_y are the horizontal and vertical displacements of the center point of the image plane from the origin of the image plane.
S50, matching the image coordinates of the bridge with the image coordinates of the marine radar data on the image plane. Optionally, the matching method is proximity matching: the marine radar data is searched around the image coordinates of the bridge.
S60, calculating the orientation of the bridge in the marine radar, from which the orientation of the bridge relative to the intelligent ship is obtained; a sketch of these two steps follows.
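A sketch of the proximity matching of step S50 and the orientation lookup of step S60, assuming each projected radar return carries the bearing measured by the radar; the function, its inputs and the pixel threshold are illustrative, not from the patent:

```python
import numpy as np

def bridge_azimuth(bridge_uv, radar_uv, radar_bearings, max_pixel_dist=20.0):
    """Match the bridge's image coordinates to the nearest projected radar
    return and report that return's bearing relative to the ship.

    bridge_uv      -- (2,) image coordinates of the detected bridge
    radar_uv       -- (N, 2) marine-radar returns projected onto the image plane
    radar_bearings -- (N,) azimuth of each return in the radar frame, in degrees
    """
    dists = np.linalg.norm(radar_uv - np.asarray(bridge_uv), axis=1)
    i = int(np.argmin(dists))
    if dists[i] > max_pixel_dist:  # no radar return near the bridge pixels
        return None
    return radar_bearings[i]       # orientation of the bridge in the marine radar
```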
The invention also discloses a bridge orientation identification system combining a convolutional neural network and data fusion, which comprises an image acquisition module, a judging module, a coordinate extraction module, a data conversion module, a matching module and an orientation calculation module.
The image acquisition module is used for acquiring camera image data.
The judging module is used for judging whether a bridge exists in the image by using the trained convolutional neural network. The bridge comprises a river-crossing bridge, a sea-crossing bridge and the like.
The structure of the convolutional neural network is suited to identifying a bridge in an image. Optionally, the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer. The Conv layer is a convolution layer for performing convolution operations on the image data; the ReLU layer is an activation layer using the ReLU function; the Pool layer simplifies processing and reduces the required storage space; the Affine layer is a fully connected layer used in the forward and backward propagation calculations; the Dropout layer randomly deletes a certain number of neurons to prevent overfitting; and the Softmax layer is the output layer, which uses the Softmax function, suitable for classification problems, to output whether a river-crossing bridge exists in the image.
As shown in FIG. 2, optionally, the image data is processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer.
Optionally, the Adam method is used when optimizing the parameters of the convolutional neural network; it improves the search of the parameter space so that the parameters converge as quickly as possible.
The coordinate extraction module is used for extracting the image coordinates of the bridge when a bridge exists; otherwise, camera image data continues to be acquired through the image acquisition module.
The data conversion module is used for converting the marine radar data to the image plane of the camera. This comprises converting the marine radar data to the camera coordinate system and then converting it from the camera coordinate system to the image plane. Optionally, the front projection model of the camera is used in the conversion, see FIG. 3.
Optionally, the marine radar data is converted to the camera coordinate system using the following formula:

P = C·P_r + r

wherein P_r ∈ R^3 denotes the coordinates in the marine radar coordinate system; P ∈ R^3 denotes the coordinates in the camera coordinate system; C ∈ R^(3×3) is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and r ∈ R^3 is the displacement of the marine radar coordinate system relative to the camera coordinate system.
Optionally, the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

(x_n, y_n, 1)^T = (x/z + d_x, y/z + d_y, 1)^T

wherein (x, y, z)^T are the coordinates in the camera coordinate system, i.e. P; (x_n, y_n, 1)^T is the homogeneous form of the normalized image coordinates on the image plane; and d_x and d_y are the horizontal and vertical displacements of the center point of the image plane from the origin of the image plane.
The matching module is used for matching the image coordinates of the bridge with the image coordinates of the marine radar data on the image plane. Optionally, the matching method is proximity matching: the marine radar data is searched around the image coordinates of the bridge.
The orientation calculation module is used for calculating the orientation of the bridge in the marine radar, from which the orientation of the bridge relative to the intelligent ship is obtained.
The method and system for bridge orientation identification combining a convolutional neural network and data fusion judge whether a bridge exists in the image with the convolutional neural network, convert the marine radar data to the image plane of the camera to fuse it with the image coordinates of the bridge, and thereby solve for the orientation of the bridge relative to the intelligent ship. The method achieves high accuracy and good reliability.
The above embodiments are merely preferred embodiments used to fully illustrate the present invention, and the protection scope of the present invention is not limited thereto. Equivalent substitutions or changes made by those skilled in the art on the basis of the present invention are all within the protection scope of the present invention. The protection scope of the invention is defined by the claims.
Claims (10)
1. A bridge orientation identification method combining a convolutional neural network and data fusion, characterized by comprising the following steps:
acquiring camera image data;
judging whether a bridge exists in the image by using the trained convolutional neural network;
when a bridge exists, extracting the image coordinates of the bridge;
converting the marine radar data to an image plane of the camera;
matching the image coordinates of the bridge with the image coordinates of the marine radar data on the image plane;
calculating the orientation of the bridge in the marine radar.
2. The bridge orientation identification method combining a convolutional neural network and data fusion of claim 1, wherein the convolutional neural network comprises a Conv layer, a ReLU layer, a Pool layer, an Affine layer, a Dropout layer and a Softmax layer; the Conv layer is a convolution layer for performing convolution operations on the image data; the ReLU layer is an activation layer using the ReLU function; the Pool layer is a pooling layer; the Affine layer is a fully connected layer used in the forward and backward propagation calculations; the Dropout layer randomly deletes a certain number of neurons; and the Softmax layer is the output layer, which uses the Softmax function, suitable for classification problems, to output whether a river-crossing bridge exists in the image.
3. The bridge orientation identification method combining a convolutional neural network and data fusion of claim 2, wherein the image data is processed and passed between the layers in the following order: Conv layer, ReLU layer, Pool layer, Conv layer, ReLU layer, Pool layer, Affine layer, ReLU layer, Dropout layer, Affine layer, Dropout layer, Softmax layer.
4. The bridge orientation identification method combining a convolutional neural network and data fusion of claim 1, wherein said converting the marine radar data to an image plane of the camera comprises: converting the marine radar data to a camera coordinate system, and then converting it from the camera coordinate system to the image plane.
5. The bridge orientation identification method combining a convolutional neural network and data fusion of claim 4, wherein the marine radar data is converted to the camera coordinate system using the following formula:

P = C·P_r + r

wherein P_r ∈ R^3 denotes the coordinates in the marine radar coordinate system; P ∈ R^3 denotes the coordinates in the camera coordinate system; C ∈ R^(3×3) is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and r ∈ R^3 is the displacement of the marine radar coordinate system relative to the camera coordinate system.
6. The bridge orientation identification method combining a convolutional neural network and data fusion of claim 5, wherein the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

(x_n, y_n, 1)^T = (x/z + d_x, y/z + d_y, 1)^T

wherein (x, y, z)^T are the coordinates in the camera coordinate system, i.e. P; (x_n, y_n, 1)^T is the homogeneous form of the normalized image coordinates on the image plane; and d_x and d_y are the horizontal and vertical displacements of the center point of the image plane from the origin of the image plane.
7. A bridge orientation identification system combining a convolutional neural network and data fusion, characterized by comprising:
the image acquisition module is used for acquiring camera image data;
the judging module is used for judging whether a bridge exists in the image by utilizing the trained convolutional neural network;
the coordinate extraction module is used for extracting the image coordinates of the bridge when the bridge exists;
the data conversion module is used for converting the marine radar data into an image plane of the camera;
the matching module is used for matching the image coordinates of the bridge and the image coordinates of the marine radar data on the image plane;
and the orientation calculation module is used for calculating the orientation of the bridge in the marine radar.
8. The bridge orientation identification system combining a convolutional neural network and data fusion of claim 7, wherein said converting the marine radar data to an image plane of the camera comprises: converting the marine radar data to a camera coordinate system, and then converting it from the camera coordinate system to the image plane.
9. The bridge orientation identification system combining a convolutional neural network and data fusion of claim 8, wherein the marine radar data is converted to the camera coordinate system using the following formula:

P = C·P_r + r

wherein P_r ∈ R^3 denotes the coordinates in the marine radar coordinate system; P ∈ R^3 denotes the coordinates in the camera coordinate system; C ∈ R^(3×3) is the rotation matrix of the marine radar coordinate system relative to the camera coordinate system; and r ∈ R^3 is the displacement of the marine radar coordinate system relative to the camera coordinate system.
10. The bridge orientation identification system combining a convolutional neural network and data fusion of claim 9, wherein the marine radar data is converted from the camera coordinate system to the image plane using the following formula:

(x_n, y_n, 1)^T = (x/z + d_x, y/z + d_y, 1)^T

wherein (x, y, z)^T are the coordinates in the camera coordinate system, i.e. P; (x_n, y_n, 1)^T is the homogeneous form of the normalized image coordinates on the image plane; and d_x and d_y are the horizontal and vertical displacements of the center point of the image plane from the origin of the image plane.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110050504.4A CN112733753B (en) | 2021-01-14 | 2021-01-14 | Bridge azimuth recognition method and system combining convolutional neural network and data fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112733753A true CN112733753A (en) | 2021-04-30 |
CN112733753B CN112733753B (en) | 2024-04-30 |
Family ID: 75593139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110050504.4A Active CN112733753B (en) | 2021-01-14 | 2021-01-14 | Bridge azimuth recognition method and system combining convolutional neural network and data fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112733753B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109444911A (en) * | 2018-10-18 | 2019-03-08 | 哈尔滨工程大学 | A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion |
CN110188696A (en) * | 2019-05-31 | 2019-08-30 | 华南理工大学 | A kind of water surface is unmanned to equip multi-source cognitive method and system |
US10408939B1 (en) * | 2019-01-31 | 2019-09-10 | StradVision, Inc. | Learning method and learning device for integrating image acquired by camera and point-cloud map acquired by radar or LiDAR corresponding to image at each of convolution stages in neural network and testing method and testing device using the same |
Also Published As
Publication number | Publication date |
---|---|
CN112733753B (en) | 2024-04-30 |
Similar Documents
- CN111626217B: Target detection and tracking method based on two-dimensional picture and three-dimensional point cloud fusion
- Lee et al.: Image-based ship detection and classification for unmanned surface vehicle using real-time object detection neural networks
- Shan et al.: SiamFPN: A deep learning method for accurate and real-time maritime ship tracking
- CN111913406B: Ship-shore collaborative simulation system for intelligent navigation and safety of ship
- Schöller et al.: Assessing deep-learning methods for object detection at sea from LWIR images
- Zhang et al.: Survey on Deep Learning-Based Marine Object Detection
- Zhang et al.: A object detection and tracking method for security in intelligence of unmanned surface vehicles
- Qiao et al.: Marine vessel re-identification: A large-scale dataset and global-and-local fusion-based discriminative feature learning
- Cheng et al.: Water target recognition method and application for unmanned surface vessels
- CN113987251A: Method, system, equipment and storage medium for establishing ship face characteristic database
- CN110472451B: Monocular camera-based artificial landmark oriented to AGV positioning and calculating method
- CN112001272A: Laser radar environment sensing method and system based on deep learning
- CN114529821A: Offshore wind power safety monitoring and early warning method based on machine vision
- Jha et al.: Autonomous mooring towards autonomous maritime navigation and offshore operations
- CN114049362A: Transform-based point cloud instance segmentation method
- CN111949034B: Unmanned ship autonomous navigation system
- CN112733753A: Bridge orientation identification method and system combining convolutional neural network and data fusion
- Zhou et al.: A real-time scene parsing network for autonomous maritime transportation
- Yang et al.: A Joint Ship Detection and Waterway Segmentation Method for Environment-Aware of USVs in Canal Waterways
- Saini et al.: Machine learning approach for detection of track assets for railroad health monitoring with drone images
- Dong et al.: Accurate and real-time visual detection algorithm for environmental perception of USVS under all-weather conditions
- CN114445572A: Deeplab V3+ based method for instantly positioning obstacles and constructing map in unfamiliar sea area
- CN114359493A: Method and system for generating three-dimensional semantic map for unmanned ship
- Gao et al.: Vehicle detection in high resolution image based on deep learning
- Xu et al.: An overview of robust maritime situation awareness methods
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant