CN109613559B - Device and method for distinguishing water-land boundary floaters based on vision and laser radar - Google Patents


Publication number
CN109613559B
Authority
CN
China
Prior art keywords
module
unmanned ship
image
laser radar
vision
Prior art date
Legal status
Active
Application number
CN201811547930.3A
Other languages
Chinese (zh)
Other versions
CN109613559A (en
Inventor
张霖
赵林坤
田劭宇
肖怀前
钱邦永
骆敏舟
Current Assignee
JIANGSU HUAISHU XINHE ADMINISTRATION
Changzhou Campus of Hohai University
Original Assignee
JIANGSU HUAISHU XINHE ADMINISTRATION
Changzhou Campus of Hohai University
Priority date
Filing date
Publication date
Application filed by JIANGSU HUAISHU XINHE ADMINISTRATION and Changzhou Campus of Hohai University
Priority: CN201811547930.3A
Publication of application CN109613559A
Application granted; publication of CN109613559B
Legal status: Active

Classifications

    • G01S17/89: Lidar systems specially adapted for mapping or imaging
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural network learning methods
    • G06N3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06V10/40: Extraction of image or video features


Abstract

The invention discloses a device for detecting the boundary between water-surface floating objects and land based on vision and lidar, comprising a data acquisition layer, a processing and discrimination layer and a communication interface layer. The data acquisition layer comprises a lidar, a vision system and a vision-processing SoC; the processing and discrimination layer comprises an MCU, a human-computer interaction module, a pose measurement module, an image analysis and processing module and a deep neural network training module; the communication interface layer comprises an Ethernet SoC, a Powerlink module and a CAN module. With the disclosed device and method, an unmanned ship can accurately locate the boundary between floater-covered water and the land or river bank, measure the clearance between the hull bottom and the river bed, and avoid dangers such as grounding in time.

Description

Device and method for distinguishing water-land boundary floaters based on vision and laser radar
Technical Field
The invention relates to a device and method for distinguishing water-surface floating objects from the land boundary based on vision and lidar, and belongs to the technical field of condition monitoring for intelligent industrial equipment.
Background
An unmanned ship for cleaning water-surface garbage is a special vessel that needs no manual operation and integrates autonomous navigation, detection, positioning and monitoring to collect and remove floating garbage. Near the bank, floating debris obscures the water-land boundary and shallow water exposes the ship to grounding, so detecting and discriminating that boundary is urgently needed to improve the smoothness and safety of unmanned-ship operation.
Disclosure of Invention
To solve these problems, the invention provides a device and method for detecting the boundary between water-surface floating objects and land based on vision and lidar, so that an unmanned ship can accurately locate the boundary between floater-covered water and the land or river bank, measure the clearance between the hull bottom and the river bed, and avoid dangers such as grounding in time. The detection device controls the rudder propeller through an interface to the unmanned ship, which lets maintenance personnel conveniently service and program the hull and the device, and diagnose, analyse and replace faulty lidar, vision-system and detection-device components.
The technical scheme of the invention is as follows:
A detection device for the boundary between water-surface floating objects and land based on vision and lidar comprises a data acquisition layer, a processing and discrimination layer and a communication interface layer;
the data acquisition layer comprises a lidar, a vision system and a vision-processing SoC; the SoC synchronizes the vision system's image data with the lidar-scanned point cloud by broadcasting absolute and relative network time; the vision system and the lidar simultaneously photograph and scan the same bank-water boundary region, collecting point cloud data and visual images of the river bank, algae and garbage; a built-in image preprocessing program smooths each image while preserving edges by applying anisotropic diffusion filtering, an established algorithm;
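The anisotropic diffusion the preprocessing step relies on is the classic Perona–Malik scheme: it smooths homogeneous regions while leaving high-gradient edges intact. A minimal NumPy sketch (function name and parameter values are ours, not from the patent):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while
    preserving strong edges (high-gradient pixels)."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # gradients toward the four neighbours (zero at the border)
        dn = np.zeros_like(u); dn[1:, :] = u[:-1, :] - u[1:, :]
        ds = np.zeros_like(u); ds[:-1, :] = u[1:, :] - u[:-1, :]
        de = np.zeros_like(u); de[:, :-1] = u[:, 1:] - u[:, :-1]
        dw = np.zeros_like(u); dw[:, 1:] = u[:, :-1] - u[:, 1:]
        # edge-stopping conductance: small where the gradient is large,
        # so diffusion happens in flat regions but not across edges
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += gamma * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```

`gamma` must stay at or below 0.25 for this explicit four-neighbour scheme to be stable; `kappa` sets the gradient magnitude above which a transition is treated as an edge rather than noise.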
the processing and discrimination layer comprises an MCU, a human-computer interaction module, a pose measurement module, an image analysis and processing module and a deep neural network training module; the MCU communicates with the pose measurement module over an SPI bus and with the human-computer interaction module over RS-485, and the deep neural network training module and the image analysis and processing module exchange data directly in MCU memory via DMA;
the human-computer interaction module provides manual remote control, display and audio output, and is used for setup and display during initial installation and when the unmanned ship returns to shore or harbor at the end of a task;
the pose measurement module establishes a topocentric rectangular coordinate system with the lidar at the origin and records the unmanned ship's heading-deflection information from the navigation software's output; from the bank point cloud coordinates collected by the lidar in this coordinate system it fits an approximate plane equation for the river bank by least squares, then obtains the perpendicular distance from the ship to the bank with the point-to-plane distance formula, preventing the unmanned ship from grounding by approaching the bank too closely; the module's data are also used to train the deep neural network training module's model;
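The least-squares plane fit and the point-to-plane distance described above can be sketched as follows; the SVD-based fit is a standard technique, and the function names are ours:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a*x + b*y + c*z + d = 0 through a point cloud,
    via SVD of the centred coordinates (normal = direction of least variance)."""
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]          # right singular vector of smallest singular value
    d = -normal @ centroid
    return normal, d

def point_to_plane_distance(p, normal, d):
    """Perpendicular distance |n.p + d| / |n| from point p to the plane."""
    return abs(normal @ np.asarray(p, dtype=np.float64) + d) / np.linalg.norm(normal)
```

Feeding the lidar's bank points to `fit_plane` and the ship's position to `point_to_plane_distance` yields the ship-to-bank clearance the module compares against its early-warning distance.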
the image analysis and processing module extracts feature points from the data acquisition layer's images with the standard SIFT algorithm, generates a descriptor vector for every feature point, and passes the visual images and their feature points to the deep neural network training module;
using a standard genetic algorithm, the deep neural network training module performs evolutionary training with the visual images and their feature parameters as input and the simultaneously recorded heading-deflection information as output;
the communication interface layer provides the device's communication interfaces for data transmission and comprises an Ethernet SoC, a Powerlink module and a CAN module.
A detection method for the boundary between water-surface floating objects and land based on vision and lidar, using the above device, comprises the following steps:
(1) determining the distance between the lidar irradiation area and the bow from the unmanned ship's structure and draft, and setting the early-warning distance at the water-land boundary from the ship's draft and the river-channel structure;
(2) setting the current irradiation direction and angle of the vision system and the lidar through the human-computer interaction module, so that the lidar's irradiation direction, the vision system's shooting direction, the bow direction and the direction of travel of the unmanned ship are coplanar;
(3) after the device is installed, placing the unmanned ship in the water and entering the device calibration stage;
(4) first, under manual operation, collecting visual images and lidar-scanned point cloud data of the following characteristic water-surface scenes: urban river-bank textures, algae, floating water garbage and other floating objects;
(5) the device acquires vision-system and lidar data through the data acquisition layer; the vision-processing SoC preprocesses the camera images with anisotropic diffusion filtering for denoising and enhancement, while the communication module of the communication interface layer communicates with the unmanned ship's propeller controller, receiving its signals to obtain the current thrust direction;
(6) the image analysis and processing module builds a sliding time window over the image sequence from the preprocessed images, compares successive images, detects in real time which feature parameters change and which remain constant, and builds a floater and river-bank feature model from these parameters;
(7) the pose measurement module collects the unmanned ship's heading-deflection information in real time and passes it to the deep neural network training module; from the known channel width and the coordinates of any three bank points in the lidar-origin coordinate system, the ship computes the water depth and offshore distance at its current position and judges the grounding risk against the preset early-warning distance;
(8) using a genetic algorithm, the deep neural network training module trains with the image analysis and processing module's feature results as network input and the pose measurement module's heading-deflection information as output, and stores the network model, at which point device calibration is complete;
(9) if the device has not completed calibration, repeating steps (4) to (8); if it has, repeating steps (4) to (7) and proceeding to step (10);
(10) with the stored network model, feeding the image analysis and processing module's output to the network as input and using the pose measurement module's information as output to guide the unmanned ship's on-water operation;
(11) repeating steps (9) and (10) to run the unmanned ship's online real-time workflow.
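Step (6)'s sliding-window comparison can be sketched as a short history of recent feature vectors whose newest entry is compared against the window mean; the class name, window size and threshold are illustrative, not from the patent:

```python
from collections import deque
import numpy as np

class SlidingWindowChangeDetector:
    """Keeps the last `size` per-frame feature vectors and flags which
    feature parameters changed versus the window mean (step 6)."""
    def __init__(self, size=5, threshold=0.5):
        self.window = deque(maxlen=size)
        self.threshold = threshold

    def update(self, features):
        features = np.asarray(features, dtype=np.float64)
        if self.window:
            mean = np.mean(self.window, axis=0)
            changed = np.abs(features - mean) > self.threshold
        else:
            # first frame: nothing to compare against yet
            changed = np.zeros(features.shape, dtype=bool)
        self.window.append(features)
        return changed   # True where a feature parameter changed
```

Parameters flagged `True` (changing between frames) would feed the floater model, while stable parameters characterize the river bank.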
The invention achieves the following beneficial effects:
the method for distinguishing the water surface floater from the land boundary based on the vision and the laser radar can judge the water surface boundary and the floater, avoid safety accidents such as stranding of the unmanned ship and the like, and can measure the water depth and the offshore distance of the position of the unmanned ship in real time. In addition, the judging device can also be directly used as a third-party judging device, so that the unmanned ship or other work of the unmanned ship is facilitated.
Drawings
FIG. 1 is a diagram of the hardware architecture of the apparatus of the present invention;
FIG. 2 is a schematic view of the installation position of the apparatus of the present invention;
fig. 3 is a schematic diagram of the detection of the device of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples serve only to illustrate the technical solutions of the invention more clearly and do not limit its protection scope.
As shown in fig. 1, a device for detecting the boundary between water-surface floating objects and land based on vision and lidar comprises a data acquisition layer, a processing and discrimination layer and a communication interface layer;
the data acquisition layer comprises a lidar, a vision system and a vision-processing SoC; the SoC synchronizes the vision system's image data with the lidar-scanned point cloud by broadcasting absolute and relative network time; the vision system and the lidar simultaneously photograph and scan the same bank-water boundary region, collecting point cloud data and visual images of the river bank, algae and garbage; a built-in image preprocessing program applies anisotropic diffusion filtering for real-time denoising and enhancement of the vision-system images;
the processing and discrimination layer comprises an MCU, a human-computer interaction module, a pose measurement module, an image analysis and processing module and a deep neural network training module; the MCU communicates with the pose measurement module over an SPI bus and with the human-computer interaction module over RS-485, and the deep neural network training module and the image analysis and processing module exchange data directly in MCU memory via DMA;
the human-computer interaction module provides manual remote control, display and audio output, and is used for setup and display during initial installation and when the unmanned ship returns to shore or harbor at the end of a task;
the pose measurement module determines the unmanned ship's real-time heading deflection by establishing a spatial coordinate system with the lidar at the origin and fixed axis directions; through this coordinate system it locates the bank point cloud coordinates acquired by the lidar and, using spatial vector geometry, computes an approximate plane equation for the river bank and the water depth beneath the ship's current position, preventing the unmanned ship from grounding in water that is too shallow; the module's data are also used to train the deep neural network training module's model;
the image analysis and processing module extracts feature points from the data acquisition layer's images with the SIFT algorithm, takes texture images of urban river banks, algae and floating water garbage as visual image feature parameters, matches them with the lidar point cloud map of the same time and area, and passes the visual images, feature parameters and point cloud map to the deep neural network training module;
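SIFT extraction itself is standard (OpenCV exposes it as `cv2.SIFT_create`, for example); the step of associating descriptors across frames or regions can be sketched with Lowe's ratio test in plain NumPy (the function name and the ratio value are illustrative):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test: accept a match only if the nearest descriptor
    in desc_b is clearly closer than the second nearest.
    Returns (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        j1, j2 = np.argsort(dists)[:2]               # two nearest candidates
        if dists[j1] < ratio * dists[j2]:            # unambiguous nearest?
            matches.append((i, j1))
    return matches
```

The ratio test discards ambiguous matches, which matters on water surfaces where ripples and repeated bank textures produce many near-duplicate descriptors.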
the deep neural network training module performs evolutionary training with the visual images, their feature parameters and the time-matched radar point cloud map as input, and the simultaneously recorded heading-deflection information as output;
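The patent trains its network with a genetic algorithm rather than backpropagation. A toy version of that evolutionary loop, shown here on a single linear neuron so it stays short (population sizes, mutation schedule and all names are ours; the real module would evolve a full network's weights):

```python
import numpy as np

def evolve_network(x, y, n_weights, pop=40, gens=60, sigma=0.3, seed=0):
    """Evolve the weight vector of a linear model y ~ x @ w with a simple
    genetic loop: select the best half, recombine, mutate, keep the elite."""
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop, n_weights))
    for _ in range(gens):
        err = ((x @ population.T - y[:, None]) ** 2).mean(axis=0)  # fitness
        elite = population[np.argsort(err)[: pop // 2]]            # selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        population = parents.mean(axis=1)                          # crossover
        population += rng.normal(0, sigma, population.shape)       # mutation
        population[0] = elite[0]                                   # elitism
        sigma *= 0.95                                              # anneal
    err = ((x @ population.T - y[:, None]) ** 2).mean(axis=0)
    return population[np.argmin(err)]
```

Elitism guarantees the best candidate is never lost, and annealing the mutation scale lets the search settle once it is near a good solution.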
the communication interface layer provides the device's communication interfaces for data transmission and comprises an Ethernet SoC, a Powerlink module and a CAN module.
As shown in fig. 2, a simplified diagram of the installation position: the device's lidar and vision system are mounted on top of the unmanned ship; viewed from above, they sit on the hull's line of symmetry near the bow, and the ship's balance is preserved as far as possible during installation.
As shown in fig. 3, the detection principle of the device: the vision system and the lidar collect water-surface information, which the vision-processing SoC and the processing and discrimination layer process; the vision system detects water-surface floating objects and the river bank, the trained model from the deep neural network training module discriminates the floater and land boundary, and, combined with the preset early-warning distance, this guides the unmanned ship's on-water operation.
A detection method for the boundary between water-surface floating objects and land based on vision and lidar, using the above device, comprises the following steps:
(1) determining the distance between the lidar irradiation area and the bow from the unmanned ship's structure and draft, and setting the early-warning distance at the water-land boundary from the ship's draft and the river-channel structure;
(2) setting the current irradiation direction and angle of the vision system and the lidar through the human-computer interaction module, so that the lidar's irradiation direction, the vision system's shooting direction, the bow direction and the direction of travel of the unmanned ship are coplanar;
(3) after the device is installed, placing the unmanned ship in the water and entering the device calibration stage;
(4) first, under manual operation, collecting visual images and lidar-scanned point cloud data of the following characteristic water-surface scenes: urban river-bank textures, algae, floating water garbage and other floating objects;
(5) the device acquires vision-system and lidar data through the data acquisition layer; the vision-processing SoC preprocesses the camera images with anisotropic diffusion filtering for denoising and enhancement, while the communication module of the communication interface layer communicates with the unmanned ship's propeller controller, receiving its signals to obtain the current thrust direction;
(6) the image analysis and processing module builds a sliding time window over the image sequence from the preprocessed images, compares successive images, detects in real time which feature parameters change and which remain constant, and builds a floater and river-bank feature model from these parameters;
(7) the pose measurement module collects the unmanned ship's heading-deflection information in real time and passes it to the deep neural network training module; from the known channel width and the coordinates of any three bank points in the lidar-origin coordinate system, the ship computes the water depth and offshore distance at its current position and judges the grounding risk against the preset early-warning distance;
(8) using a genetic algorithm, the deep neural network training module trains with the image analysis and processing module's feature results as network input and the pose measurement module's heading-deflection information as output, and stores the network model, at which point device calibration is complete;
(9) if the device has not completed calibration, repeating steps (4) to (8); if it has, repeating steps (4) to (7) and proceeding to step (10);
(10) with the stored network model, feeding the image analysis and processing module's output to the network as input and using the pose measurement module's information as output to guide the unmanned ship's on-water operation;
(11) repeating steps (9) and (10) to run the unmanned ship's online real-time workflow.
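Step (7)'s geometry, a bank plane through any three lidar-frame points and the ship's perpendicular distance to it, reduces to a cross product. A sketch under the patent's lidar-at-origin convention (function names are ours):

```python
import numpy as np

def plane_from_three_points(p1, p2, p3):
    """Normal vector and offset d of the plane n.x + d = 0 through
    three non-collinear points."""
    p1, p2, p3 = (np.asarray(p, dtype=np.float64) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)   # perpendicular to both edges
    return normal, -normal @ p1

def offshore_distance(normal, d):
    """Distance from the lidar origin (the ship) to the bank plane."""
    return abs(d) / np.linalg.norm(normal)
```

Because the coordinate system is anchored at the lidar, the ship's offshore distance is simply the plane's distance to the origin; the same point-to-plane formula applied in the vertical direction gives the water-depth estimate.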
The above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the invention, and such modifications and variations also fall within the protection scope of the invention.

Claims (2)

1. A device for discriminating between water-surface floating objects and the land boundary based on vision and lidar, characterized in that the device comprises a data acquisition layer, a processing and discrimination layer and a communication interface layer;
the data acquisition layer comprises a lidar, a vision system and a vision-processing SoC; the vision-processing SoC synchronizes the vision system's image data with the lidar-scanned point cloud by broadcasting absolute and relative network time; the vision system and the lidar simultaneously photograph and scan the same bank-water boundary region, collecting point cloud data and visual images of the river bank, algae and garbage; a built-in image preprocessing program applies anisotropic diffusion filtering for real-time denoising and enhancement of the vision-system images;
the processing and discrimination layer comprises an MCU, a human-computer interaction module, a pose measurement module, an image analysis and processing module and a deep neural network training module; the MCU communicates with the pose measurement module over an SPI bus and with the human-computer interaction module over RS-485, and the deep neural network training module and the image analysis and processing module exchange data directly in MCU memory via DMA;
the human-computer interaction module provides manual remote control, display and audio output, and is used for setup and display during initial installation and when the unmanned ship returns to shore or harbor at the end of a task;
the pose measurement module determines the unmanned ship's real-time heading deflection by establishing a spatial coordinate system with the lidar at the origin and fixed axis directions; through this coordinate system it locates the bank point cloud coordinates acquired by the lidar and, using spatial vector geometry, computes an approximate plane equation for the river bank and the water depth beneath the ship's current position, preventing the unmanned ship from grounding in water that is too shallow; the module's data are also used to train the deep neural network training module's model;
the image analysis and processing module extracts feature points from the data acquisition layer's images with the SIFT algorithm, takes texture images of urban river banks, algae and floating water garbage as visual image feature parameters, matches them with the lidar point cloud map of the same time and area, and passes the visual images, feature parameters and point cloud map to the deep neural network training module;
the deep neural network training module performs evolutionary training with the visual images, their feature parameters and the time-matched radar point cloud map as input, and the simultaneously recorded heading-deflection information as output;
the communication interface layer provides the device's communication interfaces for data transmission and comprises an Ethernet SoC, a Powerlink module and a CAN module.
2. A method for discriminating between a water surface floating object and a land boundary based on vision and lidar, using the apparatus of claim 1, comprising the steps of:
(1) determining the distance between a laser radar irradiation area and a bow according to the structure and the draft of the unmanned ship, and determining the early warning distance at the junction between the ship and the land and water according to the draft of the unmanned ship and the structure of a river channel;
(2) setting the irradiation direction and the irradiation angle of a current vision system and a laser radar through a man-machine interaction module, wherein the irradiation direction of the laser radar, the shooting direction of the vision system and the direction of the bow of the unmanned ship are coplanar;
(3) after the device is installed, placing the unmanned ship in water, and entering a device calibration stage;
(4) first, a manually operated unmanned ship acquires visual characteristic images and radar-scanned point cloud data of the following special water-surface features: urban river-bank textures, algae, floating garbage and other floaters;
(5) the device acquires the vision-system and laser-radar data through the data acquisition layer; the vision-processing SoC preprocesses the images shot by the camera with anisotropic filtering to reduce noise and enhance the images; meanwhile, the communication module of the communication interface layer communicates with the propeller controller of the unmanned ship, receiving its signals to obtain the current propulsion direction;
(6) the image analysis and processing module builds a sliding time window over the sequence of preprocessed images from the data acquisition layer, compares the images before and after, detects in real time which characteristic parameters in the images change and which remain unchanged, and finally establishes a feature model of the floaters and the river bank from these characteristic parameters;
(7) the pose measurement module collects the heading-deflection information of the unmanned ship in real time and transmits it to the deep neural training module; from the known river-channel width and the coordinates of any three river-bank points, with the radar as the origin of the spatial coordinate system, the unmanned ship calculates the water depth and offshore distance at its real-time position and judges against a preset early-warning distance whether grounding will occur;
(8) using a genetic algorithm, the deep neural training module trains a neural network model with the feature results of the image analysis and processing module as input and the heading-deflection information of the pose measurement module as output, and stores the network model; at this point the calibration of the device is complete;
(9) if the device has not completed calibration, steps (4) to (8) are repeated to carry out calibration; if calibration is complete, steps (4) and (7) are repeated and the flow proceeds to step (10);
(10) with the stored neural network model, the output of the image analysis and processing module is taken as the network input, and the resulting pose information guides the on-water operation of the unmanned ship;
(11) steps (9) and (10) are repeated to run the online real-time workflow of the unmanned ship.
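The grounding check of step (7) reduces to plane geometry once the radar is taken as the origin: the offshore distance is the perpendicular distance from the origin to the bank line through the scanned bank points. The sketch below is not part of the patent; the depth cross-section model (linear slope from the bank to mid-channel) and all parameter names are assumptions, since the patent does not fix how depth is derived from river width and distance.

```python
import math

def offshore_distance(bank_pts):
    """Perpendicular distance (metres) from the radar origin (0, 0) to the
    straight bank line fitted through three scanned bank points (plan view)."""
    # Line a*x + b*y + c = 0 through the first and last points; the three
    # bank returns are assumed to lie on (roughly) one straight bank segment.
    (x1, y1), _, (x3, y3) = bank_pts
    a, b = y3 - y1, x1 - x3
    c = -(a * x1 + b * y1)
    return abs(c) / math.hypot(a, b)

def grounding_check(bank_pts, river_width, max_depth, warn_dist):
    """Return (offshore distance, estimated depth, grounding warning).

    Depth uses a hypothetical symmetric linear cross-section: zero at the
    bank, max_depth at mid-channel.
    """
    d = offshore_distance(bank_pts)
    depth = max_depth * min(d, river_width / 2) / (river_width / 2)
    return d, depth, d < warn_dist

# Example: three bank points 5 m to starboard in a 40 m wide channel.
pts = [(-10.0, 5.0), (0.0, 5.0), (12.0, 5.0)]
d, depth, warn = grounding_check(pts, river_width=40.0, max_depth=3.0, warn_dist=8.0)
# d == 5.0 m offshore; the warning is raised because 5.0 < 8.0
```

A least-squares line fit over all bank returns would be more robust than two points, but the two-point form keeps the geometry of the early-warning comparison visible.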
CN201811547930.3A 2018-12-18 2018-12-18 Device and method for distinguishing water-land boundary floaters based on vision and laser radar Active CN109613559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811547930.3A CN109613559B (en) 2018-12-18 2018-12-18 Device and method for distinguishing water-land boundary floaters based on vision and laser radar


Publications (2)

Publication Number Publication Date
CN109613559A CN109613559A (en) 2019-04-12
CN109613559B true CN109613559B (en) 2022-04-22

Family

ID=66009464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811547930.3A Active CN109613559B (en) 2018-12-18 2018-12-18 Device and method for distinguishing water-land boundary floaters based on vision and laser radar

Country Status (1)

Country Link
CN (1) CN109613559B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414396B (en) * 2019-07-19 2021-07-16 中国人民解放军海军工程大学 Unmanned ship perception fusion algorithm based on deep learning
CN110371259B (en) * 2019-07-29 2021-06-01 河海大学常州校区 Near-shore water surface floater cleaning device and method
CN110632920A (en) * 2019-08-29 2019-12-31 上海海事大学 Unmanned ship control method
CN115202370B (en) * 2022-09-15 2023-02-03 泰山学院 Navigation control method and system for unmanned ship and readable storage medium
CN116642536A (en) * 2023-05-31 2023-08-25 交通运输部天津水运工程科学研究所 Breakwater structure safety monitoring and early warning system based on multi-source data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101125233B1 (en) * 2010-11-25 2012-03-21 재단법인대구경북과학기술원 Fusion technology-based security method and security system thereof
CN102975826A (en) * 2012-12-03 2013-03-20 上海海事大学 Portable ship water gauge automatic detection and identification method based on machine vision
CN103927751A (en) * 2014-04-18 2014-07-16 哈尔滨工程大学 Water surface optical visual image target area detection method based on gradient information fusion
CN108731788A (en) * 2018-05-22 2018-11-02 河海大学常州校区 A kind of working at height arm low-frequency vibration vision inspection apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the application of infrared vision detection technology and digital signal processors in inland-river ship navigation-aid systems; Zhou Hua; China Master's Theses Full-text Database, Information Science and Technology Series; 2009-06-15 (No. 06); main text pp. 9-50 *

Similar Documents

Publication Publication Date Title
CN109613559B (en) Device and method for distinguishing water-land boundary floaters based on vision and laser radar
CN109670411B (en) Ship point cloud depth image processing method and system based on generation countermeasure network
CA2950791C (en) Binocular visual navigation system and method based on power robot
CN108693535B (en) Obstacle detection system and method for underwater robot
CN101430195B (en) Method for computing electric power line ice-covering thickness by using video image processing technology
CN105841688B Ship auxiliary berthing method and system
CN102879786B (en) Detecting and positioning method and system for aiming at underwater obstacles
CN102435174B (en) Method and device for detecting barrier based on hybrid binocular vision
CN110414396A Unmanned boat perception fusion algorithm based on deep learning
CN104297758B (en) A kind of auxiliary berthing device and its method based on 2D pulse type laser radars
CN109711353B (en) Ship waterline area identification method based on machine vision
CN105184816A (en) Visual inspection and water surface target tracking system based on USV and detection tracking method thereof
CN110580044A (en) unmanned ship full-automatic navigation heterogeneous system based on intelligent sensing
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
CN110568416B (en) Radar effective detection area extraction method based on remote sensing image
CN111178295A (en) Parking space detection and model training method and device, vehicle, equipment and storage medium
CN113313005A (en) Power transmission conductor on-line monitoring method and system based on target identification and reconstruction
CN107796373B (en) Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model
CN112987751B (en) System and method for quickly detecting hidden sewage draining outlet in automatic cruising mode
CN207517196U (en) Actively anti-ship hits monitoring and warning system to bridge
CN109835441A Automatic piloting method and system for a water-quality-monitoring intelligent boat
CN109001756A (en) Multi-line laser radar obstacle detection system and method based on embedded device
CN111476762A (en) Obstacle detection method and device of inspection equipment and inspection equipment
WO2021168854A1 (en) Method and apparatus for free space detection
CN115060343A (en) Point cloud-based river water level detection system, detection method and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant