CN110188696A - Multi-source perception method and system for unmanned surface vehicles - Google Patents


Info

Publication number
CN110188696A
CN110188696A (application CN201910467501.3A)
Authority
CN
China
Prior art keywords
water surface
information
image
coordinate
camera
Prior art date
Legal status
Granted
Application number
CN201910467501.3A
Other languages
Chinese (zh)
Other versions
CN110188696B (en)
Inventor
洪晓斌
朱坤才
Current Assignee
South China University of Technology SCUT
Guangzhou Shipyard International Co Ltd
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN201910467501.3A
Priority to PCT/CN2019/089748 (published as WO2020237693A1)
Publication of CN110188696A
Application granted
Publication of CN110188696B
Active legal status
Anticipated expiration


Classifications

    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C 11/04: Interpretation of pictures
    • G01S 17/89: Lidar systems specially adapted for mapping or imaging
    • G06T 15/00: 3D [three-dimensional] image rendering
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/194: Segmentation or edge detection involving foreground-background segmentation
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V 20/10: Terrestrial scenes
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/10044: Radar image
    • G06T 2207/20081: Training; learning
    • G06T 2207/30181: Earth observation
    • Y02A 90/30: Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a multi-source perception method and system for unmanned surface vehicles. The method comprises: acquiring the environmental information of the unmanned surface vehicle; annotating previously collected water-surface images, training a Deeplab model and a Faster RCNN model on the annotated dataset, and saving the network model parameters; segmenting each real-time input water-surface image into background, land, and water surface with the Deeplab model, and extracting the waterline from the outer contour of the water region; extracting prediction boxes for water-surface obstacles with the Faster RCNN network model, computing the overlap ratio between each prediction box and the water region output by the semantic segmentation network, and rejecting spurious obstacle detections; calibrating the camera to obtain its intrinsic and extrinsic parameters, jointly calibrating the 3D lidar and the camera, and deriving the coordinate transformation between radar and camera from the calibration results; projecting the 3D point cloud acquired by the lidar onto the camera image according to this transformation to add depth information to the image, and finally obtaining the world coordinates of obstacles and the waterline through the camera-to-world coordinate conversion.

Description

Multi-source perception method and system for unmanned surface vehicles
Technical field
The present invention relates to the technical field of intelligent unmanned surface vehicles, and in particular to a multi-source perception method and system for unmanned surface vehicles.
Background art
Unmanned surface vehicles are novel platforms with highly nonlinear dynamic characteristics that can execute tasks in all kinds of complex, unknown aquatic environments without human intervention. Being compact, intelligent, and autonomous, they are often used for tasks with a high risk factor or a harsh operating environment, and are in wide demand in fields such as military operations, maritime patrol, and island and reef resupply. Since the realization of intelligence in an unmanned surface vehicle depends first of all on the quality of its environmental perception, a good environment perception method and system can provide important prior environmental information for the vehicle's autonomous decision-making, maintaining the safety, accuracy, and reliability of its operation. Therefore, studying a multi-source perception method and system for unmanned surface vehicles is of great significance for advancing their autonomy and intelligence and enabling their effective operation.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a multi-source perception method and system for unmanned surface vehicles. Addressing the multi-source perception problem, the invention builds a water-surface image dataset and trains Deeplab and Faster RCNN network models on it, thereby recognizing the waterline and water-surface obstacles. Based on the result of the joint calibration between the camera and the 3D lidar, the point cloud acquired by the lidar is projected onto the camera image to add depth information to the image; the world coordinates of obstacles and the waterline are then obtained through the camera-to-world coordinate conversion, and this information is delivered in real time to the application modules through the topic communication mechanism of ROS (Robot Operating System), providing prior environmental information for the vehicle's subsequent decisions. The purpose of the invention is achieved by the following technical solution:
A multi-source perception method for unmanned surface vehicles, comprising the following steps:
S1. Acquire the sensing parameters of the vehicle's multi-source perception system in real time, obtaining the visual information and 3D point cloud information of the aquatic environment;
S2. Manually annotate previously collected water-surface images, train the Deeplab model and the Faster RCNN model on the manually annotated dataset, and save the network model parameters;
S3. Segment each real-time input water-surface image into three classes (background, land, and water surface) with the Deeplab model, and extract the waterline from the outer contour of the water region;
S4. Extract prediction boxes for water-surface obstacles with the Faster RCNN network model, compute the overlap ratio between each ship or floating-object prediction box and the water region output by the semantic segmentation network, and reject spurious obstacle detections;
S5. Calibrate the camera to obtain its intrinsic and extrinsic parameters, then jointly calibrate the 3D lidar and the camera, deriving the coordinate transformation between radar and camera from the calibration results;
S6. Project the 3D point cloud acquired by the lidar onto the camera image according to the coordinate transformation, add depth information to the image, then obtain the world coordinates of obstacles and the waterline through the camera-to-world coordinate conversion.
Further, step S1 specifically comprises: acquiring the visual information of water-surface images in real time with a camera, and scanning the fan-shaped region ahead of the vehicle in real time with a 3D lidar to obtain the 3D point cloud information of the aquatic environment.
Further, step S2 specifically comprises: annotating previously collected water-surface images at the pixel level, from top to bottom, into the three classes of background, land, and water surface for training the Deeplab network, and annotating obstacle candidate boxes in the images as the two classes of ships and floating objects for training the Faster RCNN network, thereby constructing the water-surface image dataset. The training images are fed into the Deeplab and Faster RCNN networks respectively, which are iterated to convergence, and the weight distributions and biases of the network models are saved.
Further, step S3 specifically comprises: feeding each real-time water-surface image into the trained Deeplab network. Referring to Fig. 2, the input image passes through multiple convolutional and pooling layers to produce a feature map; to obtain an output of the same size as the input image, the feature map is upsampled by deconvolution, and finally a fully connected conditional random field (CRF) improves the model's ability to capture detail, ensuring pixel-level segmentation of land and water. From the resulting semantic segmentation, the pixel coordinates of the waterline are obtained by image processing, and the set of waterline pixel coordinates is passed to the information fusion node.
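The patent gives no code for extracting the waterline pixel coordinates from the segmentation result. A minimal sketch, assuming the segmentation output is a numpy label mask (the labels background=0, land=1, water=2 are an assumption for the example, not stated in the patent): the waterline can be taken as the topmost water pixel in each image column.

```python
import numpy as np

def extract_waterline(seg, water_label=2):
    """For each image column, return the row index of the topmost
    water pixel; -1 where the column contains no water."""
    w = seg.shape[1]
    waterline = np.full(w, -1, dtype=int)
    for col in range(w):
        rows = np.where(seg[:, col] == water_label)[0]
        if rows.size:
            waterline[col] = rows.min()
    return waterline

# toy 4x4 label mask: 0 = background, 1 = land, 2 = water
seg = np.array([[0, 0, 0, 0],
                [1, 1, 0, 0],
                [2, 1, 2, 1],
                [2, 2, 2, 1]])
print(extract_waterline(seg))  # [ 2  3  2 -1]
```

In practice the patent's image processing could follow the region contour instead of scanning columns; the column scan is only the simplest way to turn a water mask into a boundary coordinate set.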
Further, step S4 specifically comprises: feeding each real-time water-surface image into the trained Faster RCNN network, which passes it through the shared convolutional layers, the RPN, the ROI pooling layer, and the fully connected layers by forward propagation, finally outputting the object detection results for the image; obstacles present in the input image are classified into the two classes of ships and floating objects. The overlap ratio between each prediction box output by the Faster RCNN network and the water region output by the semantic segmentation network is computed: for boxes classified as floating objects the threshold is set to 0.8, and results below it are rejected; for boxes classified as ships the threshold is set to 0.1, and results below it are rejected.
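The overlap test of step S4 can be sketched as follows, assuming the water region is a binary numpy mask and boxes are pixel rectangles (the function names and mask representation are illustrative assumptions, not from the patent). A floating-object box is kept only if at least 80% of its area lies on water, a ship box if at least 10% does, matching the thresholds stated above:

```python
import numpy as np

THRESH = {"floating": 0.8, "ship": 0.1}

def water_overlap_ratio(box, water_mask):
    """Fraction of the box's pixels that lie on water.
    box = (x1, y1, x2, y2) with exclusive upper bounds."""
    x1, y1, x2, y2 = box
    patch = water_mask[y1:y2, x1:x2]
    return patch.mean() if patch.size else 0.0

def filter_detections(dets, water_mask):
    """dets: list of (box, cls). Keep a detection only if its
    water-overlap ratio reaches the class threshold."""
    return [(b, c) for b, c in dets
            if water_overlap_ratio(b, water_mask) >= THRESH[c]]

water = np.zeros((10, 10), dtype=float)
water[5:, :] = 1.0                      # bottom half of the image is water
dets = [((0, 6, 4, 10), "floating"),    # fully on water: kept
        ((0, 0, 4, 4),  "floating"),    # fully on land: rejected
        ((0, 4, 4, 8),  "ship")]        # 75% on water: kept
print(filter_detections(dets, water))
```

The looser ship threshold reflects the reasoning in the text: a moored ship legitimately overlaps land in the image, while a "floating object" detected mostly over land is almost certainly spurious.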
Further, step S5 specifically comprises using the checkerboard calibration method: several checkerboard corners are selected at different angles and positions, and the coordinates of these corners in the camera coordinate system, the world coordinate system, and the radar coordinate system are determined. The corresponding coordinates are substituted into the mathematical models of camera calibration and joint calibration and solved simultaneously, yielding the three rotation parameters (the rotation matrix), the three translation parameters (the translation vector), and the scale factor of the camera-radar coordinate transfer equation, together with the rotation matrix and translation vector of the camera-world transfer equation, thereby determining the concrete form of the coordinate transfer equation.
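The patent does not name a specific solver for the simultaneous equations of step S5. As one standard way to recover the rotation and translation of a rigid transfer equation from corresponding points (here a sketch, not the patent's actual calibration model), the Kabsch/Procrustes algorithm gives the least-squares solution from matched checkerboard-corner coordinates in two frames:

```python
import numpy as np

def solve_rigid_transform(src, dst):
    """Least-squares R, t with dst ~= R @ src + t (Kabsch algorithm).
    src, dst: (N, 3) corresponding points, e.g. checkerboard corners
    measured in the lidar frame and in the camera frame."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(8, 3))
a = np.pi / 6                                # ground truth: 30 deg yaw + shift
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = solve_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noisy real measurements the same solver returns the best-fit pose rather than an exact one; the patent's formulation additionally estimates a scale factor, which this sketch omits.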
Further, step S6 specifically comprises: in the information fusion node, converting the point cloud coordinates acquired by the lidar into camera coordinates according to the transfer equation between the lidar and camera coordinate systems, then projecting the points onto the imaging plane through the relation between the camera and pixel coordinate systems, so that the image carries depth information. Finally, the pixel coordinates of the prediction boxes output by Faster RCNN and of the waterline output by the Deeplab model are combined with the depth information into 3D coordinates, which are converted into the corresponding world coordinates using the camera extrinsics obtained from calibration, thereby determining the positions of obstacles and the waterline in the world coordinate system.
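The projection in step S6 can be illustrated with a minimal numpy sketch (the intrinsic values and the identity extrinsics below are invented for the example): each lidar point is moved into the camera frame with the jointly calibrated R and t, then projected through the intrinsic matrix K to a pixel position plus a depth value.

```python
import numpy as np

def project_points(pts_lidar, R, t, K):
    """Project lidar points into the image: pixel coords plus depth.
    pts_lidar: (N, 3); R, t: lidar-to-camera extrinsics; K: 3x3 intrinsics."""
    pts_cam = pts_lidar @ R.T + t          # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0           # keep only points ahead of the camera
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                    # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, pts_cam[:, 2]               # pixel coordinates, depths

K = np.array([[500.0, 0, 320],             # toy pinhole intrinsics
              [0, 500.0, 240],
              [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)              # assume aligned frames for the demo
pts = np.array([[0.0, 0.0, 5.0],           # on the optical axis, 5 m away
                [1.0, 0.0, 5.0]])          # 1 m to the right, same range
uv, depth = project_points(pts, R, t, K)
print(uv)     # [[320. 240.] [420. 240.]]
print(depth)  # [5. 5.]
```

Pixels hit by a projected point then carry that point's depth, which is what lets the fusion node turn a 2D prediction box or waterline pixel into a 3D coordinate.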
A ROS-based multi-source perception system for unmanned surface vehicles, comprising a perception part and an application part:
The perception part establishes a point cloud processing node, an image processing node, and an information fusion node through the node mechanism of ROS. The image processing node contains the two convolutional network models, Faster RCNN and Deeplab; processing images through these networks yields the pixel coordinates of the obstacle prediction boxes and of the waterline, which are passed to the information fusion node for further processing through the topic subscription mechanism of ROS. The point cloud processing node converts the point cloud into the standard coordinate format of the lidar coordinate system and passes the point coordinates to the information fusion node through topic communication. In the information fusion node, the point coordinates are converted into camera coordinates according to the transfer equation between the lidar and camera coordinate systems, then projected onto the imaging plane through the relation between the camera and pixel coordinate systems, so that the image carries depth information and its 3D coordinates are obtained; finally, the image's 3D coordinates are converted into the corresponding world coordinates using the camera extrinsics, thereby determining the positions of obstacles and the waterline in the world coordinate system.
The application part comprises functional nodes of different types in ROS, including an obstacle avoidance node, a tracking node, and a path planning node. The obstacle avoidance node obtains the world coordinate information of obstacles and the waterline by subscribing to the topic published by the information fusion node, and builds a vector field histogram with the VFH+ obstacle avoidance algorithm, from which a currently feasible avoidance direction can be determined. The tracking node obtains the video sequence and the pixel coordinates of the obstacle prediction boxes by subscribing to the image topic and the object detection topic; after the tracking target is selected with a manual bounding box, a correlation filter (CF) tracking algorithm is activated, whose feature matching and filtering output a bounding box giving the target's coordinates in each frame in real time, thereby realizing tracking. The path planning node subscribes to the semantic segmentation topic and the information fusion topic, obtains the pixel coordinates of the water surface and obstacles from the segmented image, then obtains their rough world coordinates from the information fusion topic; from this information a local map can be built, on which an RRT search algorithm is run to obtain a feasible path through the current local map.
Compared with the prior art, the invention has the following advantages and beneficial effects:
The present invention extracts the waterline with the Deeplab network model; compared with traditional horizon detection methods it is less affected by changes in the aquatic environment and generalizes better, suiting both horizon detection where the boundary has a clear linear feature and coastline detection where the shore geometry is complex. The Faster RCNN network model performs coarse extraction of obstacle candidate boxes, which is fused in real time with the 3D point cloud acquired by the lidar, achieving a more accurate 3D description of obstacles under redundant sensing. The distributed communication mechanism of ROS guarantees that fused perception information is obtained and processed by the perception system in real time as soon as it is updated. The joint calibration between the camera and the 3D lidar establishes the correspondence between visual recognition results and world coordinates, providing prior information for the vehicle's subsequent intelligent decision-making. The proposed multi-source perception method and system give the unmanned surface vehicle a complete description of the key information of the aquatic environment and are widely applicable to the intelligent navigation and control of all kinds of unmanned surface vehicles.
Brief description of the drawings
Fig. 1 is a flowchart of the multi-source perception method for unmanned surface vehicles;
Fig. 2 is the VGG16-based Deeplab network architecture in the embodiment;
Fig. 3 is the AlexNet-based Faster RCNN network architecture in the embodiment;
Fig. 4 is a schematic diagram of the ROS-based multi-source perception system for unmanned surface vehicles.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but embodiments of the present invention are not limited to these.
Embodiment:
Referring to Fig. 1, a multi-source perception method for unmanned surface vehicles comprises the following steps:
Step 10. Acquire the sensing parameters of the vehicle's multi-source perception system in real time, obtaining the visual information of water-surface images and the 3D point cloud information of the aquatic environment;
Step 20. Manually annotate previously collected water-surface images, train the Deeplab model and the Faster RCNN model on the annotated dataset, and save the network model parameters;
Step 30. Segment each real-time input water-surface image into three classes (background, land, and water surface) with the Deeplab model, and extract the waterline from the outer contour of the water region;
Step 40. Extract prediction boxes for water-surface obstacles with the Faster RCNN network model, compute the overlap ratio between each ship or floating-object prediction box and the water region output by the semantic segmentation network, and reject spurious obstacle detections;
Step 50. Calibrate the camera to obtain its intrinsic and extrinsic parameters, then jointly calibrate the 3D lidar and the camera, deriving the coordinate transformation between radar and camera from the calibration results;
Step 60. Project the 3D point cloud acquired by the lidar onto the camera image according to the coordinate transformation, add depth information to the image, then obtain the world coordinates of obstacles and the waterline through the camera-to-world coordinate conversion.
Step 20 above specifically comprises annotating previously collected water-surface images at the pixel level, from top to bottom, into the three classes of background, land, and water surface for training the Deeplab network, and annotating obstacle candidate boxes in the images as the two classes of ships and floating objects for training the Faster RCNN network, thereby constructing the water-surface image dataset. The training images are fed into the Deeplab and Faster RCNN networks respectively, which are iterated to convergence, and the weight distributions and biases of the network models are saved.
Step 30 above specifically comprises feeding each real-time water-surface image into the trained Deeplab network. Referring to Fig. 2, the input image passes through convolutional layers that extract feature maps, which are then compressed by pooling layers to retain the main features; after feature extraction and compression through multiple convolutional and pooling layers, a deep feature map is obtained. Deeplab replaces the fourth and fifth pooling layers with pooling that performs no downsampling, ensuring the size of the feature map remains unchanged, and at the same time replaces the convolutional layers following these two pooling layers with dilated (atrous) convolutions, so that the neurons' receptive fields after pooling do not change. Finally, the feature map is upsampled to the original input size by deconvolution, and a fully connected conditional random field (CRF) improves the model's ability to capture detail, ensuring pixel-level segmentation of land and water. From the resulting semantic segmentation, the pixel coordinates of the waterline are obtained by image processing, and the set of waterline pixel coordinates is passed to the information fusion node.
The Deeplab network model is built on VGG16: first the downsampling of VGG16's last two pooling layers is removed, then the convolution kernels following these two pooling layers are changed to dilated convolutions, and finally VGG16's three fully connected layers are replaced with convolutional layers, realizing Deeplab's fully convolutional structure. To obtain an output of the same size as the original image, deconvolution is applied to the feature map produced by the pooling and convolution processing, yielding a segmented image of the same size as the input; a fully connected conditional random field then refines the details of the land-water segmentation, producing a finely segmented image with sharp waterline edges.
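The key property claimed here, that dilated (atrous) convolution spreads the kernel taps apart so the receptive field grows without any downsampling, can be shown with a tiny 1-D sketch (purely illustrative; Deeplab applies 2-D dilated convolutions inside VGG16):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Atrous' convolution: kernel taps spaced `dilation` apart, valid mode."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)])

x = np.arange(8, dtype=float)              # signal 0..7
k = np.array([1.0, 1.0, 1.0])              # box filter
print(dilated_conv1d(x, k, dilation=1))    # taps i, i+1, i+2: span 3
print(dilated_conv1d(x, k, dilation=2))    # taps i, i+2, i+4: span 5
```

With dilation 2 the same 3-tap kernel covers a span of 5 input samples, which is how Deeplab keeps the receptive field of the layers after the modified pooling unchanged despite removing the downsampling.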
Step 40 above specifically comprises feeding each real-time water-surface image into the trained Faster RCNN network. The Faster RCNN network model is built on the AlexNet convolutional neural network and consists of two main parts, a Fast RCNN network and an RPN. The shared convolutional layers of the Fast RCNN and the RPN are formed by the first five convolutional layers of AlexNet; AlexNet's third pooling layer is changed to an ROI pooling layer, its two fully connected layers are retained, and its final Softmax classifier layer is replaced with a linear regressor for box selection of water-surface obstacles plus a linear regressor and Softmax layer for classifying ships and floating objects. Referring to Fig. 3, the shared convolutional layers first extract the feature map of the input image, which is then fed into the RPN structure. On the feature map output by the shared convolutional layers, a 3x3 convolution kernel slides to generate sliding windows, and 9 anchor boxes are generated at the center point of each sliding window. Through the mapping between the sliding windows and the original feature map, the feature map of each anchor box is obtained from the image; these feature maps pass through fully connected layers by forward propagation to produce feature vectors, which are fed into a Softmax classifier and a linear regressor for object classification and localization respectively. The anchor boxes are pruned, and the high-scoring ones are selected as region proposals. The region proposals output by the RPN and the original feature map are fed together into the ROI pooling layer, which extracts the feature map at each proposal's corresponding position; this passes through fully connected layers by forward propagation to produce a feature vector, and finally the Softmax classifier and linear regressor produce the final classification score and the regressed target prediction box, so that obstacles present in the input image are classified into the two classes of ships and floating objects. The overlap ratio between each prediction box output by the Faster RCNN network and the water region output by the semantic segmentation network is computed: for boxes classified as floating objects the threshold is set to 0.8, and results below it are rejected; for boxes classified as ships the threshold is set to 0.1, and results below it are rejected.
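The 9 anchor boxes per sliding-window center can be sketched as 3 scales times 3 aspect ratios; the base size, scale, and ratio values below follow common Faster RCNN defaults and are assumptions, as the patent does not list them:

```python
import numpy as np

def make_anchors(cx, cy, base=16, scales=(8, 16, 32),
                 ratios=(0.5, 1.0, 2.0)):
    """9 anchor boxes (3 scales x 3 aspect ratios) centred on one
    sliding-window position, as (x1, y1, x2, y2)."""
    boxes = []
    for s in scales:
        area = (base * s) ** 2             # target area at this scale
        for r in ratios:
            w = np.sqrt(area / r)          # width/height chosen so that
            h = w * r                      # h/w = r and w*h = area
            boxes.append((cx - w / 2, cy - h / 2,
                          cx + w / 2, cy + h / 2))
    return np.array(boxes)

anchors = make_anchors(100, 100)
print(anchors.shape)                       # (9, 4)
areas = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
print(np.allclose(areas[:3], (16 * 8) ** 2))   # same area at a given scale
```

Each anchor is then scored and regressed by the RPN as described above; only the high-scoring ones survive as region proposals.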
Step 50 above specifically comprises using the checkerboard calibration method: several checkerboard corners are selected at different angles and positions, and the coordinates of these corners in the camera coordinate system, the world coordinate system, and the radar coordinate system are determined. The corresponding coordinates are substituted into the mathematical models of camera calibration and joint calibration and solved simultaneously, yielding the three rotation parameters (i.e. the rotation matrix), the three translation parameters (i.e. the translation vector), and the scale factor of the camera-radar coordinate transfer equation, together with the rotation matrix and translation vector of the camera-world transfer equation, thereby determining the concrete form of the coordinate transfer equation.
Step 60 above specifically comprises: in the information fusion node, converting the point cloud coordinates acquired by the lidar into camera coordinates according to the transfer equation between the lidar and camera coordinate systems, then projecting the points onto the imaging plane through the relation between the camera and pixel coordinate systems, so that the image carries depth information. Finally, the pixel coordinates of the prediction boxes output by Faster RCNN and of the waterline output by the Deeplab model are combined with the depth information into 3D coordinates, which are converted into the corresponding world coordinates using the camera extrinsics obtained from calibration, thereby determining the positions of obstacles and the waterline in the world coordinate system.
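The final pixel-to-world conversion of step 60 can be illustrated by back-projecting one pixel whose depth is known from the fused point cloud; the intrinsics and extrinsics below are invented toy values, and the extrinsic convention p_cam = R @ p_world + t is an assumption of the sketch:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R_cw, t_cw):
    """Back-project a pixel with known depth to world coordinates.
    K: camera intrinsics; R_cw, t_cw: extrinsics with
    p_cam = R_cw @ p_world + t_cw."""
    p_cam = depth * np.linalg.solve(K, np.array([u, v, 1.0]))
    return np.linalg.solve(R_cw, p_cam - t_cw)   # invert the extrinsics

K = np.array([[500.0, 0, 320],
              [0, 500.0, 240],
              [0, 0, 1]])
R_cw = np.eye(3)
t_cw = np.array([0.0, 0.0, -1.0])   # camera sits 1 m behind the world origin
p = pixel_to_world(320, 240, 5.0, K, R_cw, t_cw)
print(p)  # [0. 0. 6.]
```

This is the inverse of the projection used when fusing the point cloud: project lidar points to attach depths to pixels, then back-project detection-box and waterline pixels to world coordinates.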
Referring to Fig. 4, a ROS-based multi-source perception system for unmanned surface vehicles: its ROS message processing module comprises a perception part and an application part.
The perception part establishes three nodes through the Node mechanism of ROS: a point cloud information processing node, an image information processing node and an information fusion node. The image information processing node contains two convolutional network models, Faster RCNN and the Deeplab model; the image is processed by the convolutional neural networks to obtain the pixel coordinate information of the obstacle prediction boxes and the water surface boundary line, and this information is transferred to the information fusion node through the topic subscription mechanism of ROS to await further processing. The point cloud information processing node converts the point cloud information into the standard coordinate format under the laser radar coordinate system, and transfers the point cloud coordinate information to the information fusion node through the topic communication mechanism. In the information fusion node, the point cloud coordinates are converted to camera coordinates according to the transfer equation between the laser radar coordinate system and the camera coordinate system; then, through the conversion relationship between the camera coordinate system and the pixel coordinate system, the point cloud is projected onto the imaging plane so that the image has depth information, thereby obtaining the three-dimensional coordinates of the image. Finally, the image three-dimensional coordinates are converted into the corresponding world coordinates according to the camera extrinsic parameters, thereby determining the specific positions of the obstacles and the water surface boundary line in the world coordinate system.
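The data flow between the three perception nodes can be illustrated with a tiny in-process stand-in for the topic publish/subscribe mechanism. This is deliberately not the rospy API, just a conceptual sketch of how the image node and the point-cloud node hand their outputs to the fusion node over named topics; the topic names are invented.

```python
class TopicBus:
    """Minimal stand-in for a ROS-style topic bus (illustration only)."""
    def __init__(self):
        self.subs = {}
    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)
    def publish(self, topic, msg):
        for cb in self.subs.get(topic, []):
            cb(msg)

bus = TopicBus()
fused = []
state = {}

# Information fusion node: waits until both the detection result and the
# point cloud have arrived, then pairs them (the real node would project the
# points and attach depth, as described above).
def on_boxes(msg):
    state["boxes"] = msg
    try_fuse()
def on_cloud(msg):
    state["cloud"] = msg
    try_fuse()
def try_fuse():
    if "boxes" in state and "cloud" in state:
        fused.append((state.pop("boxes"), state.pop("cloud")))

bus.subscribe("/detection/boxes", on_boxes)   # hypothetical topic names
bus.subscribe("/lidar/points", on_cloud)

# The image node and the point-cloud node publish their outputs.
bus.publish("/detection/boxes", [("ship", (40, 45, 80, 70))])
bus.publish("/lidar/points", [(1.0, 0.0, 10.0)])
print(len(fused))   # one fused (boxes, cloud) pair
```

In the actual system the same pattern appears as ROS publishers and subscribers, with the node manager routing messages between the separately running processes.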
The application part includes different types of ROS functional nodes, including an obstacle avoidance node, a tracking node and a path planning node. The obstacle avoidance node obtains the world coordinate information of the obstacles and the water surface boundary line by subscribing to the topic published by the information fusion node, and builds a vector field histogram through the VFH+ obstacle avoidance algorithm; the currently feasible avoidance direction can be determined from this histogram. The tracking node obtains the video sequence and the pixel coordinate information of the obstacle prediction boxes in the image by subscribing to the image topic and the target detection topic; after the tracking target is determined by manual box selection, the CF (correlation filter) target tracking algorithm is activated, and after the feature matching and filtering processing of the tracking algorithm, the coordinate information of the selected target in each frame image can be output in real time, thereby realizing the tracking function. The path planning node subscribes to the semantic segmentation topic and the information fusion topic, obtains the water surface and obstacle pixel coordinates from the segmented image, and then obtains their rough world coordinate information from the information fusion topic; from this information a local map can be built, and the RRT search algorithm is used on this map to obtain a feasible path on the current local map.
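The core idea of the avoidance node can be shown with a stripped-down polar histogram: obstacles are binned by bearing, closer obstacles weigh more, and the least-occupied sector gives a feasible heading. Real VFH+ additionally accounts for vehicle width, masking and threshold hysteresis; this sketch keeps only the histogram step, and all values are illustrative.

```python
import math

SECTORS = 36  # 10-degree bins around the vehicle

def feasible_heading(obstacles):
    """obstacles: (x, y) points in the vehicle frame. Returns a heading in degrees."""
    hist = [0.0] * SECTORS
    for x, y in obstacles:
        bearing = math.degrees(math.atan2(y, x)) % 360.0
        dist = math.hypot(x, y)
        hist[int(bearing // 10)] += 1.0 / max(dist, 0.1)  # closer = heavier
    best = min(range(SECTORS), key=lambda i: hist[i])     # emptiest sector
    return best * 10 + 5                                  # sector centre

# Obstacles dead ahead (0 deg), slightly left (~14 deg) and at 90 deg:
obs = [(5.0, 0.0), (4.0, 1.0), (0.0, 6.0)]
print(feasible_heading(obs))
```

The returned heading is the centre of the first empty sector, so the vehicle steers just past the occupied bearings; the path planning node would then refine this direction into a full route with its RRT search over the local map.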
The above embodiments are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination or simplification made without departing from the spirit and principles of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.

Claims (9)

1. A water surface unmanned equipment multi-source perception method, characterized by comprising the following steps:
S1. Collecting the sensing parameters of the water surface unmanned equipment multi-source perception system in real time to obtain the visual information of water surface images and the three-dimensional point cloud information of the water surface environment;
S2. Manually calibrating water surface images collected in advance, and using the calibrated data set to train the Deeplab model and the Faster RCNN model and to save the network model parameters;
S3. Segmenting the water surface image input in real time into three classes of background, land and water surface through the Deeplab model, and extracting the water surface boundary line according to the circumference of the water surface region;
S4. Extracting the prediction boxes of water surface obstacles through the Faster RCNN network model, separately calculating the intersection ratio between the ship and floating-object prediction boxes and the water surface region output by the image semantic segmentation network, and rejecting meaningless obstacle detection results;
S5. Performing camera calibration to obtain the camera intrinsic and extrinsic parameters, then performing joint calibration of the three-dimensional laser radar and the camera, and obtaining the coordinate conversion relationship between the radar and the camera in combination with the calibration results;
S6. Projecting the three-dimensional point cloud data obtained by the laser radar onto the image obtained by the camera according to the coordinate conversion relationship, adding depth information to the image, and finally obtaining the world coordinates of the obstacles and the water surface boundary line through the camera coordinate system to world coordinate system conversion.
2. The water surface unmanned equipment multi-source perception method according to claim 1, characterized in that the calibration in step S2 is specifically: the water surface image is calibrated at pixel level from top to bottom into the three classes of background, land and water surface, for Deeplab network model training; the obstacle candidate boxes in the water surface image are calibrated into the two classes of ship and floating object, for Faster RCNN network model training.
3. The water surface unmanned equipment multi-source perception method according to claim 1, characterized in that the Deeplab network model in step S3 is constructed based on VGG16: first the down-sampling of the last two pooling layers of VGG16 is removed, then the convolution kernels following these two pooling layers are changed to atrous (dilated) convolutions, and finally the three fully connected layers of VGG16 are replaced with convolutional layers, realizing the fully convolutional structure of the Deeplab model; in order to obtain an output of the same size as the original image, deconvolution is applied to the feature map obtained after the pooling and convolution processing, so as to obtain a segmented image of the same size as the input image; finally, a fully connected conditional random field is used to perform detail optimization on the land-water segmented image, so as to obtain a segmented image with a finely delineated water surface boundary line.
4. The water surface unmanned equipment multi-source perception method according to claim 1, characterized in that the Faster RCNN network model in step S4 is constructed based on the AlexNet convolutional neural network and is specifically composed of a Fast RCNN network and an RPN network, wherein the shared convolutional layers of the Fast RCNN network and the RPN network are composed of the first five convolutional layers of AlexNet; the third pooling layer of AlexNet is modified into a ROI pooling layer, the two fully connected layers of AlexNet are retained, and the last Softmax classifier layer is modified into a linear regressor for box selection of water surface obstacles plus a linear regressor and Softmax classifier layer for ship and floating-object classification; and in the RPN network, a convolutional layer with a 3*3 convolution kernel is added to extract sliding windows, followed by a fully connected layer to extract feature vectors, and finally a Softmax classifier layer for regional evaluation of the input feature vectors and a box regression layer.
5. The water surface unmanned equipment multi-source perception method according to claim 1, characterized in that the rejection process of meaningless detection results in step S4 is specifically: the ratio of the intersection of the obstacle prediction box and the water surface region to the whole rectangular box is used as an index to judge the reasonableness of the detection result; for prediction boxes classified as floating objects, the threshold is set to 0.8, and results below this threshold are rejected; for prediction boxes classified as ships, the threshold is set to 0.1, and results below this threshold are rejected.
6. The water surface unmanned equipment multi-source perception method according to claim 1, characterized in that step S6 is specifically: according to the transfer equation between the laser radar coordinate system and the camera coordinate system, the point cloud coordinates obtained by the laser radar are converted to camera coordinates; then, through the conversion relationship between the camera coordinate system and the pixel coordinate system, the point cloud is projected onto the imaging plane so that the image has depth information; finally, the pixel coordinate information of the prediction boxes output by Faster RCNN and of the water surface boundary line output by the Deeplab model is combined with the depth information to generate three-dimensional coordinates, which are converted to the corresponding world coordinates according to the camera extrinsic parameters obtained from camera calibration, thereby determining the specific positions of the obstacles and the water surface boundary line in the world coordinate system.
7. A water surface unmanned equipment multi-source perception system, characterized in that the perception system takes a ROS processing module as its core and covers an integrated module with the water surface unmanned equipment information transmission, information fusion and information output functions, the ROS information processing module comprising a perception part and an application part.
8. The water surface unmanned equipment multi-source perception system according to claim 7, characterized in that the perception part establishes three nodes through the Node mechanism of ROS: a point cloud information processing node, an image information processing node and an information fusion node;
the point cloud information processing node obtains point cloud information through a network interface, converts the point cloud information into the standard coordinate format under the laser radar coordinate system, and finally transfers the point cloud coordinate information to the information fusion node through the topic communication mechanism;
the image information processing node reads image information through a serial port; the node combines the two convolutional network models Faster RCNN and Deeplab; the image is processed by the convolutional neural networks to obtain the pixel coordinate information of the obstacle prediction boxes and the water surface boundary line, and this information is transferred to the other nodes through the topic subscription mechanism of ROS to await further processing;
the information fusion node obtains the corresponding point cloud information and image information by subscribing to the point cloud node topic and the image topic, converts the point cloud coordinates to camera coordinates according to the transfer equation between the laser radar coordinate system and the camera coordinate system, then projects the point cloud onto the imaging plane through the conversion relationship between the camera coordinate system and the pixel coordinate system so that the image has depth information, thereby obtaining the three-dimensional coordinates of the image, and finally converts the image three-dimensional coordinates into the corresponding world coordinates according to the camera extrinsic parameters, thereby determining the specific positions of the obstacles and the water surface boundary line in the world coordinate system.
9. The water surface unmanned equipment multi-source perception system according to claim 7, characterized in that the application part covers different types of ROS functional nodes, including an obstacle avoidance node, a tracking node and a path planning node, each node communicating through the distributed communication mechanism of ROS; ROS obtains all node information and topic information of the water surface unmanned system through the node manager, and the subscription and publication mechanism guarantees that fusion information and perception information can be subscribed to by the nodes as soon as it is updated, so that the latest information is obtained and the real-time obstacle avoidance and path planning requirements of the water surface unmanned equipment are met; through the topic communication mechanism of ROS, the fused sensor information obtained by the perception part is uploaded in real time to the corresponding topic and published, an application node subscribing to the topic obtains the fusion information immediately when the message file of the topic is updated by limiting the message queue to 1, and performs the corresponding obstacle avoidance and path planning actions according to this information, guaranteeing that the unmanned equipment perceives environmental changes and makes rapid reaction actions immediately.
CN201910467501.3A 2019-05-31 2019-05-31 Multi-source sensing method and system for unmanned surface equipment Active CN110188696B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910467501.3A CN110188696B (en) 2019-05-31 2019-05-31 Multi-source sensing method and system for unmanned surface equipment
PCT/CN2019/089748 WO2020237693A1 (en) 2019-05-31 2019-06-03 Multi-source sensing method and system for water surface unmanned equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910467501.3A CN110188696B (en) 2019-05-31 2019-05-31 Multi-source sensing method and system for unmanned surface equipment

Publications (2)

Publication Number Publication Date
CN110188696A true CN110188696A (en) 2019-08-30
CN110188696B CN110188696B (en) 2023-04-18

Family

ID=67719245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910467501.3A Active CN110188696B (en) 2019-05-31 2019-05-31 Multi-source sensing method and system for unmanned surface equipment

Country Status (2)

Country Link
CN (1) CN110188696B (en)
WO (1) WO2020237693A1 (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705623A (en) * 2019-09-26 2020-01-17 哈尔滨工程大学 Sea-sky-line on-line detection method based on full convolution neural network
CN110763306A (en) * 2019-09-30 2020-02-07 中国科学院西安光学精密机械研究所 Monocular vision-based liquid level measurement system and method
CN111144208A (en) * 2019-11-22 2020-05-12 北京航天控制仪器研究所 Automatic detection and identification method for marine vessel target and target detector
CN111179300A (en) * 2019-12-16 2020-05-19 新奇点企业管理集团有限公司 Method, apparatus, system, device and storage medium for obstacle detection
CN111354045A (en) * 2020-03-02 2020-06-30 清华大学 Visual semantic and position sensing method and system based on infrared thermal imaging
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111881932A (en) * 2020-06-11 2020-11-03 中国人民解放军战略支援部队信息工程大学 FasterRCNN target detection algorithm for military aircraft
CN111899301A (en) * 2020-06-02 2020-11-06 广州中国科学院先进技术研究所 Workpiece 6D pose estimation method based on deep learning
CN112101222A (en) * 2020-09-16 2020-12-18 中国海洋大学 Sea surface three-dimensional target detection method based on unmanned ship multi-mode sensor
CN112529072A (en) * 2020-12-07 2021-03-19 中国船舶重工集团公司七五0试验场 Underwater buried object identification and positioning method based on sonar image processing
CN112541886A (en) * 2020-11-27 2021-03-23 北京佳力诚义科技有限公司 Laser radar and camera fused artificial intelligence ore identification method and device
CN112567383A (en) * 2020-03-06 2021-03-26 深圳市大疆创新科技有限公司 Object detection method, movable platform, device and storage medium
CN112652064A (en) * 2020-12-07 2021-04-13 中国自然资源航空物探遥感中心 Sea-land integrated three-dimensional model construction method and device, storage medium and electronic equipment
CN112666534A (en) * 2020-12-31 2021-04-16 武汉理工大学 Unmanned ship route planning method and device based on laser radar recognition algorithm
CN112733753A (en) * 2021-01-14 2021-04-30 江苏恒澄交科信息科技股份有限公司 Bridge orientation identification method and system combining convolutional neural network and data fusion
CN112927237A (en) * 2021-03-10 2021-06-08 太原理工大学 Honeycomb lung focus segmentation method based on improved SCB-Unet network
CN113033572A (en) * 2021-04-23 2021-06-25 上海海事大学 Obstacle segmentation network based on USV and generation method thereof
CN113159042A (en) * 2021-03-30 2021-07-23 苏州市卫航智能技术有限公司 Laser vision fusion unmanned ship bridge opening passing method and system
CN113362395A (en) * 2021-06-15 2021-09-07 上海追势科技有限公司 Sensor fusion-based environment sensing method
CN113485375A (en) * 2021-08-13 2021-10-08 苏州大学 Indoor environment robot exploration method based on heuristic bias sampling
CN113936198A (en) * 2021-11-22 2022-01-14 桂林电子科技大学 Low-beam laser radar and camera fusion method, storage medium and device
CN114076937A (en) * 2020-08-20 2022-02-22 北京万集科技股份有限公司 Laser radar and camera combined calibration method and device, server and computer readable storage medium
CN114332647A (en) * 2021-12-31 2022-04-12 合肥工业大学 River channel boundary detection and tracking method and system for unmanned ship
CN114527468A (en) * 2021-12-28 2022-05-24 湖北三江航天红峰控制有限公司 Special scene personnel detection system based on laser radar
CN114692731A (en) * 2022-03-09 2022-07-01 华南理工大学 Environment perception fusion method and system based on monocular vision and laser ranging array
CN114863258A (en) * 2022-07-06 2022-08-05 四川迪晟新达类脑智能技术有限公司 Method for detecting small target based on visual angle conversion in sea-sky-line scene
CN115015911A (en) * 2022-08-03 2022-09-06 深圳安德空间技术有限公司 Method and system for manufacturing and using navigation map based on radar image
CN116310999A (en) * 2023-05-05 2023-06-23 贵州中水能源股份有限公司 Method for detecting large floaters in reservoir area of hydroelectric power station

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258590B (en) * 2020-12-08 2021-04-27 杭州迦智科技有限公司 Laser-based depth camera external parameter calibration method, device and storage medium thereof
CN114764906B (en) * 2021-01-13 2024-09-06 长沙中车智驭新能源科技有限公司 Multi-sensor post-fusion method for automatic driving, electronic equipment and vehicle
CN112819737B (en) * 2021-01-13 2023-04-07 西北大学 Remote sensing image fusion method of multi-scale attention depth convolution network based on 3D convolution
CN112861653B (en) * 2021-01-20 2024-01-23 上海西井科技股份有限公司 Method, system, equipment and storage medium for detecting fused image and point cloud information
CN112801194B (en) * 2021-02-03 2023-08-25 大连海事大学 Marine radar rainfall analysis method based on improved AlexNet
CN113075683B (en) * 2021-03-05 2022-08-23 上海交通大学 Environment three-dimensional reconstruction method, device and system
CN113052066B (en) * 2021-03-24 2022-09-02 中国科学技术大学 Multi-mode fusion method based on multi-view and image segmentation in three-dimensional target detection
CN113093746B (en) * 2021-03-31 2024-01-23 上海三一重机股份有限公司 Working machine environment sensing method, device and system and working machine
CN113111751B (en) * 2021-04-01 2024-06-04 西北工业大学 Three-dimensional target detection method capable of adaptively fusing visible light and point cloud data
CN113093254A (en) * 2021-04-12 2021-07-09 南京速度软件技术有限公司 Multi-sensor fusion based vehicle positioning method in viaduct with map features
CN113160316B (en) * 2021-04-25 2023-01-06 华南理工大学 Method and system for extracting fan-shaped convolution characteristics of non-rigid three-dimensional shape
CN113177593B (en) * 2021-04-29 2023-10-27 上海海事大学 Fusion method of radar point cloud and image data in water traffic environment
CN113281723B (en) * 2021-05-07 2022-07-22 北京航空航天大学 AR tag-based calibration method for structural parameters between 3D laser radar and camera
CN113160217B (en) * 2021-05-12 2024-08-20 北京京东乾石科技有限公司 Method, device, equipment and storage medium for detecting circuit foreign matters
CN113686314B (en) * 2021-07-28 2024-02-27 武汉科技大学 Monocular water surface target segmentation and monocular distance measurement method for shipborne camera
CN113696178B (en) * 2021-07-29 2023-04-07 大箴(杭州)科技有限公司 Control method and system, medium and equipment for intelligent robot grabbing
CN113587933B (en) * 2021-07-29 2024-02-02 山东山速机器人科技有限公司 Indoor mobile robot positioning method based on branch-and-bound algorithm
CN113532424B (en) * 2021-08-10 2024-02-20 广东师大维智信息科技有限公司 Integrated equipment for acquiring multidimensional information and cooperative measurement method
CN113850304B (en) * 2021-09-07 2024-06-18 辽宁科技大学 High-accuracy point cloud data classification segmentation improvement method
CN113808219B (en) * 2021-09-17 2024-05-14 西安电子科技大学 Deep learning-based radar auxiliary camera calibration method
CN113970753B (en) * 2021-09-30 2024-04-30 南京理工大学 Unmanned aerial vehicle positioning control method and system based on laser radar and vision detection
CN113984037B (en) * 2021-09-30 2023-09-12 电子科技大学长三角研究院(湖州) Semantic map construction method based on target candidate frame in any direction
CN114037972B (en) * 2021-10-08 2024-08-13 岚图汽车科技有限公司 Target detection method, device, equipment and readable storage medium
CN114067353B (en) * 2021-10-12 2024-04-02 北京控制与电子技术研究所 Method for realizing multi-source data fusion by adopting multifunctional reinforcement processor
CN114140675B (en) * 2021-10-29 2024-08-20 广西民族大学 Sugarcane seed screening system and method based on deep learning
CN113989350B (en) * 2021-10-29 2024-04-02 大连海事大学 Unmanned ship autonomous exploration and unknown environment three-dimensional reconstruction monitoring system
CN114088082B (en) * 2021-11-01 2024-04-16 广州小鹏自动驾驶科技有限公司 Map data processing method and device
CN114063619B (en) * 2021-11-15 2023-09-19 浙江大学湖州研究院 Unmanned ship obstacle detection and breaking method based on carpet type scanning mode
CN114089675B (en) * 2021-11-23 2023-06-09 长春工业大学 Machine control method and system based on man-machine distance
CN114359181B (en) * 2021-12-17 2024-01-26 上海应用技术大学 Intelligent traffic target fusion detection method and system based on image and point cloud
CN114359861B (en) * 2021-12-20 2024-07-02 尚元智行(宁波)科技有限公司 Intelligent vehicle obstacle recognition deep learning method based on vision and laser radar
CN114112945A (en) * 2021-12-31 2022-03-01 安徽大学 Novel honeycomb lake cyanobacterial bloom monitoring system
CN114403114B (en) * 2022-01-26 2022-11-08 安徽农业大学 High-ground-clearance plant protection locomotive body posture balance control system and method
CN114648579A (en) * 2022-02-15 2022-06-21 浙江零跑科技股份有限公司 Multi-branch input laser radar target detection method
CN114879180B (en) * 2022-03-22 2024-08-30 大连海事大学 Seamless situation awareness method for real-time fusion of unmanned ship-borne multi-element multi-scale radar
CN114677531B (en) * 2022-03-23 2024-07-09 东南大学 Multi-mode information fusion method for detecting and positioning targets of unmanned surface vehicle
CN114779275B (en) * 2022-03-24 2024-06-11 南京理工大学 Automatic following obstacle avoidance method for mobile robot based on AprilTag and laser radar
CN115100287B (en) * 2022-04-14 2024-09-03 美的集团(上海)有限公司 External parameter calibration method and robot
CN114879685B (en) * 2022-05-25 2023-04-28 合肥工业大学 River shoreline detection and autonomous cruising method for unmanned ship
CN115115595B (en) * 2022-06-30 2023-03-03 东北林业大学 Real-time calibration method of airborne laser radar and infrared camera for forest fire monitoring
CN114862973B (en) * 2022-07-11 2022-09-16 中铁电气化局集团有限公司 Space positioning method, device and equipment based on fixed point location and storage medium
CN115342814B (en) * 2022-07-26 2024-03-19 江苏科技大学 Unmanned ship positioning method based on multi-sensor data fusion
CN115187743B (en) * 2022-07-29 2024-07-05 江西科骏实业有限公司 Subway station internal environment arrangement prediction and white mode acquisition method and system
CN115049825B (en) * 2022-08-16 2022-11-01 北京大学 Water surface cleaning method, device, equipment and computer readable storage medium
CN115097442B (en) * 2022-08-24 2022-11-22 陕西欧卡电子智能科技有限公司 Water surface environment map construction method based on millimeter wave radar
CN115496923B (en) * 2022-09-14 2023-10-20 北京化工大学 Multi-mode fusion target detection method and device based on uncertainty perception
CN115641434B (en) * 2022-12-26 2023-04-14 浙江天铂云科光电股份有限公司 Power equipment positioning method, system, terminal and storage medium
CN116030023A (en) * 2023-02-02 2023-04-28 泉州装备制造研究所 Point cloud detection method and system
CN116524017B (en) * 2023-03-13 2023-09-19 明创慧远科技集团有限公司 Underground detection, identification and positioning system for mine
CN116106899B (en) * 2023-04-14 2023-06-23 青岛杰瑞工控技术有限公司 Port channel small target identification method based on machine learning
CN116338628B (en) * 2023-05-16 2023-09-15 中国地质大学(武汉) Laser radar sounding method and device based on learning architecture and electronic equipment
CN117788302B (en) * 2024-02-26 2024-05-14 山东全维地信科技有限公司 Mapping graphic processing system
CN117975769B (en) * 2024-03-29 2024-06-07 交通运输部水运科学研究所 Intelligent navigation safety management method and system based on multi-source data fusion
CN117994797B (en) * 2024-04-02 2024-06-21 杭州海康威视数字技术股份有限公司 Water gauge reading method and device, storage medium and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101486543B1 (en) * 2013-05-31 2015-01-26 한국과학기술원 Method and apparatus for recognition and segmentation object for 3d object recognition
CN106709568B (en) * 2016-12-16 2019-03-22 北京工业大学 The object detection and semantic segmentation method of RGB-D image based on deep layer convolutional network
CN106843209A (en) * 2017-01-10 2017-06-13 上海华测导航技术股份有限公司 A kind of unmanned ship based on control system of increasing income
CN108171796A (en) * 2017-12-25 2018-06-15 燕山大学 A kind of inspection machine human visual system and control method based on three-dimensional point cloud
CN108469817B (en) * 2018-03-09 2021-04-27 武汉理工大学 Unmanned ship obstacle avoidance control system based on FPGA and information fusion
CN109444911B (en) * 2018-10-18 2023-05-05 哈尔滨工程大学 Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Dan et al.: "The 'Eye' of Unmanned Systems: An Analysis of Computer Vision Technology and Applications", Unmanned Systems Technology *

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705623B (en) * 2019-09-26 2022-08-02 哈尔滨工程大学 Sea-sky-line on-line detection method based on full convolution neural network
CN110705623A (en) * 2019-09-26 2020-01-17 哈尔滨工程大学 Sea-sky-line on-line detection method based on full convolution neural network
CN110763306A (en) * 2019-09-30 2020-02-07 中国科学院西安光学精密机械研究所 Monocular vision-based liquid level measurement system and method
CN111144208A (en) * 2019-11-22 2020-05-12 北京航天控制仪器研究所 Automatic detection and identification method for marine vessel target and target detector
CN111179300A (en) * 2019-12-16 2020-05-19 新奇点企业管理集团有限公司 Method, apparatus, system, device and storage medium for obstacle detection
CN111354045A (en) * 2020-03-02 2020-06-30 清华大学 Visual semantic and position sensing method and system based on infrared thermal imaging
CN112567383A (en) * 2020-03-06 2021-03-26 深圳市大疆创新科技有限公司 Object detection method, movable platform, device and storage medium
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111899301A (en) * 2020-06-02 2020-11-06 广州中国科学院先进技术研究所 Workpiece 6D pose estimation method based on deep learning
CN111881932A (en) * 2020-06-11 2020-11-03 中国人民解放军战略支援部队信息工程大学 FasterRCNN target detection algorithm for military aircraft
CN111881932B (en) * 2020-06-11 2023-09-15 中国人民解放军战略支援部队信息工程大学 FasterRCNN target detection algorithm for military aircraft
CN114076937A (en) * 2020-08-20 2022-02-22 北京万集科技股份有限公司 Laser radar and camera combined calibration method and device, server and computer readable storage medium
CN112101222A (en) * 2020-09-16 2020-12-18 中国海洋大学 Sea surface three-dimensional target detection method based on unmanned ship multi-mode sensor
CN112541886A (en) * 2020-11-27 2021-03-23 北京佳力诚义科技有限公司 Laser radar and camera fused artificial intelligence ore identification method and device
CN112529072A (en) * 2020-12-07 2021-03-19 中国船舶重工集团公司七五0试验场 Underwater buried object identification and positioning method based on sonar image processing
CN112652064A (en) * 2020-12-07 2021-04-13 中国自然资源航空物探遥感中心 Sea-land integrated three-dimensional model construction method and device, storage medium and electronic equipment
CN112652064B (en) * 2020-12-07 2024-02-23 中国自然资源航空物探遥感中心 Sea-land integrated three-dimensional model construction method and device, storage medium and electronic equipment
CN112666534A (en) * 2020-12-31 2021-04-16 武汉理工大学 Unmanned ship route planning method and device based on laser radar recognition algorithm
CN112733753B (en) * 2021-01-14 2024-04-30 江苏恒澄交科信息科技股份有限公司 Bridge azimuth recognition method and system combining convolutional neural network and data fusion
CN112733753A (en) * 2021-01-14 2021-04-30 江苏恒澄交科信息科技股份有限公司 Bridge orientation identification method and system combining convolutional neural network and data fusion
CN112927237A (en) * 2021-03-10 2021-06-08 太原理工大学 Honeycomb lung focus segmentation method based on improved SCB-Unet network
CN113159042A (en) * 2021-03-30 2021-07-23 苏州市卫航智能技术有限公司 Laser vision fusion unmanned ship bridge opening passing method and system
CN113033572B (en) * 2021-04-23 2024-04-05 上海海事大学 Obstacle segmentation network based on USV and generation method thereof
CN113033572A (en) * 2021-04-23 2021-06-25 上海海事大学 Obstacle segmentation network based on USV and generation method thereof
CN113362395A (en) * 2021-06-15 2021-09-07 上海追势科技有限公司 Sensor fusion-based environment sensing method
CN113485375B (en) * 2021-08-13 2023-03-24 苏州大学 Indoor environment robot exploration method based on heuristic bias sampling
CN113485375A (en) * 2021-08-13 2021-10-08 苏州大学 Indoor environment robot exploration method based on heuristic bias sampling
CN113936198A (en) * 2021-11-22 2022-01-14 桂林电子科技大学 Low-beam laser radar and camera fusion method, storage medium and device
CN113936198B (en) * 2021-11-22 2024-03-22 桂林电子科技大学 Low-beam laser radar and camera fusion method, storage medium and device
CN114527468B (en) * 2021-12-28 2024-08-27 湖北三江航天红峰控制有限公司 Personnel detection system based on laser radar
CN114527468A (en) * 2021-12-28 2022-05-24 湖北三江航天红峰控制有限公司 Special scene personnel detection system based on laser radar
CN114332647A (en) * 2021-12-31 2022-04-12 合肥工业大学 River channel boundary detection and tracking method and system for unmanned ship
CN114692731A (en) * 2022-03-09 2022-07-01 华南理工大学 Environment perception fusion method and system based on monocular vision and laser ranging array
CN114692731B (en) * 2022-03-09 2024-05-28 华南理工大学 Environment perception fusion method and system based on monocular vision and laser ranging array
CN114863258A (en) * 2022-07-06 2022-08-05 四川迪晟新达类脑智能技术有限公司 Method for detecting small target based on visual angle conversion in sea-sky-line scene
CN115015911B (en) * 2022-08-03 2022-10-25 深圳安德空间技术有限公司 Method and system for manufacturing and using navigation map based on radar image
CN115015911A (en) * 2022-08-03 2022-09-06 深圳安德空间技术有限公司 Method and system for manufacturing and using navigation map based on radar image
CN116310999B (en) * 2023-05-05 2023-07-21 贵州中水能源股份有限公司 Method for detecting large floaters in reservoir area of hydroelectric power station
CN116310999A (en) * 2023-05-05 2023-06-23 贵州中水能源股份有限公司 Method for detecting large floaters in reservoir area of hydroelectric power station

Also Published As

Publication number Publication date
WO2020237693A1 (en) 2020-12-03
CN110188696B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110188696A (en) Multi-source perception method and system for unmanned surface equipment
WO2021142902A1 (en) DANet-based unmanned aerial vehicle coastline floating garbage inspection system
CN108596101B (en) Remote sensing image multi-target detection method based on convolutional neural network
CN110956651B (en) Terrain semantic perception method based on fusion of vision and vibrotactile sense
CN106127204B (en) Multi-directional meter-reading region detection algorithm based on fully convolutional neural networks
CN105405165B (en) Simulation system for terrain morphological analysis and forced-landing region extraction during general-purpose unmanned aerial vehicle flight
CN107690840B (en) Vision-aided navigation method and system for unmanned aerial vehicles
CN109598241A (en) Marine vessel recognition method for satellite images based on Faster R-CNN
CN109597087A (en) 3D object detection method based on point cloud data
CN109146889A (en) Field boundary extraction method based on high-resolution remote sensing images
WO2021076914A1 (en) Geospatial object geometry extraction from imagery
CN109086668A (en) Multi-scale road information extraction method for unmanned aerial vehicle remote sensing images based on generative adversarial networks
Xing et al. Multi-UAV cooperative system for search and rescue based on YOLOv5
CN110070025A (en) Object detection system and method based on monocular images
CN110443201A (en) Target recognition method based on joint multi-source image shape analysis and multi-attribute fusion
CN109543632A (en) Deep-network pedestrian detection method guided by shallow-layer feature fusion
CN114612769B (en) Integrated-sensing infrared imaging ship detection method fused with local structure information
CN107194343B (en) Traffic light detection method based on position-dependent convolution and the Fire model
Zhang et al. Research on unmanned surface vehicles environment perception based on the fusion of vision and lidar
CN110060273A (en) Remote sensing image landslide mapping method based on deep neural networks
CN113159042A (en) Method and system for unmanned ship passage through bridge openings based on laser-vision fusion
CN109492606A (en) Multispectral vector image acquisition method and system, and three-dimensional monolithic modeling method and system
CN114926739A (en) Unmanned collaborative acquisition and processing method for underwater and above-water geospatial information of inland waterways
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
Gao et al. Road extraction using a dual attention dilated-linknet based on satellite images and floating vehicle trajectory data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240906

Address after: No. 381 Wushan Road, Tianhe District, Guangzhou, Guangdong 510641

Patentee after: South China University of Technology

Country or region after: China

Patentee after: Guangzhou Shipyard International Co.,Ltd.

Address before: South China University of Technology, Wushan Road, Tianhe District, Guangzhou, Guangdong 510640

Patentee before: South China University of Technology

Country or region before: China
