CN113933828A - Unmanned ship environment self-adaptive multi-scale target detection method and system - Google Patents

Unmanned ship environment self-adaptive multi-scale target detection method and system

Info

Publication number: CN113933828A
Application number: CN202111213649.8A
Authority: CN (China)
Prior art keywords: radar, image, target, information, data
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 周洋, 王名洺, 李小毛, 张珂维, 彭艳, 罗均, 谢少荣
Current assignee: University of Shanghai for Science and Technology
Original assignee: University of Shanghai for Science and Technology
Application filed by University of Shanghai for Science and Technology
Priority and filing date: 2021-10-19
Publication date: 2022-01-14

Classifications

    • G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867: Combination of radar systems with cameras
    • G01S13/87: Combinations of radar systems, e.g. primary radar and secondary radar
    • G01S13/88: Radar or analogous systems specially adapted for specific applications
    • G01S13/89: Radar or analogous systems specially adapted for mapping or imaging
    • G01S13/93: Radar or analogous systems specially adapted for anti-collision purposes
    • G01S13/937: Radar or analogous systems for anti-collision purposes of marine craft
    • G06F18/25: Pattern recognition; Analysing; Fusion techniques


Abstract

The invention discloses an unmanned ship environment self-adaptive multi-scale target detection method and system, relating to the technical field of unmanned ships and multi-sensor fusion. The method comprises the following steps: preprocessing acquired radar data to obtain a radar estimated target set, the radar data comprising data collected by a millimeter wave radar and data collected by a marine radar; preprocessing an original environment image acquired by a vision sensor to obtain a global atmospheric light value and a transmittance; restoring the original environment image according to the global atmospheric light value and the transmittance to obtain an enhanced image; obtaining a preliminary detection result from the enhanced image and a pre-trained target detection model, the preliminary detection result comprising the confidence, detection frame and category information of the image estimated target; and fusing the radar estimated target set, the enhanced image and the preliminary detection result to determine the pose information, category information and confidence of the sea surface target. The method and system can accurately determine sea surface target information.

Description

Unmanned ship environment self-adaptive multi-scale target detection method and system
Technical Field
The invention relates to the technical field of unmanned ships and multi-sensor fusion, in particular to an unmanned ship environment self-adaptive multi-scale target detection method and system.
Background
Unmanned surface vessels are used primarily to perform tasks that are dangerous or unsuitable for manned vessels. Once equipped with advanced control, sensor, communication and weapon systems, they can perform a variety of combat and non-combat military missions. In civil applications, unmanned surface vessels can carry out tasks such as search and rescue, navigation and hydrographic survey. In military applications in particular, the unmanned surface vessel (hereinafter, unmanned ship) is flexible to operate, quick to deploy and convenient to use: it can accompany a combat ship under way, and it can independently and autonomously perform tasks in areas that are dangerous or unsuitable for dispatching a manned ship.
The sensors usable on traditional unmanned ships include marine radar, laser radar, vision sensors, millimeter wave radar, and the like. When an unmanned ship executes a task, it is often equipped with only a single sensor chosen for the task requirement and uses that sensor's data alone; alternatively, multiple sensors may be fitted but their data processed and used independently. However, the weather at sea is changeable, with rain and fog frequently encountered, and different sensors have their own advantages, disadvantages and environmental adaptability; detection with a single sensor in an unsuitable scene is therefore error-prone. For example, the vision sensor is low-cost, can provide visual images, and, combined with various existing detection algorithms, supplies detailed target category and relative position information, but is highly susceptible to rain and fog; the millimeter wave radar is low-cost, can provide fairly accurate two-dimensional position and speed information, and is not easily affected by rain and fog, but has a small detection range; the marine radar is low-cost, not easily affected by weather, and wide in detection range, and can provide target position and speed information, but with large errors.
Existing multi-sensor fusion techniques applied to unmanned ships focus more on the fusion of homogeneous sensors, which improves detection accuracy, but the information and detection range obtainable from a single sensor type are limited and cannot adapt well to most complex scenes. Fusing vision sensor and laser radar data has also been tried in unmanned ship emergency obstacle avoidance with good results, but the detection range of laser radar is limited and strongly environment-dependent, and broader application scenarios are lacking.
Disclosure of Invention
The invention aims to provide an unmanned ship environment self-adaptive multi-scale target detection method and system, so as to detect sea surface targets accurately.
In order to achieve the purpose, the invention provides the following scheme:
an unmanned ship environment self-adaptive multi-scale target detection method comprises the following steps:
acquiring radar data and preprocessing the radar data to obtain a radar estimated target set; the radar data comprises data collected by a millimeter wave radar and data collected by a navigation radar;
acquiring an original environment image acquired by a visual sensor, and preprocessing the original environment image to obtain a global atmospheric light value and a transmittance;
restoring the original environment image according to the global atmospheric light value and the transmissivity to obtain an enhanced image;
obtaining a preliminary detection result according to the enhanced image and the pre-trained target detection model; the preliminary detection result comprises the confidence coefficient, the detection frame and the category information of the image estimation target;
and fusing the radar estimated target set, the enhanced image and the preliminary detection result, and determining pose information, category information and confidence of the sea surface target.
Optionally, the preprocessing of the radar data specifically includes:
processing the radar data by adopting a density-based clustering algorithm to obtain N clusters;
performing clutter filtering and repeated target removing operation on the once processed radar data to obtain secondarily processed radar data; the radar data after the primary processing comprises N clusters, the radar data after the secondary processing comprises M clusters, and N is greater than or equal to M; the cluster represents radar estimated target information;
and constructing a radar estimated target set based on the radar data after the secondary processing.
Optionally, the preprocessing of the original environment image to obtain a global atmospheric light value and a transmittance specifically includes:
converting the original environment image to obtain a gray level image;
calculating gradient information of the gray level image, and dividing the original environment image based on the gradient information to obtain a sky area and a non-sky area;
and calculating the global atmospheric light value of the sky region, the transmissivity of the sky region and the transmissivity of the non-sky region by adopting a dark channel prior algorithm based on fast guided filtering.
Optionally, the calculating of the global atmospheric light value of the sky region, the transmissivity of the sky region and the transmissivity of the non-sky region by the dark channel prior algorithm based on fast guided filtering specifically includes:
calculating a dark channel image of the original environment image according to a dark channel prior theory;
screening pixels of which the brightness values are larger than a set threshold value in the dark channel image in the sky area;
determining and arranging the gray value of each pixel, and then taking the maximum gray value as the global atmospheric light value of the sky area;
and calculating the transmissivity of the sky area by adopting a fast guiding filtering algorithm based on the dark channel image and the global atmospheric light value.
Optionally, the fusion of the radar estimated target set, the enhanced image and the preliminary detection result to determine pose information, category information and confidence of the sea surface target specifically includes:
drawing a radar chart based on data in the radar estimated target set;
performing space synchronization processing on the enhanced image and the radar chart to realize mapping from image coordinates to world coordinates so as to obtain pose information of sea surface targets; the pose information comprises position information and speed information; wherein the speed information is determined from the radar chart, and the position information is expressed in world coordinates;
calculating the distance between the sea surface target and the unmanned ship based on the position information;
calculating self-adaptive parameters of the visual sensor according to the distance, the transmissivity of the sky area in the enhanced image and the confidence coefficient of the image estimation target;
calculating self-adaptive parameters of the radar according to the distance; the radar comprises a millimeter wave radar and a navigation radar;
calculating the confidence coefficient of a sea surface target according to the adaptive parameters of the visual sensor and the adaptive parameters of the radar;
and determining the category information of the image estimation target as the category information of the sea surface target.
Optionally, the spatial synchronization processing of the enhanced image and the radar chart to realize mapping from image coordinates to world coordinates, so as to obtain pose information of a sea surface target, specifically includes:
mapping the enhanced image to the radar chart to obtain a comprehensive image;
establishing a world coordinate system by taking the exact center of the initial position of the unmanned ship as an origin;
and converting the pixel coordinates on the comprehensive image into the world coordinate system to realize the mapping from the image coordinates to the world coordinates so as to obtain the pose information of the sea surface target.
An unmanned ship environment adaptive multi-scale target detection system, comprising: the device comprises a control calculation component, and a millimeter wave radar, a navigation radar and a vision sensor which are connected with the control calculation component;
wherein the control calculation component is to:
acquiring radar data and preprocessing the radar data to obtain a radar estimated target set; the radar data comprises data collected by a millimeter wave radar and data collected by a navigation radar;
acquiring an original environment image acquired by a visual sensor, and preprocessing the original environment image to obtain a global atmospheric light value and a transmittance;
restoring the original environment image according to the global atmospheric light value and the transmissivity to obtain an enhanced image;
obtaining a preliminary detection result according to the enhanced image and the pre-trained target detection model; the preliminary detection result comprises the confidence coefficient, the detection frame and the category information of the image estimation target;
and fusing the radar estimated target set, the enhanced image and the preliminary detection result, and determining pose information, category information and confidence of the sea surface target.
Optionally, the marine radar is mounted at the top of the unmanned ship.
Optionally, the number of the millimeter wave radars is three, and the three millimeter wave radars are all installed at the bow of the unmanned ship; the detection ranges of the three millimeter wave radars are not coincident and cover the range of 180 degrees ahead.
Optionally, the number of the visual sensors is three, and the three visual sensors are installed directly below the marine radar; wherein the fields of view of the three visual sensors do not overlap and cover the 180° range ahead.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
by the technical scheme provided by the invention, the advantages of different sensors can be fully exerted, and under the common sunny, rainy and foggy weather, the unmanned ship can more accurately judge the pose information and the category information of sea surface targets with different distance scales in a certain range, so that a better target detection result is obtained.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flow chart of the radar data preprocessing of the present invention;
FIG. 2 is a schematic diagram of the original environment image and the rain- and fog-removal result obtained by the dark channel prior theory according to the present invention: (a) the original environment image; (b) the enhanced image;
FIG. 3 is a flow chart of the fusion algorithm of the present invention;
FIG. 4 is a flow chart of a method for detecting an unmanned surface vehicle environment adaptive multi-scale target according to the present invention;
fig. 5 is a structural diagram of an unmanned ship environment adaptive multi-scale target detection system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To date, methods that apply the fused data of millimeter wave radar, marine radar and vision sensors to sea surface target detection remain immature, and most suffer from low accuracy, poor robustness and poor environmental adaptability. Multi-sensor information fusion is expected to overcome the shortcomings of a single sensor and fully exploit mutually complementary information, improving the reliability of offshore target detection; it is clearly the mainstream of technical development.
To address the problem that the different sensors on an unmanned surface vehicle are susceptible to the environment and the detection scale, which degrades the detection result, this embodiment provides an unmanned ship environment self-adaptive multi-scale target detection method based on multi-sensor fusion after image preprocessing. The method first acquires data collected by the millimeter wave radar, the marine radar and a vision sensor (such as a visible light camera); second, clutter is filtered from the radar data by a clustering method; then the transmittance and the global ambient light (also called the global atmospheric light value) are obtained from an atmospheric scattering model and a dark channel prior method based on fast guided filtering; an image whose observation is impaired by rain and fog is restored to a rain- and fog-removed image according to the transmittance and the global ambient light, enhancing the observation effect of the image and thereby improving the accuracy of the target detection algorithm; the enhanced image (i.e. the rain- and fog-removed image) is input to a pre-trained target detection neural network to obtain a preliminary detection result (comprising the confidence, the detection frame and the target category); finally, the preprocessed radar data, the enhanced image and the target detection result are input to a fusion module simultaneously, the fusion weights assigned to the heterogeneous sensors are determined according to the transmittance and the estimated target distance, and the final pose information and category information of the target are obtained. The specific steps are as follows:
the method comprises the following steps: radar data are obtained and preprocessed: and summarizing the data acquired by the marine radar and the data acquired by the millimeter wave radar to form radar data, and processing the radar data by using a density-based clustering algorithm.
Installing a navigation radar at the top of the unmanned ship; the three millimeter wave radars are arranged at the bow of the unmanned ship, and the detection ranges of the three millimeter wave radars are not coincident and cover the range of 180 degrees in front; the three visible light cameras are arranged right below the marine radar, and the visual fields of the three visible light cameras are not overlapped and cover the range of 180 degrees in front.
Taking the exact center of the unmanned ship at its initial position as the origin of a world coordinate system, the external parameter (extrinsic) matrix of each sensor is obtained from its installation position; the data acquired by the marine radar and the data acquired by the millimeter wave radar are converted into the same coordinate system based on these extrinsic matrices.
Wherein the external reference matrix is a product of the rotation matrix and the translation matrix; the rotation matrix comprises a yaw angle of the sensor relative to the bow position of the unmanned boat, a pitch angle and a roll angle of a sensor installation plane relative to the center plane of the unmanned boat, and the translation matrix comprises an offset of the sensor relative to the center of the unmanned boat.
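For illustration, a minimal Python sketch of one conventional way to assemble such an extrinsic matrix from the installation angles and offset (the Z-Y-X rotation order and all names are assumptions for illustration, not fixed by the text):

```python
import numpy as np

def extrinsic_matrix(yaw, pitch, roll, offset):
    """Assemble a 4x4 extrinsic matrix as rotation followed by translation.

    yaw/pitch/roll are installation angles (radians) relative to the bow
    direction and center plane of the boat; offset is the sensor's 3-D
    offset (meters) from the boat center. Conventions are assumptions.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # rotation part (Z-Y-X order assumed)
    T[:3, 3] = offset          # translation part: offset from boat center
    return T
```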
Referring to fig. 1, the first step mainly includes:
firstly, the data of the marine radar and the data of the millimeter wave radar are aggregated, based on the extrinsic matrices, into the radar data R_raw, and R_raw is processed with a density-based clustering algorithm: all points are traversed and the Euclidean distance from each point to its adjacent points is calculated; if the distance is smaller than a set threshold, the points are assigned to the same cluster, and the loop repeats until every point belongs to a cluster. The threshold of the clustering algorithm is set to the measurement error of the sensor, and the adjacent points of a point are defined as the points within a circle of that threshold radius centered on it. Secondly, clusters that contain only one point and carry no radar tracking identification are identified as clutter and screened out; for the remaining clusters, the points within each cluster are weight-averaged. The result is the radar estimated target set R_target with clutter and duplicate targets filtered out.
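A minimal Python sketch of this preprocessing, assuming 2-D radar points with an optional per-point tracking ID: the threshold eps plays the role of the sensor measurement error, the clutter rule (a single-point cluster without a tracking ID) follows the description above, and a plain average stands in for the weighted average; all names are illustrative.

```python
import numpy as np

def preprocess_radar(points, track_ids, eps):
    """Cluster radar points by Euclidean proximity and filter clutter.

    points: (N, 2) positions in the common coordinate system;
    track_ids: length-N list, None for points without a tracking ID;
    eps: clustering threshold, set to the sensor measurement error.
    Returns the estimated target set R_target (one center per cluster).
    """
    points = np.asarray(points, dtype=float)
    labels = -np.ones(len(points), dtype=int)
    n_clusters = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = n_clusters
        stack = [i]
        while stack:  # grow the cluster transitively over eps-neighbors
            j = stack.pop()
            dist = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((dist < eps) & (labels == -1))[0]:
                labels[k] = n_clusters
                stack.append(k)
        n_clusters += 1

    targets = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # Clutter rule: a lone point with no radar tracking ID is discarded.
        if len(idx) == 1 and track_ids[idx[0]] is None:
            continue
        targets.append(points[idx].mean(axis=0))  # stands in for weighted mean
    return np.array(targets)
```

With the threshold tied to the sensor's measurement error, points that agree to within measurement noise collapse into one target, which is what makes the duplicate-target removal possible.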
This step has the advantage that the respective detection ranges of the millimeter wave radar and the marine radar complement each other across distance scales, while more accurate clutter filtering and target detection are achieved in the area where the two overlap.
Step two: an original environment image collected by a vision sensor (in this embodiment, a visible light camera) is obtained and preprocessed to obtain a global atmospheric light value and transmittance.
The second step specifically comprises:
firstly, the original environment image is converted to obtain a grayscale image; secondly, edge detection is performed on the grayscale image with the Canny operator, its gradient information is calculated, and the original image is divided into a sky region and a non-sky region according to that gradient information; then, the dark channel image of the original image is calculated according to the dark channel prior theory; next, the brightest 0.1% of pixels in the dark channel image are located within the sky region of the original image, and the maximum gray value among these pixels is taken as the atmospheric light value of the sky region, which serves as the global atmospheric light value; finally, given the dark channel image and the global atmospheric light value A, the transmittances t_s and t_o of the sky region and the non-sky region are calculated by fast guided filtering. The transmittance of the sky region is later used as a weight in the data fusion.
The transmittance calculation formula is as follows:

t(x) = 1 − ω · min_{y∈Ω(x)} ( min_{c∈{R,G,B}} I^c(y) / A^c )

wherein I^c represents one channel of the original environment image (R/G/B), A^c is the corresponding global atmospheric light value of that channel, Ω(x) is a local patch centered at pixel x, and ω is the adjustment coefficient. The adjustment coefficient needs to be selected according to the actual site; in this embodiment, the sky region adjustment coefficient is ω = 0.3 and the non-sky region adjustment coefficient is ω = 0.8.
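A sketch of the dark channel, global atmospheric light and transmittance computations described above, assuming a float32 BGR image scaled to [0, 1]; the sky/non-sky segmentation and the fast guided filter refinement are omitted, and the patch size is an assumed typical value:

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a local min filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def global_atmospheric_light(img, dark):
    """Maximum gray value among the brightest 0.1% of dark-channel pixels."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n:]   # brightest 0.1% in the dark channel
    return float(gray.ravel()[idx].max())

def transmittance(img, A, omega):
    """t = 1 - omega * dark_channel(I / A); omega = 0.3 (sky) or 0.8 (non-sky)."""
    return 1.0 - omega * dark_channel(img / A)
```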
This step has the advantage that different environments are distinguished through the calculated atmospheric transmittance, so that the information confidence of each sensor is determined and used as a weight, better exploiting the advantages of different sensors in different environments.
Step three: the original environment image is restored according to the global atmospheric light value of the sky region and the transmittances of the sky region and the non-sky region, yielding a rain- and fog-removed image with an enhanced observation effect; the enhanced image (the rain- and fog-removed image) is then detected with a pre-trained target detection model to obtain a preliminary detection result (comprising the target confidence, the detection frame (x_i, y_i, w, h) and the target class label information). The formula for restoring the original environment image is as follows:

J(x) = (I(x) − A) / max(t(x), t_0) + A

wherein J(x) represents the restored image, i.e. the rain- and fog-removed image, I(x) represents the original environment image, A is the global atmospheric light value, t(x) is the transmittance, and t_0 is a small lower bound that prevents division by near-zero transmittance.
The comparison effect of the original environment image and the rain and fog removing image is shown in fig. 2.
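A short sketch of the restoration step under the same assumptions (float32 image in [0, 1], scalar atmospheric light A); the lower bound t0 is a common safeguard against near-zero transmittance, not a value stated here:

```python
import numpy as np

def restore(img, t, A, t0=0.1):
    """J(x) = (I(x) - A) / max(t(x), t0) + A, applied per color channel."""
    t = np.clip(t, t0, 1.0)[..., None]   # broadcast over the channel axis
    return np.clip((img - A) / t + A, 0.0, 1.0)
```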
Step four: the preprocessed radar data are drawn into a radar chart, which is then spatially synchronized with the image data collected and processed by the visible light camera, completing the mapping from image coordinates to world coordinates.
The fourth step specifically comprises:
firstly, according to the respective set scales of the millimeter wave radar and the marine radar, the data in the radar estimated target set R_target are drawn into a radar chart I_radar; secondly, the enhanced image is mapped onto the radar chart I_radar, where the mapping relationship between the image coordinates (u_c, v_c) and the radar chart coordinates (u_r, v_r) is

[u_r, v_r, 1]^T = H_cr · [u_c, v_c, 1]^T

in which H_cr is the conversion matrix measured according to the installation positions in the previous joint calibration; then, a world coordinate system is established by taking the exact center of the initial position of the unmanned ship as the origin, the position of the target being (x, y), and the conversion from pixel coordinates on the radar chart to world coordinates follows from the chart geometry (each pixel corresponds to 2R/L meters, with the image v-axis pointing downward):

x = (u_r − L/2) · 2R/L,  y = (L/2 − v_r) · 2R/L

where L is the width of the radar chart and R is the detection range of the radar. In this embodiment, L is 1024.
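Under this geometry (an L×L chart centered on the vessel and covering radius R), the pixel-to-world mapping reduces to a scale and an offset; the sketch below makes the convention explicit, with the axis orientation and the default R being assumptions:

```python
def radar_pixel_to_world(u_r, v_r, L=1024, R=2000.0):
    """Map a radar-chart pixel (u_r, v_r) to world coordinates in meters.

    Assumes an L x L chart centered on the vessel, covering radius R
    (R = 2000.0 is a placeholder); image v grows downward by convention.
    """
    scale = 2.0 * R / L                  # meters per pixel
    x = (u_r - L / 2.0) * scale
    y = (L / 2.0 - v_r) * scale
    return x, y
```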
Within the overlapping detection range of the millimeter wave radar and the marine radar, the data are preliminarily fused; outside the detection range of the millimeter wave radar but within the detection distance of the marine radar, only the preprocessed marine radar data are used. In the invention, the detection range of the marine radar is no more than 10 times that of the millimeter wave radar.
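A sketch of this range gating, with r_mm and r_nav denoting the detection ranges of the millimeter wave radar and the marine radar (the names are illustrative):

```python
def active_radars(d, r_mm, r_nav):
    """Select radar sources at target distance d: fuse both radars inside
    the millimeter-wave range, use only the marine radar out to its range."""
    if d <= r_mm:
        return ("millimeter_wave", "marine")
    if d <= r_nav:
        return ("marine",)
    return ()
```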
This step has the advantage that targets in the image acquire distance (depth) information, so the finally output message carries richer target information for subsequent decision-making.
Step five: data fusion. The image coordinates are converted into world coordinates (x, y); the Euclidean distance between the target and the unmanned ship is calculated from the world coordinates; the adaptive parameters of the camera and the radar are obtained from the relationships among the Euclidean distance, the transmittance and the confidence; and finally the weighted pose information, category information and confidence of the target are output.
Referring to fig. 3, step five mainly includes:
firstly, step four provides the conversion matrix H_rw from radar chart coordinates to world coordinates and the conversion matrix H_cr from image coordinates to radar chart coordinates, so the conversion relationship from image coordinates to world coordinates is

[x', y', z']^T = H_rw · H_cr · [u_c, v_c, 1]^T

which is finally normalized by the homogeneous coordinate to give

(x, y) = (x'/z', y'/z')

the actual world coordinates of the target, i.e. its position information; the speed information of the target is determined from the target speed information of the corresponding entry in the radar estimated target set R_target.
Secondly, the Euclidean distance d between the target and the unmanned ship is calculated from the world coordinates, the adaptive parameter γ_c of the camera is obtained from the Euclidean distance d, the transmittance of the sky region and the confidence of the image estimated target, and the adaptive parameter γ_r of the radar is calculated from the Euclidean distance alone. (The closed-form expressions for the adaptive parameters γ_c and γ_r are given only as equation images in the original document.)
And then, the category information of the target is determined according to the category information in the preliminary detection result.
And finally, the confidence of the target is output after weighting according to the adaptive parameters of the camera and the radar. The confidence output by the final system is C = γ_c·W_c + γ_r·W_r, where the weight of the camera is defined as W_c and the weight of the radar as W_r, both initialized to 1; when a target exists only in the radar chart and not in the original environment image, W_c is set to 0, and conversely, when a target exists only in the image and not in the radar chart, W_r is set to 0.
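A sketch of this final weighting, taking the adaptive parameters γ_c and γ_r as already computed; the weight-zeroing rule follows the text above:

```python
def fused_confidence(gamma_c, gamma_r, seen_in_image, seen_in_radar):
    """C = gamma_c * W_c + gamma_r * W_r, with W_c = W_r = 1 by default;
    a modality that did not observe the target has its weight set to 0."""
    w_c = 1.0 if seen_in_image else 0.0
    w_r = 1.0 if seen_in_radar else 0.0
    return gamma_c * w_c + gamma_r * w_r
```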
In summary, the system outputs the detection frame (x_i, y_i, w, h) of the target, the target category label, the target confidence C, the target speed information from the corresponding entry in the radar estimated target set R_target, and the world coordinates (x, y) of the target.
Example two
In order to achieve the above object, this embodiment provides an adaptive multi-scale target detection method for an unmanned surface vehicle environment, please refer to fig. 4, which includes:
step 401: acquiring radar data and preprocessing the radar data to obtain a radar estimated target set; the radar data comprises data collected by the millimeter wave radar and data collected by the marine radar.
Step 402: the method comprises the steps of obtaining an original environment image collected by a vision sensor, and preprocessing the original environment image to obtain a global atmospheric light value and transmittance.
Step 403: and restoring the original environment image according to the global atmospheric light value and the transmissivity to obtain an enhanced image.
Step 404: obtaining a preliminary detection result according to the enhanced image and the pre-trained target detection model; the preliminary detection result comprises the confidence coefficient, the detection frame and the category information of the image estimation target.
Step 405: and fusing the radar estimated target set, the enhanced image and the preliminary detection result, and determining pose information, category information and confidence of the sea surface target.
Wherein, the preprocessing of the radar data includes:
and processing the radar data by adopting a density-based clustering algorithm to obtain N clusters.
Performing clutter filtering and repeated target removing operation on the once processed radar data to obtain secondarily processed radar data; the radar data after the primary processing comprises N clusters, the radar data after the secondary processing comprises M clusters, and N is greater than or equal to M; the clusters represent radar predicted target information.
And constructing a radar estimated target set based on the radar data after the secondary processing.
Preprocessing the original environment image to obtain a global atmospheric light value and a transmittance, specifically comprising:
and converting the original environment image to obtain a gray level image.
And calculating gradient information of the gray level image, and dividing the original environment image based on the gradient information to obtain a sky area and a non-sky area.
And calculating the global atmospheric light value of the sky region and the transmissivity of the sky region by adopting a dark channel prior algorithm based on fast guided filtering.
The calculation of the global atmospheric light value of the sky region, the transmissivity of the sky region and the transmissivity of the non-sky region by the dark channel prior algorithm based on fast guided filtering specifically comprises the following steps:
and calculating a dark channel image of the original environment image according to a dark channel prior theory.
And screening pixels of which the brightness values are larger than a set threshold value in the dark channel image in the sky area.
Determining and arranging the gray value of each pixel, and then taking the maximum gray value as the global atmospheric light value of the sky area.
And calculating the transmissivity of the sky area by adopting a fast guiding filtering algorithm based on the dark channel image and the global atmospheric light value.
Fusing the radar estimated target set, the enhanced image and the preliminary detection result to determine pose information, category information and confidence coefficient of the sea surface target, and specifically comprising the following steps:
and drawing a radar chart based on the data in the radar predicted target set.
Performing space synchronization processing on the enhanced image and the radar chart to realize mapping from image coordinates to world coordinates so as to obtain pose information of sea surface targets; the pose information comprises position information and speed information; wherein the speed information is determined from the radar chart, and the position information is expressed in world coordinates.
And calculating the distance between the sea surface target and the unmanned ship based on the position information.
And calculating the self-adaptive parameters of the visual sensor according to the distance, the transmissivity of the sky area in the enhanced image and the confidence coefficient of the image estimation target.
Calculating self-adaptive parameters of the radar according to the distance; the radar includes a millimeter wave radar and a marine radar.
And calculating the confidence coefficient of the sea surface target according to the adaptive parameters of the visual sensor and the adaptive parameters of the radar.
And determining the category information of the image estimation target as the category information of the sea surface target.
The step of performing spatial synchronization processing on the enhanced image and the radar chart to realize mapping from image coordinates to world coordinates so as to obtain pose information of sea surface targets specifically comprises the following steps:
and mapping the enhanced image to the radar chart to obtain a comprehensive image.
And establishing a world coordinate system by taking the exact center of the initial position of the unmanned ship as an origin.
And converting the pixel coordinates on the comprehensive image into the world coordinate system to realize the mapping from the image coordinates to the world coordinates so as to obtain the pose information of the sea surface target.
Example three
As shown in fig. 5, the present embodiment provides an unmanned ship environment adaptive multi-scale target detection system, including: the device comprises a control calculation component, and a millimeter wave radar, a navigation radar and a vision sensor which are connected with the control calculation component;
wherein the control calculation component is to:
acquiring radar data and preprocessing the radar data to obtain a radar estimated target set; the radar data comprises data collected by the millimeter wave radar and data collected by the marine radar.
The method comprises the steps of obtaining an original environment image collected by a vision sensor, and preprocessing the original environment image to obtain a global atmospheric light value and transmittance.
And restoring the original environment image according to the global atmospheric light value and the transmissivity to obtain an enhanced image.
Obtaining a preliminary detection result according to the enhanced image and the pre-trained target detection model; the preliminary detection result comprises the confidence coefficient, the detection frame and the category information of the image estimation target.
And fusing the radar estimated target set, the enhanced image and the preliminary detection result, and determining pose information, category information and confidence of the sea surface target.
Wherein, the marine radar is mounted at the top of the unmanned ship.
The number of the millimeter wave radars is three, and the three millimeter wave radars are all arranged at the bow of the unmanned ship; the detection ranges of the three millimeter wave radars are not coincident and cover the range of 180 degrees ahead.
The number of the vision sensors is three, and the three vision sensors are installed directly below the marine radar; the fields of view of the three vision sensors do not overlap and cover the 180° range ahead.
The invention discloses an unmanned ship environment self-adaptive multi-scale target detection method and system based on multi-sensor fusion after image preprocessing, which mainly comprise the following steps: collecting environmental data using a plurality of heterogeneous sensors; respectively preprocessing environment data obtained by different sensors, and screening out possible targets; processing the image shot by the camera according to the atmospheric scattering model and the dark channel prior theory to obtain atmospheric transmittance; and distributing different fusion weights to different sensors according to the transmissivity and the target distance estimated by the radar, and fusing the output results of the different sensors according to the fusion weights to obtain the pose information and the category information of the target.
Compared with the prior art, the invention has the following positive effects:
by the technical scheme provided by the invention, the advantages of different sensors can be fully exerted, and under the common sunny, rainy and foggy weather, the unmanned ship can more accurately judge the target pose information and the category information of different distance scales in a certain range, so that a better target detection result is obtained.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. An unmanned ship environment self-adaptive multi-scale target detection method is characterized by comprising the following steps:
acquiring radar data and preprocessing the radar data to obtain a radar estimated target set; the radar data comprises data collected by a millimeter wave radar and data collected by a navigation radar;
acquiring an original environment image acquired by a visual sensor, and preprocessing the original environment image to obtain a global atmospheric light value and a transmittance;
restoring the original environment image according to the global atmospheric light value and the transmissivity to obtain an enhanced image;
obtaining a preliminary detection result according to the enhanced image and the pre-trained target detection model; the preliminary detection result comprises the confidence coefficient, the detection frame and the category information of the image estimation target;
and fusing the radar estimated target set, the enhanced image and the preliminary detection result, and determining pose information, category information and confidence of the sea surface target.
2. The unmanned ship environment adaptive multi-scale target detection method of claim 1, wherein the preprocessing of the radar data specifically comprises:
processing the radar data by adopting a density-based clustering algorithm to obtain N clusters;
performing clutter filtering and repeated target removing operation on the once processed radar data to obtain secondarily processed radar data; the radar data after the primary processing comprises N clusters, the radar data after the secondary processing comprises M clusters, and N is greater than or equal to M; the cluster represents radar estimated target information;
and constructing a radar estimated target set based on the radar data after the secondary processing.
3. The unmanned ship environment adaptive multi-scale target detection method according to claim 1, wherein the preprocessing of the original environment image to obtain a global atmospheric light value and transmittance specifically comprises:
converting the original environment image to obtain a gray level image;
calculating gradient information of the gray level image, and dividing the original environment image based on the gradient information to obtain a sky area and a non-sky area;
and calculating the global atmospheric light value of the sky region, the transmissivity of the sky region and the transmissivity of the non-sky region by adopting a dark channel prior algorithm based on fast guided filtering.
4. The unmanned ship environment adaptive multi-scale target detection method of claim 3, wherein the calculating of the global atmospheric light value of the sky region, the transmissivity of the sky region and the transmissivity of the non-sky region by using the fast guided filtering based dark channel prior algorithm specifically comprises:
calculating a dark channel image of the original environment image according to a dark channel prior theory;
screening pixels of which the brightness values are larger than a set threshold value in the dark channel image in the sky area;
determining and arranging the gray value of each pixel, and then taking the maximum gray value as the global atmospheric light value of the sky area;
and calculating the transmissivity of the sky area by adopting a fast guiding filtering algorithm based on the dark channel image and the global atmospheric light value.
5. The unmanned ship environment adaptive multi-scale target detection method according to claim 1, wherein the fusion processing is performed on the radar estimated target set, the enhanced image and the preliminary detection result to determine pose information, category information and confidence of sea surface targets, specifically comprising:
drawing a radar chart based on data in the radar estimated target set;
performing space synchronization processing on the enhanced image and the radar chart to realize mapping from image coordinates to world coordinates so as to obtain pose information of sea surface targets; the pose information comprises position information and speed information; wherein the speed information is determined from the radar chart, and the position information is expressed in world coordinates;
calculating the distance between the sea surface target and the unmanned ship based on the position information;
calculating self-adaptive parameters of the visual sensor according to the distance, the transmissivity of the sky area in the enhanced image and the confidence coefficient of the image estimation target;
calculating self-adaptive parameters of the radar according to the distance; the radar comprises a millimeter wave radar and a navigation radar;
calculating the confidence coefficient of a sea surface target according to the adaptive parameters of the visual sensor and the adaptive parameters of the radar;
and determining the category information of the image estimation target as the category information of the sea surface target.
6. The unmanned ship environment adaptive multi-scale target detection method according to claim 5, wherein the spatial synchronization processing is performed on the enhanced image and the radar chart to realize mapping from image coordinates to world coordinates so as to obtain pose information of sea surface targets, specifically comprising:
mapping the enhanced image to the radar chart to obtain a comprehensive image;
establishing a world coordinate system by taking the exact center of the initial position of the unmanned ship as an origin;
and converting the pixel coordinates on the comprehensive image into the world coordinate system to realize the mapping from the image coordinates to the world coordinates so as to obtain the pose information of the sea surface target.
7. An unmanned ship environment self-adaptive multi-scale target detection system is characterized by comprising: the device comprises a control calculation component, and a millimeter wave radar, a navigation radar and a vision sensor which are connected with the control calculation component;
wherein the control calculation component is to:
acquiring radar data and preprocessing the radar data to obtain a radar estimated target set; the radar data comprises data collected by a millimeter wave radar and data collected by a navigation radar;
acquiring an original environment image acquired by a visual sensor, and preprocessing the original environment image to obtain a global atmospheric light value and a transmittance;
restoring the original environment image according to the global atmospheric light value and the transmissivity to obtain an enhanced image;
obtaining a preliminary detection result according to the enhanced image and the pre-trained target detection model; the preliminary detection result comprises the confidence coefficient, the detection frame and the category information of the image estimation target;
and fusing the radar estimated target set, the enhanced image and the preliminary detection result, and determining pose information, category information and confidence of the sea surface target.
8. The unmanned ship environment adaptive multi-scale target detection system of claim 7, wherein the marine radar is installed at the top of the unmanned ship.
9. The unmanned ship environment adaptive multi-scale target detection system of claim 7, wherein the number of the millimeter wave radars is three, and all three millimeter wave radars are installed at the bow of the unmanned ship; the detection ranges of the three millimeter wave radars are not coincident and cover the range of 180 degrees ahead.
10. The unmanned ship environment adaptive multi-scale target detection system of claim 8, wherein the number of the visual sensors is three, and the three visual sensors are installed directly below the marine radar; wherein the fields of view of the three visual sensors do not overlap and cover the 180° range ahead.
CN202111213649.8A 2021-10-19 2021-10-19 Unmanned ship environment self-adaptive multi-scale target detection method and system Pending CN113933828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111213649.8A 2021-10-19 2021-10-19 Unmanned ship environment self-adaptive multi-scale target detection method and system

Publications (1)

Publication Number Publication Date
CN113933828A 2022-01-14

Family

ID=79280178

Country Status (1)

Country Link
CN (1) CN113933828A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311288A (en) * 2022-10-12 2022-11-08 江苏魔视智能科技有限公司 Method for detecting damage of automobile film
CN116148801A (en) * 2023-04-18 2023-05-23 深圳市佰誉达科技有限公司 Millimeter wave radar-based target detection method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination