CN112394726B - Unmanned ship obstacle fusion detection method based on evidence theory - Google Patents

Unmanned ship obstacle fusion detection method based on evidence theory

Info

Publication number
CN112394726B
CN112394726B · Application CN202011122117.9A
Authority
CN
China
Prior art keywords
detection
grid
unmanned ship
probability distribution
obstacle
Prior art date
Legal status
Active
Application number
CN202011122117.9A
Other languages
Chinese (zh)
Other versions
CN112394726A (en)
Inventor
刘德庆
张�杰
金久才
Current Assignee
First Institute of Oceanography MNR
Original Assignee
First Institute of Oceanography MNR
Priority date
Filing date
Publication date
Application filed by First Institute of Oceanography MNR filed Critical First Institute of Oceanography MNR
Priority to CN202011122117.9A
Publication of CN112394726A
Application granted
Publication of CN112394726B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/0206 Control of position or course in two dimensions specially adapted to water vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an unmanned ship obstacle fusion detection method based on evidence theory, which comprises the following steps: constructing a grid map of the area where the unmanned ship is located with the position of the unmanned ship as the center, and setting the identification frame of each grid; taking the detection data of n sensors in the unmanned ship detection system as evidence for judging the obstacle attribute of each grid; determining a basic probability distribution function m for each evidence; and judging the obstacle attribute of each grid according to the detection data of the n sensors and the basic probability distribution functions, so as to obtain the obstacle condition of the area where the unmanned ship is located. The detection method combines the complementary characteristics of multiple detection means and uses the multi-source heterogeneous detection data of n different types of sensors to realize fusion detection of unmanned ship obstacles; it fully exploits the complementarity among different obstacle detection sensors, enriches the obstacle detection information of the unmanned ship, effectively reduces false detections and missed detections of a single sensor, and greatly improves the reliability of unmanned ship obstacle detection.

Description

Unmanned ship obstacle fusion detection method based on evidence theory
Technical Field
The invention relates to the technical field of unmanned ship environment perception, in particular to an unmanned ship obstacle fusion detection method based on an evidence theory.
Background
As a novel autonomous surface platform, the unmanned ship plays an increasingly important role in ocean mapping, environmental monitoring, offshore target monitoring and similar fields. However, the marine environment is complex: obstacles such as ships and reefs, together with sea waves, sea fog and other factors, seriously endanger the navigation safety of unmanned ships.
Good perception capability is an important guarantee that an unmanned ship can perform a given task safely and autonomously. Current obstacle detection means for unmanned ships include monocular vision, binocular vision, lidar, millimeter wave radar, marine radar and the like. Monocular vision can acquire the shape and color information of a target, but it is easily affected by ambient lighting conditions and its ranging capability is insufficient. Binocular vision has the advantage over monocular vision that it can measure the distance and bearing of a target. Lidar has high ranging accuracy and is suitable for short-range target detection, but it is sensitive to factors such as shaking of the unmanned ship platform. Millimeter wave radar offers high ranging accuracy and strong all-weather capability for short-range obstacle detection, but its field of view is small. Marine radar is mainly used to detect long-range targets at sea; it has a certain blind zone at short range and limited capability for detecting small targets. In summary, the different obstacle detection means each have advantages and disadvantages and are complementary to one another, including complementarity in sensing distance, in the sensed target information, and in day and night working conditions, but a single detection means can hardly meet the requirements of autonomous collision avoidance of an unmanned ship at sea. Multi-sensor obstacle detection for unmanned ships is therefore a current research trend, and how to realize heterogeneous data fusion among multiple sensors is a key problem.
Disclosure of Invention
In view of the above, the invention provides an unmanned ship obstacle fusion detection method based on evidence theory, so as to solve the problem that a single detection means in the prior art is limited and can hardly meet the offshore autonomous collision avoidance requirements of an unmanned ship.
In order to achieve the purpose, the technical scheme of the unmanned ship obstacle fusion detection method based on the evidence theory provided by the invention is as follows:
an unmanned ship obstacle fusion detection method based on evidence theory, wherein a sensor detection system is arranged on the unmanned ship, the sensor detection system comprises n different types of sensors for detecting obstacles, and n is an integer greater than or equal to 2, and the method comprises the following steps:
constructing a grid map of the area where the unmanned ship is located with the position of the unmanned ship as the center, and setting the identification frame of each grid as Θ = {O, P}, wherein O and P respectively represent the obstacle area and the passable area;
acquiring detection data of the n sensors in the sensor detection system as evidence for judging the obstacle attribute (O or P) of each grid;
determining a basic probability distribution function m for each evidence, wherein the basic probability distribution function m satisfies the following formula:
m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1,
wherein A is a subset of the identification frame Θ;
and judging the obstacle attribute of each grid according to the detection data of the n sensors and the basic probability distribution functions of the n sensors, so as to obtain the obstacle condition of the area where the unmanned ship is located.
Preferably, the method for acquiring the basic probability distribution function m of n sensors includes the following steps:
dividing the grid map into areas R1, …, Rk according to the detection range of each sensor;
Determining probability distribution of obstacle attribute of each sensor in the corresponding area according to detection accuracy of each sensor in the corresponding area;
obtaining the basic probability distribution functions m1, …, mn of the sensors according to the probability distribution of the obstacle attribute of each sensor in each area.
Preferably, determining the obstacle attribute of each grid from the detection data of the n sensors and the basic probability distribution function of the n sensors includes the steps of:
according to the evidence information provided by the detection data of the n sensors, determining the basic probability distribution functions m1, …, mn corresponding to the evidences in the grid;
calculating the combined basic probability distribution function values m(O) and m(P) in the grid according to the basic probability distribution functions of the n sensors;
judging whether m(O) is greater than m(P); if so, judging that the grid is an obstacle area, and if not, judging that the grid is a passable area.
Preferably, the combined basic probability distribution function is calculated by the following formula:
m(A) = (1/(1-K)) · Σ_{A1∩A2∩…∩An=A} m1(A1)·m2(A2)·…·mn(An), for A ≠ ∅, and m(∅) = 0,
wherein K is an inconsistency factor for reflecting the degree of conflict between the detection results of the respective sensors.
Preferably, the inconsistency factor K is calculated by the following formula:
K = Σ_{A1∩A2∩…∩An=∅} m1(A1)·m2(A2)·…·mn(An).
preferably, after acquiring detection data of n sensors in the sensor detection system, invalid data in the detection data is removed to obtain valid data, and obstacle properties of each grid are determined according to the valid data.
Preferably, the invalid data is removed by defining a ranging range threshold for each sensor.
Preferably, the n sensors include three-dimensional lidar, millimeter wave radar and binocular vision cameras.
Preferably, the grid map is divided into four regions R1, R2, R3 and R4, wherein R1 represents the area in which detection data from all three sensors (three-dimensional lidar, millimeter wave radar and binocular vision camera) may exist, R2 represents the area in which only detection data from the millimeter wave radar and the binocular vision camera may exist, R3 represents the area in which only detection data from the three-dimensional lidar may exist, and R4 represents the area with no detection data.
The unmanned ship obstacle fusion detection method based on the evidence theory has the beneficial effects that:
by combining the complementarity characteristics among multiple detection means, the fusion detection of the unmanned ship obstacle is realized by adopting the multi-source heterogeneous detection data of n different types of sensors, the complementarity among different obstacle detection sensors is fully exerted, the obstacle detection information of the unmanned ship is enriched, the false detection and omission detection of a single sensor are effectively reduced, and the reliability of the unmanned ship obstacle detection is greatly improved.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a flow chart of an unmanned ship obstacle fusion detection method provided by an embodiment of the invention;
FIGS. 2 (a) and 2 (b) are diagrams of the sea-surface obstacle detection experimental scenes of the unmanned ship according to an embodiment of the invention;
FIGS. 3 (a) and 3 (b) show the raw multi-sensor detection data of the different obstacle scenes according to an embodiment of the present invention;
fig. 4 (a) and fig. 4 (b) are the preprocessing results of multi-sensor detection data of different obstacle scenes according to the embodiment of the present invention;
FIG. 5 is a schematic view of the detection areas of the three-dimensional lidar, millimeter-wave radar and binocular vision of the present invention;
fig. 6 (a) and fig. 6 (b) are obstacle fusion detection results of different scenes according to an embodiment of the present invention.
Detailed Description
The invention is further illustrated below with reference to examples.
Aiming at the problem that a single detection means in the prior art is limited and can hardly meet the offshore autonomous collision avoidance requirements of an unmanned ship, the application provides an unmanned ship obstacle fusion detection method based on evidence theory. The unmanned ship is provided with a sensor detection system comprising n different types of sensors for detecting obstacles, where n is an integer greater than or equal to 2, and the obstacle condition of the area where the unmanned ship is located is obtained through multi-sensor fusion detection. Specifically, as shown in fig. 1, the method comprises the following steps:
S100, constructing a grid map of the area where the unmanned ship is located with the position of the unmanned ship as the center, and setting the identification frame of each grid as Θ = {O, P}, wherein O and P respectively represent the obstacle area and the passable area;
S200, acquiring detection data of the n sensors in the sensor detection system and taking the detection data as evidence for judging the obstacle attribute (O or P) of each grid;
S300, determining a basic probability distribution function m for each evidence, wherein the basic probability distribution function m satisfies the following formula:
m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1,
wherein A is a subset of the identification frame Θ;
S400, judging the obstacle attribute of each grid according to the detection data of the n sensors and the basic probability distribution functions of the n sensors, so as to obtain the obstacle condition of the area where the unmanned ship is located.
The fusion detection method combines the complementary characteristics of multiple detection means and uses the multi-source heterogeneous detection data of n different types of sensors to realize fusion detection of unmanned ship obstacles; it fully exploits the complementarity among different obstacle detection sensors, enriches the obstacle detection information of the unmanned ship, effectively reduces false detections and missed detections of a single sensor, and greatly improves the reliability of unmanned ship obstacle detection.
The n different types of sensors may be any sensor capable of detecting the position of an obstacle, such as binocular vision cameras, lidar, millimeter wave radar, marine radar, etc.
In step S100, the extent of the grid map of the area where the unmanned ship is located may be set according to the specific situation, preferably in combination with the maximum effective offshore ranging range of each sensor. For example, when the sensors are a three-dimensional lidar, a millimeter wave radar and a binocular vision camera, the effective ranging range of the three-dimensional lidar is 100 m and that of the millimeter wave radar and binocular vision is about 200 m; accordingly, the two-dimensional grid map constructed with the unmanned ship coordinate point as the center has a size of 400 m × 400 m, and the resolution (grid size) of the map is preferably set to 5 m.
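As a rough illustration of this step, the sketch below builds such a ship-centered grid with NumPy; the 400 m extent and 5 m resolution follow the values above, while the array layout, the initial 0.5/0.5 cell masses and the coordinate convention are assumptions made for the example.

```python
import numpy as np

GRID_EXTENT_M = 400.0   # overall map size from the embodiment (400 m x 400 m)
GRID_RES_M = 5.0        # grid resolution (5 m), as described above
N_CELLS = int(GRID_EXTENT_M / GRID_RES_M)   # 80 x 80 cells

def make_grid_map():
    """Create empty mass grids m(O) and m(P) centered on the unmanned ship.

    Cells start at the vacuous 0.5/0.5 assignment; this initial value is an
    assumption of the sketch, the patent only fixes the frame {O, P}.
    """
    m_O = np.full((N_CELLS, N_CELLS), 0.5)
    m_P = np.full((N_CELLS, N_CELLS), 0.5)
    return m_O, m_P

def world_to_cell(x_m, y_m):
    """Map ship-relative coordinates (metres, ship at the map center) to cell indices."""
    col = min(max(int((x_m + GRID_EXTENT_M / 2) // GRID_RES_M), 0), N_CELLS - 1)
    row = min(max(int((y_m + GRID_EXTENT_M / 2) // GRID_RES_M), 0), N_CELLS - 1)
    return row, col
```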
In step S200, the detection data of the multi-sensor detection system are used as evidence for judging the obstacle attribute of each grid, which is either an obstacle area or a passable area: if a sensor has detection data in a certain grid, that sensor's evidence marks the grid as an obstacle area; if it has no detection data there, its evidence marks the grid as a passable area. The obstacle area is an area containing an obstacle, and the passable area is an area without obstacles through which the ship can pass. Fig. 2 (a) and (b) are experimental scene diagrams under two different marine obstacle conditions (differing in the number, size and distance of the obstacles): fig. 2 (a) shows a scene with several small close-range targets such as a boat and a buoy, defined as obstacle scene I, and fig. 2 (b) shows a scene with a large long-range target such as a large boat, defined as obstacle scene II. Fig. 3 (a) and (b) show the raw multi-sensor detection data of obstacle scenes I and II respectively. As can be seen from fig. 3, the three-dimensional lidar, the millimeter wave radar and binocular vision can all provide information such as obstacle distance and bearing, but with certain differences; the obstacle information acquired by the different detection sensors is complementary, and false detections or missed detections occur.
In step S200, to ensure detection accuracy, it is preferable that after the detection data of the n sensors in the sensor detection system are acquired, invalid data are first removed to obtain valid data, and the obstacle attribute of each grid is then determined from the valid data. Removing invalid data effectively reduces the false detection rate. The invalid data mainly comprise short-range interference points generated by reflections from the unmanned ship itself and its wake, interference points outside the effective ranging range of a sensor, and the like. In a preferred embodiment, invalid data are removed by setting a ranging range threshold for each sensor; for example, when the sensors are a three-dimensional lidar, a millimeter wave radar and a binocular vision camera, the ranging range of the three-dimensional lidar is set to 5 m < d < 100 m, and the ranging ranges of the millimeter wave radar and the binocular vision camera are set to 10 m < d < 200 m. Fig. 4 (a) and (b) show the preprocessing results of the multi-sensor detection data for obstacle scenes I and II respectively; the results show that the lidar point clouds reflected by the ship itself are removed in both scenes, and in scene II some interference points of the millimeter wave radar outside its effective ranging range are also removed.
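A minimal sketch of this preprocessing step, assuming each detection is reduced to a (sensor, range, bearing) tuple; the thresholds are the ones quoted above, everything else (names, data layout) is illustrative.

```python
# Valid ranging intervals per sensor, taken from the thresholds quoted above.
VALID_RANGE_M = {
    "lidar":  (5.0, 100.0),
    "mmw":    (10.0, 200.0),
    "stereo": (10.0, 200.0),
}

def remove_invalid(detections):
    """Keep only detections whose range lies inside the sensor's valid interval.

    `detections` is an iterable of (sensor_name, range_m, bearing_deg) tuples;
    hull/wake reflections at very short range and far-range noise fall outside
    the interval and are discarded.
    """
    valid = []
    for sensor, range_m, bearing_deg in detections:
        lo, hi = VALID_RANGE_M[sensor]
        if lo < range_m < hi:
            valid.append((sensor, range_m, bearing_deg))
    return valid
```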
Further, in step S300, the method for acquiring the basic probability distribution function m of n sensors includes the following steps:
S310, dividing the grid map into areas R1, …, Rk according to the detection ranges of the sensors;
S320, determining probability distribution of obstacle attribute of each sensor in the corresponding area according to detection accuracy of each sensor in the corresponding area;
S330, obtaining the basic probability distribution functions m1, …, mn of the sensors according to the probability distribution of the obstacle attribute of each sensor in each area.
In step S310, the principle of area division is to partition the map according to the intersections of the detection ranges of the sensors. For example, when the sensors are a three-dimensional lidar, a millimeter wave radar and a binocular vision camera: the ranging range of the three-dimensional lidar is 100 m with a 360° horizontal field of view, and it continuously acquires a three-dimensional laser point cloud of the surrounding environment at a certain rotation frequency; the ranging range of the millimeter wave radar can reach 250 m with a horizontal field of view of ±4° to ±9°, and it measures target distance, azimuth and the like simultaneously within one measurement period; the effective ranging range of the binocular vision camera is about 200 m with a horizontal field of view of 33°. The grid map is thereby divided into four areas R1, R2, R3 and R4 as shown in fig. 5, where R1 represents the area in which detection data from all three sensors (three-dimensional lidar, millimeter wave radar and binocular vision camera) may exist, R2 represents the area in which only detection data from the millimeter wave radar and the binocular vision camera may exist, R3 represents the area in which only detection data from the three-dimensional lidar may exist, and R4 represents the area with no detection data.
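For illustration, a hedged helper that assigns a cell to one of these regions from its range and bearing; the bow-relative bearing convention and the exact field-of-view bounds (360° out to 100 m for the lidar, ±9° for the radar and ±16.5°, half of 33°, for the camera out to 200 m) are assumptions based on the angles quoted above, and the partition is a simplification of the geometry in fig. 5.

```python
def region_of(range_m, bearing_deg):
    """Assign a grid cell to one of the regions R1..R4 from its range and bearing.

    Assumptions of this sketch: bearing is measured from the bow; the lidar
    covers 360 deg out to 100 m; the millimeter wave radar covers +/-9 deg and
    the stereo camera +/-16.5 deg, both out to 200 m.
    """
    in_lidar  = range_m <= 100.0
    in_radar  = range_m <= 200.0 and abs(bearing_deg) <= 9.0
    in_stereo = range_m <= 200.0 and abs(bearing_deg) <= 16.5
    if in_lidar and in_radar and in_stereo:
        return "R1"      # all three sensors can observe the cell
    if in_radar and in_stereo and not in_lidar:
        return "R2"      # only radar and camera coverage (beyond lidar range)
    if in_lidar:
        return "R3"      # only lidar coverage
    return "R4"          # no sensor coverage
```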
In step S320, the probability distribution fully considers the detection accuracy of each sensor in each area. Taking the three-dimensional lidar, the millimeter wave radar and the binocular vision camera as an example, the specific distribution process is as follows:
at R 1 In the area, since the range accuracy of the lidar is higher and the determination of the obstacle attribute of the grid is larger, if there is lidar detection data in the grid of the area, the probability distribution of the obstacle attribute of the grid determined as O is 0.9, the probability distribution of the obstacle attribute determined as P is 0.1, and if no lidar detection is performedMeasuring data, wherein the probability distribution of the barrier attribute of the grid is judged to be O to be 0.2, and the probability distribution of the barrier attribute of the grid is judged to be P to be 0.8; in the grids of the area, if millimeter wave radar detection data exist, the probability distribution of the barrier attribute of the grid is judged to be O to be 0.7, the probability distribution of the barrier attribute of the grid is judged to be P to be 0.3, and if no millimeter wave radar detection data exist, the probability distribution of the barrier attribute of the grid is judged to be O to be 0.4, and the probability distribution of the barrier attribute of the grid is judged to be P to be 0.6; in the grid of the area, if binocular vision detection data is present, the probability distribution of the barrier property of the grid for O is 0.8, the probability distribution of P is 0.2, and if binocular vision detection data is not present, the probability distribution of the barrier property of the grid for O is 0.4, and the probability distribution of P is 0.6. At R 2 The area is beyond the effective detection range of the used laser radar, and the barrier attribute of the grid is determined by binocular vision and millimeter wave radar, so that the probability distribution of the barrier attribute of the grid is 0.5 and the probability distribution of P is 0.5 in the grid of the area regardless of whether laser radar detection data exist or not; in the grids of the area, if millimeter wave radar detection data exist, the probability distribution of the barrier attribute of the grid is judged to be O to be 0.7, the probability distribution of the barrier attribute of the grid is judged to be P to be 0.3, and if no millimeter wave radar detection data exist, the probability distribution of the barrier attribute of the grid is judged to be O to be 0.4, and the probability distribution of the barrier attribute of the grid is judged to be P to be 0.6; in the grid of the area, if binocular vision detection data is present, the probability distribution of the barrier property of the grid for O is 0.8, the probability distribution of P is 0.2, and if binocular vision detection data is not present, the probability distribution of the barrier property of the grid for O is 0.4, and the probability distribution of P is 0.6. 
In the R3 area, the obstacle attribute of a grid is determined only by the three-dimensional lidar. Therefore, in a grid of this area, if lidar detection data exist, the probability assigned to O is 0.9 and the probability assigned to P is 0.1; if no lidar detection data exist, the probability assigned to O is 0.2 and the probability assigned to P is 0.8. In addition, regardless of whether millimeter wave radar or binocular vision detection data exist, the probability assigned to O is 0.5 and the probability assigned to P is 0.5.
The basic probability distribution function of each sensor is obtained from the above probability distribution. Continuing with the three-dimensional lidar, the millimeter wave radar and the binocular vision camera as examples, Table 1 shows the basic probability distribution functions for the three detection regions R1, R2 and R3, where ml1, mm1 and ms1 are the basic probability distribution functions of the three-dimensional lidar, the millimeter wave radar and binocular vision respectively when detection data exist, and ml2, mm2 and ms2 are the basic probability distribution functions of the three sensors respectively when no detection data exist.
TABLE 1 Basic probability distribution functions of the evidences in the different detection regions
Region R1: ml1(O)=0.9, ml1(P)=0.1; ml2(O)=0.2, ml2(P)=0.8; mm1(O)=0.7, mm1(P)=0.3; mm2(O)=0.4, mm2(P)=0.6; ms1(O)=0.8, ms1(P)=0.2; ms2(O)=0.4, ms2(P)=0.6
Region R2: lidar (with or without detection data): m(O)=0.5, m(P)=0.5; mm1(O)=0.7, mm1(P)=0.3; mm2(O)=0.4, mm2(P)=0.6; ms1(O)=0.8, ms1(P)=0.2; ms2(O)=0.4, ms2(P)=0.6
Region R3: ml1(O)=0.9, ml1(P)=0.1; ml2(O)=0.2, ml2(P)=0.8; millimeter wave radar and binocular vision (with or without detection data): m(O)=0.5, m(P)=0.5
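For reference, the sketch below encodes these per-region assignments as a Python dictionary; the sensor keys and the True/False (detection present / absent) layout are conventions of this example, not part of the patent.

```python
# Basic probability assignments per region, keyed by sensor name.
# Each inner entry maps "detection present?" (True/False) to (m(O), m(P)),
# using the values of Table 1.
BPA_TABLE = {
    "R1": {
        "lidar":  {True: (0.9, 0.1), False: (0.2, 0.8)},
        "mmw":    {True: (0.7, 0.3), False: (0.4, 0.6)},
        "stereo": {True: (0.8, 0.2), False: (0.4, 0.6)},
    },
    "R2": {
        "lidar":  {True: (0.5, 0.5), False: (0.5, 0.5)},   # lidar is uninformative here
        "mmw":    {True: (0.7, 0.3), False: (0.4, 0.6)},
        "stereo": {True: (0.8, 0.2), False: (0.4, 0.6)},
    },
    "R3": {
        "lidar":  {True: (0.9, 0.1), False: (0.2, 0.8)},
        "mmw":    {True: (0.5, 0.5), False: (0.5, 0.5)},   # outside radar coverage
        "stereo": {True: (0.5, 0.5), False: (0.5, 0.5)},   # outside camera coverage
    },
}
```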
Further, in step S400, determining the obstacle attribute of each grid according to the detection data of the n sensors and the basic probability distribution function of the n sensors includes the following steps:
S410, determining the basic probability distribution functions m1, …, mn corresponding to the evidences in the grid according to the evidence information provided by the detection data of the n sensors;
S420, calculating the combined basic probability distribution function values m(O) and m(P) in the grid according to the basic probability distribution functions of the n sensors;
S430, judging whether m(O) > m(P); if so, judging that the grid is an obstacle area, and if not, judging that the grid is a passable area.
In step S420, the combined basic probability distribution function is calculated according to the following formula:
m(A) = (1/(1-K)) · Σ_{A1∩A2∩…∩An=A} m1(A1)·m2(A2)·…·mn(An), for A ≠ ∅, and m(∅) = 0,
wherein K is an inconsistency factor reflecting the degree of conflict between the detection results of the respective sensors; the larger K is, the stronger the conflict.
K is calculated by the following formula:
K = Σ_{A1∩A2∩…∩An=∅} m1(A1)·m2(A2)·…·mn(An).
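Because the identification frame here contains only the two singletons O and P, Dempster's rule reduces to a few lines of code. The sketch below combines any number of (m(O), m(P)) pairs; the function name and list-based interface are assumptions of this example.

```python
def combine_dempster(bpas):
    """Combine a list of (m(O), m(P)) pairs with Dempster's rule of combination.

    With the frame {O, P} and mass only on the singletons, the agreeing
    combinations are "all evidences say O" and "all evidences say P";
    everything else contributes to the conflicting mass K.
    """
    prod_O = 1.0
    prod_P = 1.0
    for m_O, m_P in bpas:
        prod_O *= m_O
        prod_P *= m_P
    K = 1.0 - prod_O - prod_P           # total conflicting mass
    if K >= 1.0:
        raise ValueError("total conflict: the evidences cannot be combined")
    return prod_O / (1.0 - K), prod_P / (1.0 - K)
```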
the process of combining the basic probability distribution functions is described below by a specific example of a grid.
If detection data of the three-dimensional lidar, the millimeter wave radar and binocular vision all exist in a certain grid of the R1 area, the basic probability distribution functions in the grid are: for the lidar, ml1(O)=0.9 and ml1(P)=0.1; for the millimeter wave radar, mm1(O)=0.7 and mm1(P)=0.3; for binocular vision, ms1(O)=0.8 and ms1(P)=0.2. The combined probability distribution function in the grid is then calculated as follows:
I. First calculate K:
K = 1 - ml1(O)·mm1(O)·ms1(O) - ml1(P)·mm1(P)·ms1(P) = 1 - 0.9×0.7×0.8 - 0.1×0.3×0.2 = 1 - 0.504 - 0.006 = 0.49
II. Calculate the combined basic probability distribution function value for O:
m(O) = ml1(O)·mm1(O)·ms1(O) / (1 - K) = 0.504 / 0.51 ≈ 0.988
III. Calculate the combined basic probability distribution function value for P:
m(P) = ml1(P)·mm1(P)·ms1(P) / (1 - K) = 0.006 / 0.51 ≈ 0.012
Since m(O) > m(P), the grid is an obstacle region.
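For a quick check, feeding the three assignments of this example into the hypothetical combine_dempster sketch above reproduces the same decision:

```python
masses = [(0.9, 0.1),   # three-dimensional lidar
          (0.7, 0.3),   # millimeter wave radar
          (0.8, 0.2)]   # binocular vision
m_O, m_P = combine_dempster(masses)
# K = 1 - 0.9*0.7*0.8 - 0.1*0.3*0.2 = 0.49
# m(O) = 0.504 / 0.51 ≈ 0.988, m(P) = 0.006 / 0.51 ≈ 0.012
print(m_O, m_P, "obstacle" if m_O > m_P else "passable")
```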
The grid map is traversed to obtain the complete unmanned ship obstacle fusion grid result. Fig. 6 (a) and (b) show the fusion detection results for obstacle scenes I and II respectively. In scene I, the two close-range targets (a boat and a buoy) are accurately perceived: the advantage of the three-dimensional lidar at close range is exploited, the missed detection of the millimeter wave radar is compensated, and the false detections of binocular vision are avoided. In scene II, the large boat target basically exceeds the ranging range of the three-dimensional lidar, but through the fusion of the three sensors it can still be perceived at a longer distance (about 100 m to 200 m) ahead, while the false detections of binocular vision are again avoided.
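As an illustration of how the pieces fit together, a per-cell fusion loop over the whole map might look as follows; it reuses the hypothetical region_of, BPA_TABLE and combine_dempster sketches from above, and the bookkeeping of detections per cell is an assumption of this example.

```python
def fuse_grid(cell_geometry, cell_detections):
    """Label every grid cell as obstacle ('O') or passable ('P').

    cell_geometry maps (row, col) -> (range_m, bearing_deg) of the cell centre
    relative to the ship; cell_detections maps (row, col) -> set of sensor
    names with a valid detection in that cell. Both inputs are assumptions of
    this sketch.
    """
    labels = {}
    for cell, (range_m, bearing_deg) in cell_geometry.items():
        region = region_of(range_m, bearing_deg)
        if region == "R4":
            labels[cell] = "P"          # no coverage: m(O) = m(P), treated as passable
            continue
        hits = cell_detections.get(cell, set())
        bpas = [BPA_TABLE[region][s][s in hits] for s in ("lidar", "mmw", "stereo")]
        m_O, m_P = combine_dempster(bpas)
        labels[cell] = "O" if m_O > m_P else "P"
    return labels
```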
The fusion detection method combines the complementary characteristics of multiple detection means and uses the multi-source heterogeneous detection data of n different types of sensors to realize fusion detection of unmanned ship obstacles; it fully exploits the complementarity among different obstacle detection sensors, enriches the obstacle detection information of the unmanned ship, effectively reduces false detections and missed detections of a single sensor, and greatly improves the reliability of unmanned ship obstacle detection.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
While the foregoing embodiments of the present invention have been described in conjunction with the accompanying drawings, they are not intended to limit the scope of the present disclosure, and it will be apparent to those skilled in the art that various modifications or variations can be made on the basis of the technical solutions of the invention without inventive effort.

Claims (6)

1. An unmanned ship obstacle fusion detection method based on evidence theory, wherein a sensor detection system is arranged on the unmanned ship, the sensor detection system comprises n different types of sensors for detecting obstacles, and n is an integer greater than or equal to 2, and the method is characterized by comprising the following steps:
constructing a grid map of the area where the unmanned ship is located with the position of the unmanned ship as the center, and setting the identification frame of each grid as Θ = {O, P}, wherein O and P respectively represent the obstacle area and the passable area;
acquiring detection data of the n sensors in the sensor detection system and taking the detection data as evidence for judging the obstacle attribute of each grid;
determining a basic probability distribution function m of each evidence, wherein the basic probability distribution function m satisfies the following formula:
m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1,
wherein A is a subset of the identification frame Θ;
judging the obstacle attribute of each grid according to the detection data of the n sensors and the basic probability distribution functions of the n sensors, so as to obtain the obstacle condition of the area where the unmanned ship is located;
the method for acquiring the basic probability distribution function m of the n sensors comprises the following steps:
dividing the grid map into four regions R1, R2, R3 and R4, wherein R1 represents the area in which detection data from all three sensors (three-dimensional lidar, millimeter wave radar and binocular vision camera) may exist, R2 represents the area in which only detection data from the millimeter wave radar and the binocular vision camera may exist, R3 represents the area in which only detection data from the three-dimensional lidar may exist, and R4 represents the area with no detection data;
determining the probability distribution of the obstacle attribute of each sensor in the corresponding area according to the detection accuracy of each sensor in the corresponding area;
obtaining the basic probability distribution functions m1, …, m4 of the sensors according to the probability distribution of the obstacle attribute of each sensor in the corresponding area.
2. The unmanned ship obstacle fusion detection method according to claim 1, wherein determining the obstacle attribute of each grid from the detection data of the n sensors and the basic probability distribution function of the n sensors comprises the steps of:
according to the evidence information provided by the detection data of the n sensors, determining the basic probability distribution functions m1, …, mn corresponding to the evidences in the grid;
calculating the combined basic probability distribution function values m(O) and m(P) in the grid according to the basic probability distribution functions of the n sensors;
judging whether m(O) is greater than m(P); if so, judging that the grid is an obstacle area, and if not, judging that the grid is a passable area.
3. The unmanned ship obstacle fusion detection method according to claim 2, wherein the combined basic probability distribution function is calculated by the following formula:
m(A) = (1/(1-K)) · Σ_{A1∩A2∩…∩An=A} m1(A1)·m2(A2)·…·mn(An), for A ≠ ∅, and m(∅) = 0,
wherein K is an inconsistency factor for reflecting the degree of conflict between the detection results of the respective sensors.
4. The unmanned ship obstacle fusion detection method according to claim 3, wherein the inconsistency factor K is calculated by the formula
K = Σ_{A1∩A2∩…∩An=∅} m1(A1)·m2(A2)·…·mn(An).
5. The unmanned ship obstacle fusion detection method according to any one of claims 1 to 4, wherein after the detection data of n sensors in the sensor detection system are acquired, invalid data in the detection data are removed to obtain valid data, and obstacle properties of each grid are determined according to the valid data.
6. The unmanned ship obstacle fusion detection method according to claim 5, wherein the invalid data is removed by defining a ranging range threshold for each sensor.
CN202011122117.9A 2020-10-20 2020-10-20 Unmanned ship obstacle fusion detection method based on evidence theory Active CN112394726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011122117.9A CN112394726B (en) 2020-10-20 2020-10-20 Unmanned ship obstacle fusion detection method based on evidence theory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011122117.9A CN112394726B (en) 2020-10-20 2020-10-20 Unmanned ship obstacle fusion detection method based on evidence theory

Publications (2)

Publication Number Publication Date
CN112394726A CN112394726A (en) 2021-02-23
CN112394726B true CN112394726B (en) 2023-08-04

Family

ID=74596042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011122117.9A Active CN112394726B (en) 2020-10-20 2020-10-20 Unmanned ship obstacle fusion detection method based on evidence theory

Country Status (1)

Country Link
CN (1) CN112394726B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963327B (en) * 2021-09-06 2023-09-08 阿波罗智能技术(北京)有限公司 Obstacle detection method, obstacle detection device, autonomous vehicle, apparatus, and storage medium
CN114111489A (en) * 2021-11-05 2022-03-01 重庆望江工业有限公司 Amphibious unmanned aerial vehicle manipulator barrier breaking equipment
CN114671380B (en) * 2022-03-23 2023-12-29 湖南星邦智能装备股份有限公司 Multi-sensor data fusion-based anti-collision method and system for overhead working truck
CN115639536B (en) * 2022-11-18 2023-03-21 陕西欧卡电子智能科技有限公司 Unmanned ship perception target detection method and device based on multi-sensor fusion
CN116358561B (en) * 2023-05-31 2023-08-15 自然资源部第一海洋研究所 Unmanned ship obstacle scene reconstruction method based on Bayesian multi-source data fusion
CN116908836B (en) * 2023-07-13 2024-03-08 大连海事大学 USV environment sensing method integrating multi-sensor information

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636763A (en) * 2014-12-01 2015-05-20 北京工业大学 Road and obstacle detecting method based on remotely piloted vehicles
CN104657740A (en) * 2013-11-21 2015-05-27 罗伯特·博世有限公司 Method and apparatus for segmenting an occupancy grid for a surroundings model of a driver assistance system for a vehicle
CN109283538A (en) * 2018-07-13 2019-01-29 上海大学 A kind of naval target size detection method of view-based access control model and laser sensor data fusion
CN109544990A (en) * 2018-12-12 2019-03-29 惠州市德赛西威汽车电子股份有限公司 A kind of method and system that parking position can be used based on real-time electronic map identification
CN110531781A (en) * 2019-08-21 2019-12-03 重庆大学 A kind of method of determining overhead transmission line and civilian unmanned plane safe distance
CN110909671A (en) * 2019-11-21 2020-03-24 大连理工大学 Grid map obstacle detection method integrating probability and height information
CN111532274A (en) * 2020-02-28 2020-08-14 南京航空航天大学 Intelligent vehicle lane change auxiliary system and method based on multi-sensor data fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9785846B2 (en) * 2015-12-23 2017-10-10 Automotive Research & Test Center Method for quantifying classification confidence of obstructions

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657740A (en) * 2013-11-21 2015-05-27 罗伯特·博世有限公司 Method and apparatus for segmenting an occupancy grid for a surroundings model of a driver assistance system for a vehicle
CN104636763A (en) * 2014-12-01 2015-05-20 北京工业大学 Road and obstacle detecting method based on remotely piloted vehicles
CN109283538A (en) * 2018-07-13 2019-01-29 上海大学 A kind of naval target size detection method of view-based access control model and laser sensor data fusion
CN109544990A (en) * 2018-12-12 2019-03-29 惠州市德赛西威汽车电子股份有限公司 A kind of method and system that parking position can be used based on real-time electronic map identification
CN110531781A (en) * 2019-08-21 2019-12-03 重庆大学 A kind of method of determining overhead transmission line and civilian unmanned plane safe distance
CN110909671A (en) * 2019-11-21 2020-03-24 大连理工大学 Grid map obstacle detection method integrating probability and height information
CN111532274A (en) * 2020-02-28 2020-08-14 南京航空航天大学 Intelligent vehicle lane change auxiliary system and method based on multi-sensor data fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Map construction method for underwater robots based on D-S information fusion; 朱大奇 (Zhu Daqi); 系统仿真技术 (System Simulation Technology); 2012-07-28; abstract, pages 182-183, sections 2.2 to 3 *

Also Published As

Publication number Publication date
CN112394726A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN112394726B (en) Unmanned ship obstacle fusion detection method based on evidence theory
KR20220155559A (en) Autonomous navigation method using image segmentation
CN112882059B (en) Unmanned ship inland river obstacle sensing method based on laser radar
Sato et al. Multilayer lidar-based pedestrian tracking in urban environments
CN113139607B (en) Obstacle detection method and device
US20110282581A1 (en) Object and vehicle detection and tracking using 3-d laser rangefinder
JP6450294B2 (en) Object detection apparatus, object detection method, and program
CN109583416B (en) Pseudo lane line identification method and system
WO2020029706A1 (en) Dummy lane line elimination method and apparatus
CN113031004B (en) Unmanned ship water surface target detection and path planning method based on three-dimensional laser radar
CN113177593B (en) Fusion method of radar point cloud and image data in water traffic environment
CN112744217B (en) Collision detection method, travel path recommendation device, and storage medium
KR102530847B1 (en) Method and device for monitoring harbor and ship
KR102466804B1 (en) Autonomous navigation method using image segmentation
Kim et al. Artificial intelligence vision-based monitoring system for ship berthing
CN111913177A (en) Method and device for detecting target object and storage medium
Clunie et al. Development of a perception system for an autonomous surface vehicle using monocular camera, lidar, and marine radar
CN111999744A (en) Unmanned aerial vehicle multi-azimuth detection and multi-angle intelligent obstacle avoidance method
CN110596728A (en) Water surface small target detection method based on laser radar
Park et al. Autonomous collision avoidance for unmanned surface ships using onboard monocular vision
Wang et al. Estimation of ship berthing parameters based on Multi-LiDAR and MMW radar data fusion
CN112711027A (en) Tunnel internal transverse positioning method based on laser radar point cloud data
Yao et al. LiDAR-based simultaneous multi-object tracking and static mapping in nearshore scenario
Labayrade et al. Robust and fast stereovision based obstacles detection for driving safety assistance
CN111781606A (en) Novel miniaturization implementation method for fusion of laser radar and ultrasonic radar

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant