CN110580043B - Water surface target avoidance method based on image target identification - Google Patents


Info

Publication number
CN110580043B
CN110580043B (application CN201910738989.9A)
Authority
CN
China
Prior art keywords
ship
obstacle
image
skyline
distance
Prior art date
Legal status
Active
Application number
CN201910738989.9A
Other languages
Chinese (zh)
Other versions
CN110580043A (en)
Inventor
鄢社锋
王凡
徐立军
汪嘉宁
Current Assignee
Zhejiang Wanghaichao Technology Co ltd
Institute of Acoustics CAS
Original Assignee
Institute of Oceanology of CAS
Institute of Acoustics CAS
Priority date
Filing date
Publication date
Application filed by Institute of Oceanology of CAS, Institute of Acoustics CAS filed Critical Institute of Oceanology of CAS
Priority to CN201910738989.9A
Publication of CN110580043A
Application granted
Publication of CN110580043B

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/0206: Control of position or course in two dimensions specially adapted to water vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a water surface target avoidance method based on image target identification, which comprises the following steps: step 1) acquire images of the water surface around the ship through a 360-degree full-view camera system, and perform distortion and tilt correction on the images; step 2) identify the skyline in the corrected images and detect whether an obstacle exists using a target detection model; if an obstacle exists in the image, go to step 3), otherwise go to step 5); step 3) calculate the distance and size of the obstacle from the skyline and the obstacle's position in the image, and estimate the obstacle's movement track from multiple frames; step 4) judge whether the obstacle and the ship may collide; if so, go to step 6), otherwise go to step 5); step 5) judge whether the current route was planned after an obstacle-avoidance operation; if so, keep the route unchanged, otherwise replan the route to the destination; and step 6) adjust the ship's route.

Description

Water surface target avoidance method based on image target identification
Technical Field
The invention relates to the field of signal processing, in particular to a water surface target avoidance method based on image target identification.
Background
When a ship sails on the water surface, accidents or human error can cause it to collide with other ships, fixed structures, or floating objects, leading to casualties or property loss. Collision avoidance between ships and the avoidance of surrounding obstacles are therefore an important guarantee of navigation safety. Besides improving the skill and safety awareness of the crew, improving a ship's ability to evade other ships or obstacles by technical means such as collision-avoidance methods or collision-avoidance control strategies is another effective way to reduce collision accidents, and such technology is also one of the core technologies of autonomous navigation for unmanned ships.
Ship obstacle avoidance is generally carried out in two steps: first, obstacles are detected and localized with sensors such as radar and cameras, or by fusing information from multiple sensors; then an obstacle-avoidance algorithm plans a route that lets the ship bypass the obstacles, reach its destination, and keep a safe distance from them along the way. Commonly used obstacle-avoidance algorithms include the artificial potential field method, the A* algorithm, and angle-control methods.
According to the type of obstacle, ship obstacle avoidance can be divided into static and dynamic obstacle avoidance. Static obstacles are mainly objects with fixed positions, such as reefs, wharves, and platforms fixed on the water; after detecting one, the ship adjusts its original route to bypass it and reach the destination. Many mature methods for static obstacle avoidance already exist and are widely applied. Dynamic obstacles are other ships or floating objects whose positions change in the actual environment; compared with static obstacles, they are harder to avoid, and static obstacle-avoidance algorithms are often unsuitable for them. Dynamic obstacle avoidance must not only detect the obstacle and plan a route, but also keep monitoring the obstacle while passing it and adjust the heading at any time to avoid collision. At present there is no mature method for dynamic obstacle avoidance.
Disclosure of Invention
The invention aims to overcome the above technical defects and provides a water surface target avoidance method based on image target identification. Compared with the prior art, the invention has the advantages of a large monitoring range, low algorithmic complexity, a simple hardware structure and low cost, and can be widely applied to ship obstacle avoidance, automatic driving of unmanned ships, and the like.
In order to achieve the above object, the present invention provides a water surface target avoidance method based on image target recognition, implemented by installing a 360-degree full-view camera system on a sailing ship. The camera system comprises 4 cameras mounted in the same plane at the highest position of the hull, facing the four directions front, back, left and right, wherein the method comprises the following steps:
step 1) acquiring images of the water surface around a ship through a 360-degree full-view camera system, and performing distortion correction and inclination correction on the images;
step 2) identifying the skyline in the corrected image and whether an obstacle exists or not by using the target detection model, if the obstacle exists in the image, entering the step 3), and if not, entering the step 5);
step 3) calculating the distance and the size of the obstacle by using the skyline and the position of the obstacle in the image, and estimating the moving track of the obstacle by using the multi-frame image;
step 4) judging whether the obstacle and the ship are likely to collide, if so, turning to step 6); otherwise, turning to step 5);
step 5) judging whether the current route was planned after an obstacle-avoidance operation; if so, keeping the original speed and heading; otherwise, replanning the route to the destination and sailing along the new route;
and 6) adjusting the ship route to keep the safe distance between the ship and the obstacle.
As a modification of the above method, the step 1) is preceded by establishing and training the target detection model, specifically comprising:
step S1) establishing the target detection model: the model keeps the first 5 groups of convolutional layers of a VGG (Visual Geometry Group) network and converts the network's sixth and seventh fully connected layers into 2 convolutional layers using the atrous (dilated convolution) algorithm; in addition, 3 convolutional layers of different scales and 1 average pooling layer are added, and the different convolutional layers are used to predict the offsets of the default boxes and the scores of the different categories respectively; the final target detection result is obtained through a non-maximum suppression algorithm;
step S2) collecting typical water surface target images under various weather and illumination conditions and labelling each image to form the training samples;
step S3) the water surface target in each training image has a corresponding label, which is assigned to a specific output of the fixed set of detector outputs; the loss function is then computed end to end and back-propagated, and the network parameters are adjusted by stochastic gradient descent to obtain the trained target detection model.
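The non-maximum suppression step named in step S1) can be sketched as a short greedy routine. This is a minimal illustration, not the patent's implementation; the (x1, y1, x2, y2) box format and the 0.5 overlap threshold are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring boxes, discarding any box that overlaps an
    already-kept box by iou_thresh or more."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Detector-specific details (default-box decoding, per-class score thresholds) would sit in front of this routine.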
As an improvement of the above method, before the step 1) the method further comprises a step of calibrating the 360-degree full-view camera system, specifically comprising:
step T1) performing distortion correction on each of the 4 cameras, and after correction splicing the images of the 4 cameras into a full-view image;
step T2) calibrating the relation between the abscissa of a calibration object in the image and its actual angle: setting the origin of the abscissa of the full-view image, and then photographing several calibration objects whose angles relative to the ship are known, to obtain the calibration relation between the abscissa of a calibration object in the image and its actual angle;
step T3) calibrating the relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image: photographing the skyline while the 360-degree full-view camera system is level, and measuring the distance between the camera system and the sea level, to obtain the calibration relation between the installation height of the camera and the pixel distance between the skyline and the calibration object in the image;
step T4) calibrating the proportional relationship between the pixel distance and the actual distance: the camera parameters f and x are obtained from the following system of equations:

x / f = h₀ / d₀
(x + h) / f = h₀ / d₁

wherein f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h₀ is the camera installation height, d₁ is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d₀ is the distance between the skyline and the ship:

d₀ = √((R + h₀)² − R²)

wherein R is the radius of the earth.
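Under a level pinhole-camera model (an assumption consistent with the symbols f, x, h₀, d₀, d₁ and h defined in the calibration, not spelled out in the patent text), the two calibration equations x/f = h₀/d₀ and (x + h)/f = h₀/d₁ can be solved for f and x in closed form, with the skyline distance d₀ = √((R + h₀)² − R²) from earth curvature:

```python
import math

R_EARTH = 6371000.0  # mean Earth radius in meters (assumed value)

def horizon_distance(h0):
    """Skyline distance d0 for a camera h0 meters above the water:
    d0 = sqrt((R + h0)^2 - R^2)."""
    return math.sqrt((R_EARTH + h0) ** 2 - R_EARTH ** 2)

def calibrate_f_x(h0, d1, h):
    """Solve the step-T4 system
        x / f       = h0 / d0
        (x + h) / f = h0 / d1
    for f (optical-center-to-imaging-plane distance) and x (skyline
    offset from the image center), both in pixel units."""
    d0 = horizon_distance(h0)
    f = h / (h0 * (1.0 / d1 - 1.0 / d0))
    x = f * h0 / d0
    return f, x

# Hypothetical calibration: 5 m mast, buoy 200 m away, 120 px below the skyline.
f, x = calibrate_f_x(h0=5.0, d1=200.0, h=120.0)
```

Substituting f and x back into either equation reproduces the calibration measurement, which is a quick sanity check on the fit.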
As an improvement of the above method, the calibration relation between the abscissa of the calibration object in the image and its actual angle is:

θ = am² + bm + c

wherein θ is the actual angle, m is the abscissa of the calibration object in the image, and a, b and c are parameters whose estimates can be obtained by the least squares method.
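The quadratic angle calibration θ = am² + bm + c is an ordinary least-squares fit. A sketch with `numpy.polyfit`; the m/θ measurements below are illustrative values, not data from the patent:

```python
import numpy as np

# Hypothetical calibration shots: image abscissa m of each calibration
# object and its measured bearing theta (degrees). The values happen to
# lie on theta = 1e-5*m^2 + 0.2*m - 180, purely for illustration.
m = np.array([100.0, 400.0, 800.0, 1200.0, 1600.0])
theta = np.array([-159.9, -98.4, -13.6, 74.4, 165.6])

# Least-squares fit of the quadratic theta = a*m^2 + b*m + c.
a, b, c = np.polyfit(m, theta, deg=2)

def pixel_to_angle(m_pix):
    """Map an image abscissa to a bearing using the fitted calibration."""
    return a * m_pix ** 2 + b * m_pix + c
```

The same call with `deg=2` also fits the height calibration h₀ = e₁h² + e₂h + e₃ of the next paragraph.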
As an improvement of the above method, the calibration relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image is:

h₀ = e₁h² + e₂h + e₃

wherein h₀ is the installation height of the camera, h is the pixel distance between the skyline and the calibration object in the image, and estimates of the parameters e₁, e₂ and e₃ can be obtained by the least squares method.
As an improvement of the above method, the tilt correction specifically comprises: rotating the image so that its skyline remains horizontal, with the ordinate of the rotated skyline set to the mean of the skyline's ordinates before rotation.
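The tilt correction reduces to two numbers per frame: the roll angle that levels the detected skyline, and the ordinate the levelled skyline should occupy (the mean of its ordinates before rotation). A sketch, assuming the skyline is available as a list of (x, y) pixel points:

```python
import math

def tilt_correction_params(skyline_pts):
    """Return the rotation angle (radians) that makes the detected skyline
    horizontal, and the ordinate the levelled skyline should sit at:
    the mean of the skyline ordinates before rotation."""
    (x1, y1), (x2, y2) = skyline_pts[0], skyline_pts[-1]
    angle = math.atan2(y2 - y1, x2 - x1)          # skyline slope in the image
    y_level = sum(y for _, y in skyline_pts) / len(skyline_pts)
    return -angle, y_level                        # rotate by -angle to level
```

The pixel-level rotation itself could then be delegated to any image library (e.g. `scipy.ndimage.rotate`); the patent does not name one.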
As a modification of the above method, the step 3) specifically comprises:
step 3-1) calculating the actual height h₁ of the current camera:

h₁ = e₁h̃² + e₂h̃ + e₃

wherein h̃ is the pixel distance between the obstacle and the skyline in the corrected image;
step 3-2) calculating the distance between the obstacle and the ship by the following formula:

d̃₁ = f·h₁ / (x + h̃)

wherein d̃₁ is the distance of the obstacle from the ship;
step 3-3) calculating a scaling factor of the obstacle and estimating the size of the obstacle from it;
step 3-4) estimating the speed and heading of the obstacle relative to the ship from the changes of its actual angle and distance over consecutive frames, thereby obtaining the movement track of the obstacle; wherein the actual angle θ̃ of the obstacle is:

θ̃ = am̃² + bm̃ + c

wherein m̃ is the abscissa of the obstacle in the corrected image.
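Steps 3-1) to 3-4) combine the three calibrations into one per-frame estimate of range, bearing and size scale. In the sketch below, the height expression h1 = e1*h^2 + e2*h + e3 and the range expression d1 = f*h1/(x + h) are my reconstruction of the method's formulas from the calibration relations, so treat them as assumptions:

```python
def obstacle_geometry(h_pix, m_pix, f, x, e, abc):
    """Range, bearing and size scale of an obstacle from one corrected frame.

    h_pix -- pixel distance between the obstacle and the skyline
    m_pix -- abscissa of the obstacle in the corrected image
    f, x  -- camera parameters from the step-T4 calibration
    e     -- (e1, e2, e3) of the height calibration h1 = e1*h^2 + e2*h + e3
    abc   -- (a, b, c) of the angle calibration theta = a*m^2 + b*m + c
    """
    e1, e2, e3 = e
    a, b, c = abc
    h1 = e1 * h_pix ** 2 + e2 * h_pix + e3    # step 3-1: current camera height
    d1 = f * h1 / (x + h_pix)                 # step 3-2: obstacle range
    theta = a * m_pix ** 2 + b * m_pix + c    # step 3-4: obstacle bearing
    return d1, theta, d1 / f                  # step 3-3: size per pixel
```

Tracking (d1, theta) over consecutive frames then gives the relative speed and heading of step 3-4).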
As an improvement of the above method, whether the movement track of the obstacle may collide with the ship's route is judged as follows:
draw the movement track of the obstacle and the ship's route, and check whether the two intersect. If there is no intersection point, the obstacle and the ship cannot collide; otherwise, calculate the times at which the obstacle and the ship respectively reach the intersection point. If the difference between the two times is greater than a first threshold, the obstacle and the ship cannot collide; otherwise a collision is possible. The first threshold ranges from 10 to 15 minutes.
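For straight-line tracks, the intersection test can be sketched by solving p + t·v = q + s·u for the two arrival times t and s and comparing their difference with the first threshold (600 s here, the lower end of the stated 10 to 15 minute range). Positions and velocities are in consistent units; the straight-line track is my simplifying assumption:

```python
def may_collide(ship_pos, ship_vel, obs_pos, obs_vel, threshold_s=600.0):
    """True if the two straight-line tracks cross ahead of both craft and
    the arrival times at the crossing differ by at most threshold_s."""
    (px, py), (vx, vy) = ship_pos, ship_vel
    (qx, qy), (ux, uy) = obs_pos, obs_vel
    det = -vx * uy + ux * vy                  # determinant of [v, -u]
    if abs(det) < 1e-12:
        return False                          # parallel tracks never cross
    rx, ry = qx - px, qy - py
    t = (-rx * uy + ux * ry) / det            # ship's time to the crossing
    s = (vx * ry - vy * rx) / det             # obstacle's time to the crossing
    if t < 0.0 or s < 0.0:
        return False                          # crossing lies behind a track
    return abs(t - s) <= threshold_s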
As an improvement of the above method, the step 6) specifically comprises:
adjusting the ship's speed so that the absolute value of the difference between the time at which the ship would have reached the intersection point before the adjustment and the time at which it reaches the intersection point after the adjustment is greater than a preset time;
if the required speed adjustment is less than or equal to a second threshold, keeping the course unchanged and sailing at the adjusted speed, the second threshold ranging from 30% to 50% of the normal sailing speed;
if the required speed adjustment exceeds the second threshold, keeping the original speed and adjusting the ship's course onto the shorter of the two tangents from the ship's current position to a circle centered at the intersection point with a preset radius.
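The heading adjustment in the last case is the tangent from the ship's current position to a safety circle around the predicted intersection point. A sketch of that tangent-bearing geometry; returning both tangents is my choice (the method then keeps the one with the smaller course change):

```python
import math

def avoidance_headings(ship_pos, crossing, radius):
    """Bearings (radians) of the two tangents from the ship's position to
    the safety circle of the given radius around the predicted crossing
    point."""
    dx, dy = crossing[0] - ship_pos[0], crossing[1] - ship_pos[1]
    d = math.hypot(dx, dy)
    if d <= radius:
        raise ValueError("ship is already inside the safety circle")
    bearing = math.atan2(dy, dx)              # bearing to the crossing point
    half_angle = math.asin(radius / d)        # circle's angular half-width
    return bearing - half_angle, bearing + half_angle
```

With the crossing 1000 m dead ahead and a 500 m safety radius, the tangents lie 30 degrees to either side of the original course.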
Compared with the prior art, the invention has the following advantages:
1. Compared with binocular-vision obstacle avoidance, the 360-degree full-view camera system monitors a larger range and enables 360-degree collision avoidance; at the same time, the distance, size, direction and speed of dynamic obstacles such as ships are measured by image recognition, which is simpler and responds faster;
2. Compared with radar-based obstacle avoidance, the 360-degree full-view camera system provides more information about the environment and the obstacles, so obstacle judgement is more accurate;
3. Compared with multi-sensor obstacle avoidance, the hardware structure is simpler and the cost is lower;
4. Compared with binocular or multi-camera distance measurement, the invention realizes monocular distance measurement of the obstacle using the skyline, with lower hardware complexity and cost.
Drawings
FIG. 1 is a flow chart of a water surface target avoidance method based on image target recognition.
Detailed Description
The technical solutions of the present invention are further described below with reference to the drawings and examples, but the embodiments of the present invention are not limited thereto.
The embodiment of the invention is applied to an unmanned ship capable of autonomous navigation. One camera is mounted in each of the four directions front, back, left and right, in the same plane at the highest position of the hull, and the combined fields of view of the 4 cameras cover the full 360-degree view around the ship. With this combined full-view camera system, the embodiment implements the water surface target avoidance method based on image target recognition through the following steps; the flow of the method is shown in FIG. 1:
step 1: and establishing a target detection model for identifying the water surface target image, and calibrating the 360-degree full-view camera system.
The target detection model for identifying water surface target images is built on the SSD (Single Shot MultiBox Detector) model in deep learning. The base network of the SSD model is a VGG16 (Visual Geometry Group, 16-layer) network; the model keeps the first 5 groups of convolutional layers of the VGG and converts fc6 (the 6th fully connected layer) and fc7 (the 7th fully connected layer) into 2 convolutional layers using the atrous algorithm. In addition, 3 convolutional layers of different scales and 1 average pooling layer are added, and the different convolutional layers are used to predict the offsets of the default boxes and the scores of the different categories respectively. Finally, the detection result is obtained through a non-maximum suppression algorithm. In this embodiment, typical water surface target images under various weather and lighting conditions are collected first, and each image is labelled. The water surface target in each image has a corresponding label, which is assigned to a specific output of the fixed set of detector outputs; the loss function is then computed end to end and back-propagated, and the network parameters are adjusted by stochastic gradient descent, finally yielding the trained target detection model.
The calibration of the 360-degree full-view camera system comprises the following steps:
step 1.1: the distortion of the imaging system is corrected.
In the above embodiment, distortion correction is performed on each camera, and then the images of the 4 cameras are spliced into a full view image.
Step 1.2: calibrate the relation between an object's abscissa in the image and its actual angle.
In the above embodiment, the origin of the abscissa of the full-view image is set first, and then several calibration objects with known angles relative to the unmanned ship are photographed, giving the calibration relation between an object's abscissa in the image and its actual angle. A quadratic function θ = am² + bm + c is used to fit this relation, wherein θ is the actual angle and m is the abscissa of the object in the image; estimates of the parameters a, b and c can be obtained by the least squares method.
Step 1.3: calibrate the relation between the height of the skyline in the image and the installation height of the camera system.
In the above embodiment, the skyline is photographed while the camera system is level, and the distance between the camera system and the sea surface is measured at the same time, giving the calibration relation between the installation height of the camera system and the position of the photographed skyline in the image. A quadratic function h₀ = e₁h² + e₂h + e₃ is used to fit this relation, wherein h₀ is the installation height of the camera and h is the pixel distance between the skyline and the calibration object in the image; estimates of the parameters e₁, e₂ and e₃ can be obtained by the least squares method.
Step 1.4: calibrate the proportional relationship between the pixel distance and the actual distance. The camera parameters f and x are obtained from the following system of equations:

x / f = h₀ / d₀
(x + h) / f = h₀ / d₁

wherein f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h₀ is the camera installation height, d₁ is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d₀ is the distance between the skyline and the ship, calculated as

d₀ = √((R + h₀)² − R²)

wherein R is the radius of the earth.
Step 2: while the ship is sailing, collect images of the water surface around the ship with the onboard 360-degree full-view camera system and correct the images for distortion and tilt.
The distortion correction is the same as in step 1.1. The tilt correction rotates the image so that the skyline in it remains horizontal, with the ordinate of the rotated skyline set to the mean of the skyline's ordinates before rotation.
Step 3: identify the skyline in the image and whether an obstacle exists using the target detection model established in step 1; if no obstacle exists in the image, go to step 5, otherwise go to step 4.
The obstacle is detected by the trained SSD network: the captured image is fed into the network, which outputs whether an obstacle exists and, when one does, its position in the image.
Step 4: estimate the distance and size of the obstacle using the skyline and the position of the object in the image, and estimate the speed and direction of travel of the obstacle using multiple frames.
The distance and size of the obstacle are estimated as follows:
Step 4.1: calculate the current camera height h₁ from the relation between the skyline height in the image and the camera installation height calibrated in step 1.3:

h₁ = e₁h̃² + e₂h̃ + e₃

wherein h̃ is the pixel distance between the obstacle and the skyline in the corrected image.
Step 4.2: calculate the distance of the obstacle from the ship:

d̃₁ = f·h₁ / (x + h̃)

wherein d̃₁ is the distance of the obstacle from the ship.
Step 4.3: calculate the scaling factor of the obstacle and estimate its size.
The relative speed and direction of travel of the obstacle with respect to the unmanned ship are then estimated from the changes of the obstacle's angle and distance over consecutive frames.
Step 5: if there is no obstacle, or the obstacle's movement track cannot collide with the ship's route, and either no obstacle-avoidance operation has been performed or the current route was planned after one, keep the original speed and heading. If a collision is impossible but the current route was not planned after an obstacle-avoidance operation, replan the route to the destination and sail along the new route. If the obstacle's movement track may collide with the ship's route, adjust the ship's speed or heading so that a safe distance from the obstacle is maintained.
Whether the movement track of the obstacle may collide with the ship's route is judged as follows: compute the obstacle's movement track from its speed and direction, draw it together with the ship's route, and check whether the two intersect. If there is no intersection point, no collision is possible; otherwise, calculate the times at which the obstacle and the ship respectively reach the intersection point. If the difference between the two times is greater than a preset value, no collision is possible; otherwise a collision is possible. The preset value ranges from 10 to 15 minutes.
When the obstacle's movement track may collide with the ship's route, first adjust the ship's speed so that the absolute value of the difference between the time at which the ship would have reached the intersection point before the adjustment and the time at which it reaches it after the adjustment is greater than a preset time.
If the required speed adjustment is less than or equal to a preset value, keep the course unchanged and sail at the adjusted speed; this preset value ranges from 30% to 50% of the normal sailing speed.
If the required speed adjustment exceeds that value, keep the original speed and adjust the ship's course onto the shorter of the two tangents from the ship's current position to a circle centered at the intersection point with a preset radius.
Step 6: if the ship receives a command to stop sailing, it stops moving; otherwise execution continues from step 2.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (4)

1. A water surface target avoidance method based on image target identification, implemented by installing a 360-degree full-view camera system on a sailing ship, the camera system comprising 4 cameras mounted in the same plane at the highest position of the hull, facing the four directions front, back, left and right, wherein the method comprises the following steps:
step 1) acquiring images of the water surface around a ship through a 360-degree full-view camera system, and performing distortion correction and inclination correction on the images;
step 2) identifying the skyline in the corrected image and whether an obstacle exists or not by using the target detection model, if the obstacle exists in the image, entering the step 3), and if not, entering the step 5);
step 3) calculating the distance and the size of the obstacle by using the skyline and the position of the obstacle in the image, and estimating the moving track of the obstacle by using the multi-frame image;
step 4) judging whether the obstacle and the ship are likely to collide, if so, turning to step 6); otherwise, turning to step 5);
step 5) judging whether the current route was planned after an obstacle-avoidance operation; if so, keeping the original speed and heading; otherwise, replanning the route to the destination and sailing along the new route;
step 6), adjusting the course of the ship to keep the safe distance between the ship and the obstacle;
before the step 1), the method further comprises a step of calibrating the 360-degree full-view camera system, specifically comprising:
step T1) performing distortion correction on each of the 4 cameras, and after correction splicing the images of the 4 cameras into a full-view image;
step T2) calibrating the relation between the abscissa of a calibration object in the image and its actual angle: setting the origin of the abscissa of the full-view image, and then photographing several calibration objects whose angles relative to the ship are known, to obtain the calibration relation between the abscissa of a calibration object in the image and its actual angle;
step T3) calibrating the relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image: photographing the skyline while the 360-degree full-view camera system is level, and measuring the distance between the camera system and the sea level, to obtain the calibration relation between the installation height of the camera and the pixel distance between the skyline and the calibration object in the image;
step T4) calibrating the proportional relationship between the pixel distance and the actual distance: the camera parameters f and x are obtained from the following system of equations:

x / f = h₀ / d₀
(x + h) / f = h₀ / d₁

wherein f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h₀ is the camera installation height, d₁ is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d₀ is the distance between the skyline and the ship:

d₀ = √((R + h₀)² − R²)

wherein R is the radius of the earth;
the calibration relation between the abscissa of the calibration object in the image and its actual angle is:

θ = am² + bm + c

wherein θ is the actual angle, m is the abscissa of the calibration object in the image, and a, b and c are parameters whose estimates can be obtained by the least squares method;
the calibration relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image is:

h0 = e1·h² + e2·h + e3

wherein h0 is the camera installation height, h is the pixel distance between the skyline and the calibration object in the image, and the estimated values of the parameters e1, e2 and e3 can be obtained by the least squares method;
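Both calibration relations (θ = a·m² + b·m + c and h0 = e1·h² + e2·h + e3) are quadratic least-squares fits. A self-contained sketch via the normal equations, stdlib only (the helper names are mine):

```python
def solve3(M, v):
    # Gaussian elimination with partial pivoting for a 3x3 system
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            k = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= k * A[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit y ~ a*x^2 + b*x + c; returns (a, b, c)."""
    s = [sum(x ** k for x in xs) for k in range(5)]   # moment sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    return solve3(M, [t[2], t[1], t[0]])
```

The same routine serves both step T2) (angle vs. abscissa) and step T3) (height vs. pixel distance).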
the tilt correction specifically comprises: rotating the image so that the skyline in the image is horizontal, the ordinate of the rotated skyline being the mean of the ordinates of the skyline before rotation;
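The tilt correction amounts to rotating by the skyline's inclination while keeping the mean ordinate. A small sketch computing both quantities from two detected skyline endpoints (the endpoint-based parametrisation is an assumption):

```python
import math

def tilt_correction(p_left, p_right):
    """Return (rotation angle to apply, skyline ordinate after rotation).

    p_left, p_right: (x, y) endpoints of the detected skyline.
    Rotating the image by the returned angle makes the skyline
    horizontal; its new ordinate is the mean of the old ordinates.
    """
    (x1, y1), (x2, y2) = p_left, p_right
    inclination = math.atan2(y2 - y1, x2 - x1)  # skyline slope angle
    return -inclination, (y1 + y2) / 2.0
```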
the step 3) specifically comprises the following steps:
step 3-1) calculating the actual height h1 of the current camera:

h1 = e1·ĥ² + e2·ĥ + e3

wherein ĥ is the pixel distance between the obstacle and the skyline in the corrected image;
step 3-2) calculating the distance between the obstacle and the ship by the following formula:

d̂1 = f·h1/(x + ĥ)

wherein d̂1 is the distance between the obstacle and the ship, and ĥ is the pixel distance between the obstacle and the skyline in the corrected image;
step 3-3) calculating the scaling factor of the obstacle, i.e. the ratio between its actual size and its size in pixels at the estimated distance, and estimating the size of the obstacle;
step 3-4) estimating the relative speed and heading of the obstacle with respect to the ship from the changes of the actual angle and distance of the obstacle across consecutive frames, thereby obtaining the movement track of the obstacle; wherein the actual angle θ̂ of the obstacle is:

θ̂ = a·m̂² + b·m̂ + c

wherein m̂ is the abscissa of the obstacle in the corrected image.
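Steps 3-1) to 3-4) chain the calibrated relations into a per-frame height, distance and bearing estimate. A hedged sketch, assuming the quadratic calibration from step T3), the pinhole relation (x + ĥ)/f = h1/d, and the quadratic angle relation from step T2) (function names are mine):

```python
def current_height(h_pix, e1, e2, e3):
    """Step 3-1: actual camera height h1 from the pixel distance
    between the obstacle and the skyline, via the T3 quadratic."""
    return e1 * h_pix ** 2 + e2 * h_pix + e3

def obstacle_distance(f, x, h1, h_pix):
    """Step 3-2: pinhole-model distance ship -> obstacle,
    assuming (x + h_pix)/f = h1/d, hence d = f*h1/(x + h_pix)."""
    return f * h1 / (x + h_pix)

def obstacle_angle(m_pix, a, b, c):
    """Step 3-4: actual bearing of the obstacle from its abscissa,
    via the T2 quadratic theta = a*m^2 + b*m + c."""
    return a * m_pix ** 2 + b * m_pix + c
```

Tracking the (angle, distance) pair over consecutive frames then yields the obstacle's relative speed and heading.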
2. The image target identification-based water surface target avoidance method according to claim 1, wherein before the step 1), the method further comprises establishing and training a target detection model, specifically comprising:
step S1) establishing the target detection model: the model uses the first 5 convolutional layers of a VGG (Visual Geometry Group) network, converts the sixth and seventh fully connected layers of the VGG network into 2 convolutional layers using the atrous algorithm, and additionally adds 3 convolutional layers of different scales and 1 average pooling layer; the different convolutional layers are used to predict the offsets of the default boxes and the scores of the different categories respectively, and the final target detection result is obtained through a non-maximum suppression algorithm;
step S2) collecting typical water surface target images under various weather and illumination conditions, and labeling each image to form the training samples;
step S3) each water surface target in the images of the training samples has a corresponding label, which is assigned to a specific output of the fixed set of detector outputs; the loss function is then computed end to end and back-propagated, and the network parameters are adjusted by stochastic gradient descent to obtain the trained target detection model.
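The last stage of the detection model in step S1) is non-maximum suppression. A minimal greedy NMS sketch (the corner box format and the 0.5 IoU threshold are assumptions, not from the patent):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop overlapping ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```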
3. The image target identification-based water surface target avoidance method according to claim 1, wherein the specific process of judging whether there is a possibility of collision between the moving track of the obstacle and the ship route is as follows:
drawing the moving track of the obstacle and the ship route, and judging whether they intersect; if there is no intersection point, there is no possibility of collision between the obstacle and the ship; otherwise, the times at which the obstacle and the ship reach the intersection point are calculated respectively; if the difference between the two times is greater than a first threshold, there is no possibility of collision, otherwise a collision between the obstacle and the ship is possible; the first threshold ranges from 10 to 15 minutes.
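The claim-3 decision can be sketched as a segment-intersection test plus an arrival-time comparison. The straight-segment representation of track and route is an assumption; the threshold is the claimed 10-15 minutes:

```python
def _ccw(a, b, c):
    # signed area test: sign tells which side of a->b point c lies on
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """Do the obstacle track p1->p2 and ship route q1->q2 cross?"""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def collision_possible(p1, p2, q1, q2, t_obstacle, t_ship,
                       first_threshold=10.0):
    """No intersection -> no collision risk; otherwise compare the
    arrival times at the intersection (threshold in minutes)."""
    if not segments_intersect(p1, p2, q1, q2):
        return False
    return abs(t_obstacle - t_ship) <= first_threshold
```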
4. The image target identification-based water surface target avoidance method according to claim 3, wherein the step 6) specifically comprises:
adjusting the running speed of the ship so that the absolute value of the difference between the time at which the obstacle reaches the intersection point and the time at which the ship, after the adjustment, reaches the intersection point is greater than a preset time;
if the required adjustment of the running speed of the ship is less than or equal to a second threshold, keeping the course unchanged and sailing at the adjusted running speed; the second threshold ranges from 30% to 50% of the normal running speed;
if the required adjustment of the running speed of the ship exceeds the second threshold, keeping the original running speed and adjusting the course of the ship to the shorter tangent from the current position of the ship to a circle centered at the intersection point with a preset radius.
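The claim-4 maneuver choice can be sketched as follows, under stated assumptions: the ship's time to the intersection is distance/speed, and the thresholds are parameters of my parametrisation, not the patent's:

```python
def plan_avoidance(d_ship, v_normal, t_obstacle, preset_time, frac=0.4):
    """Choose between a speed change and a course change.

    d_ship     : distance from the ship to the intersection point
    v_normal   : normal running speed
    t_obstacle : time at which the obstacle reaches the intersection
    preset_time: required separation of the arrival times
    frac       : second threshold as a fraction of v_normal (30%-50%)
    Returns ('speed', new_speed) or ('course', v_normal). Illustrative.
    """
    # candidate speeds putting the ship's arrival preset_time
    # before or after the obstacle's arrival
    candidates = [d_ship / t for t in
                  (t_obstacle + preset_time, t_obstacle - preset_time)
                  if t > 0]
    v_new = min(candidates, key=lambda v: abs(v - v_normal))
    if abs(v_new - v_normal) <= frac * v_normal:
        return ('speed', v_new)      # small change: keep course
    return ('course', v_normal)      # big change: keep speed, steer tangent
```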
CN201910738989.9A 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification Active CN110580043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910738989.9A CN110580043B (en) 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910738989.9A CN110580043B (en) 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification

Publications (2)

Publication Number Publication Date
CN110580043A CN110580043A (en) 2019-12-17
CN110580043B true CN110580043B (en) 2020-09-08

Family

ID=68810739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910738989.9A Active CN110580043B (en) 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification

Country Status (1)

Country Link
CN (1) CN110580043B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310646B (en) * 2020-02-12 2023-11-21 智慧航海(青岛)科技有限公司 Method for improving navigation safety based on real-time detection of remote images
TWI793477B (en) * 2020-12-04 2023-02-21 財團法人船舶暨海洋產業研發中心 Assistance system for correcting vessel path and operation method thereof
CN112634661A (en) * 2020-12-25 2021-04-09 迈润智能科技(上海)有限公司 Intelligent berthing assisting method and system, computer equipment and storage medium
CN112597905A (en) * 2020-12-25 2021-04-02 北京环境特性研究所 Unmanned aerial vehicle detection method based on skyline segmentation
CN114253297B (en) * 2021-12-24 2023-12-12 交通运输部天津水运工程科学研究所 Method for actively and safely tracking tail gas of ship tail gas detection rotor unmanned aerial vehicle
CN114906290B (en) * 2022-06-18 2023-06-02 广东中威复合材料有限公司 Ferry with energy-saving hull molded line structure and collision risk assessment system thereof
CN116736867B (en) * 2023-08-10 2023-11-10 湖南湘船重工有限公司 Unmanned ship obstacle avoidance control system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2733061B1 (en) * 1995-04-13 1997-05-23 Sextant Avionique OPTOELECTRONIC DEVICE FOR AIDING PILOTAGE OF AN AIRCRAFT WITH POOR VISIBILITY
CN103697883B (en) * 2014-01-07 2016-03-30 中国人民解放军国防科学技术大学 A kind of aircraft horizontal attitude defining method based on skyline imaging
CN107305632B (en) * 2017-02-16 2020-06-12 武汉极目智能技术有限公司 Monocular computer vision technology-based target object distance measuring method and system
CN107301646B (en) * 2017-06-27 2019-09-17 深圳市云洲创新科技有限公司 Unmanned boat intelligent barrier avoiding method and apparatus based on monocular vision
CN109685858B (en) * 2018-12-29 2020-12-04 北京茵沃汽车科技有限公司 Monocular camera online calibration method
CN109919026B (en) * 2019-01-30 2023-06-30 华南理工大学 Surface unmanned ship local path planning method

Also Published As

Publication number Publication date
CN110580043A (en) 2019-12-17

Similar Documents

Publication Publication Date Title
CN110580043B (en) Water surface target avoidance method based on image target identification
CN110517521B (en) Lane departure early warning method based on road-vehicle fusion perception
EP2771657B1 (en) Wading apparatus and method
KR20220155559A (en) Autonomous navigation method using image segmentation
US9665781B2 (en) Moving body detection device and moving body detection method
US9740942B2 (en) Moving object location/attitude angle estimation device and moving object location/attitude angle estimation method
US8120644B2 (en) Method and system for the dynamic calibration of stereovision cameras
CN102806913B (en) Novel lane line deviation detection method and device
US11900668B2 (en) System and method for identifying an object in water
CN107728618A (en) A kind of barrier-avoiding method of unmanned boat
CN110298216A (en) Vehicle deviation warning method based on lane line gradient image adaptive threshold fuzziness
CN107479032B (en) Object detection system for an automated vehicle
CN110738081B (en) Abnormal road condition detection method and device
CN112581795B (en) Video-based real-time early warning method and system for ship bridge and ship-to-ship collision
CN109753841B (en) Lane line identification method and device
CN110658826A (en) Autonomous berthing method of under-actuated unmanned surface vessel based on visual servo
CN105300390B (en) The determination method and device of obstructing objects movement locus
US6956959B2 (en) Apparatus for recognizing environment
CN109410598B (en) Traffic intersection congestion detection method based on computer vision
CN111199177A (en) Automobile rearview pedestrian detection alarm method based on fisheye image correction
CN114913494B (en) Self-diagnosis calibration method for risk assessment of automatic driving visual perception redundant system
US11892854B2 (en) Assistance system for correcting vessel path and operation method thereof
CN114022775B (en) Water multi-target video image coordinate estimation method based on radar scanning variable
CN115792912A (en) Method and system for sensing environment of unmanned surface vehicle based on fusion of vision and millimeter wave radar under weak observation condition
CN115639536A (en) Unmanned ship perception target detection method and device based on multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231007

Address after: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee after: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES

Address before: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee before: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES

Patentee before: Institute of Oceanology Chinese Academy of Sciences

Effective date of registration: 20231007

Address after: 311800 Room 101, 1st floor, 22 Zhongjie building, 78 Zhancheng Avenue, Taozhu street, Zhuji City, Shaoxing City, Zhejiang Province

Patentee after: Zhejiang wanghaichao Technology Co.,Ltd.

Address before: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee before: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES
