CN110580043A - Water surface target avoidance method based on image target identification - Google Patents

Water surface target avoidance method based on image target identification

Info

Publication number
CN110580043A
CN110580043A (application CN201910738989.9A)
Authority
CN
China
Prior art keywords
ship
obstacle
image
skyline
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910738989.9A
Other languages
Chinese (zh)
Other versions
CN110580043B (en)
Inventor
鄢社锋
王凡
徐立军
汪嘉宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Wanghaichao Technology Co ltd
Institute of Acoustics CAS
Original Assignee
Institute of Oceanology of CAS
Institute of Acoustics CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Oceanology of CAS, Institute of Acoustics CAS
Priority to CN201910738989.9A
Publication of CN110580043A
Application granted
Publication of CN110580043B
Legal status: Active


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/0206: Control of position or course in two dimensions specially adapted to water vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a water surface target avoidance method based on image target identification, which comprises the following steps: Step 1) acquiring images of the water surface around a ship through a 360-degree full-view camera system, and performing distortion correction and tilt correction on the images; Step 2) using a target detection model to identify the skyline in the corrected image and determine whether an obstacle exists; if an obstacle exists in the image, entering step 3), otherwise entering step 5); Step 3) calculating the distance and size of the obstacle from the skyline and the position of the obstacle in the image, and estimating the moving track of the obstacle from multiple frames; Step 4) judging whether the obstacle and the ship are likely to collide; if so, turning to step 6), otherwise turning to step 5); Step 5) judging whether the route has been planned after an obstacle avoidance operation; if so, not adjusting the route, otherwise replanning the route to the destination; and Step 6) adjusting the ship's route.

Description

Water surface target avoidance method based on image target identification
Technical Field
The invention relates to the field of signal processing, in particular to a water surface target avoidance method based on image target identification.
Background
When a ship sails on the water surface, collisions with other ships, fixed structures or floating objects caused by accidents or human error can lead to casualties or property loss, so collision avoidance between ships and avoidance of surrounding obstacles are important guarantees of navigation safety. To avoid collisions, in addition to improving the skill and safety awareness of the crew, improving a ship's ability to evade other ships or obstacles by technical means such as collision avoidance methods or collision avoidance control strategies is another effective way to reduce collision accidents, and such technology is also one of the core technologies of autonomous navigation for unmanned ships.
Ship obstacle avoidance is generally carried out in two steps: first, obstacles are detected and located by sensors such as radar and cameras, or by fusing information from several sensors; then an obstacle avoidance algorithm plans the ship's route so that it can bypass the obstacles, reach its destination, and keep a safe distance from the obstacles along the way. Commonly used obstacle avoidance algorithms include the artificial potential field method, the A* algorithm and angle control methods, as illustrated in the sketch below. According to the type of obstacle, ship obstacle avoidance can be divided into static and dynamic obstacle avoidance. Static obstacles are mainly reefs, wharfs, fixed platforms on the water and other objects with fixed positions; after detecting a static obstacle, the ship adjusts its original route to bypass it and reach the destination. Many mature methods exist for static obstacle avoidance, and they are widely applied. Dynamic obstacles are other ships or floating objects whose positions change in the actual environment; compared with static obstacles they are harder to avoid, and static obstacle avoidance algorithms are often unsuitable for them. Dynamic obstacle avoidance requires not only detecting the obstacle and planning the route, but also continuously monitoring the obstacle while passing it and adjusting the heading at any time to avoid a collision. At present there is no mature method for dynamic obstacle avoidance.
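The artificial potential field method named above is one of the route-planning algorithms commonly paired with such obstacle detection. The following minimal Python sketch is not part of the patented method and uses purely illustrative gains and names; it only shows the basic idea of combining an attractive force toward the goal with repulsive forces from nearby obstacles.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0,
                         influence=50.0, step=1.0):
    """One steering update of a basic artificial potential field planner.

    pos, goal: 2D positions in metres; obstacles: list of 2D obstacle positions;
    influence: radius within which an obstacle exerts a repulsive force.
    """
    force = k_att * (goal - pos)                  # attraction toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:
            # Repulsion grows sharply as the vessel closes on the obstacle.
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    heading = force / (np.linalg.norm(force) + 1e-9)
    return pos + step * heading                   # next waypoint

# Illustrative run: steer from the origin toward (100, 0) past one obstacle.
pos, goal = np.array([0.0, 0.0]), np.array([100.0, 0.0])
obstacles = [np.array([50.0, 2.0])]
for _ in range(5):
    pos = potential_field_step(pos, goal, obstacles)
```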
Disclosure of Invention
The invention aims to overcome the above technical defects and provides a water surface target avoidance method based on image target identification. Compared with the prior art, the invention has the advantages of a large monitoring range, low algorithmic complexity, a simple hardware structure and low cost, and can be widely applied to ship obstacle avoidance, automatic driving of unmanned ships and the like.
In order to achieve the above object, the present invention provides a water surface target avoidance method based on image target recognition, implemented by installing a 360-degree full-view camera system on a sailing ship. The camera system comprises 4 cameras arranged on the same plane at the highest position of the hull, facing the four directions front, back, left and right. The method comprises the following steps:
Step 1) acquiring images of the water surface around the ship through the 360-degree full-view camera system, and performing distortion correction and tilt correction on the images;
Step 2) using the target detection model to identify the skyline in the corrected image and determine whether an obstacle exists; if an obstacle exists in the image, entering step 3), otherwise entering step 5);
Step 3) calculating the distance and size of the obstacle from the skyline and the position of the obstacle in the image, and estimating the moving track of the obstacle from multiple frames;
Step 4) judging whether the obstacle and the ship are likely to collide; if so, turning to step 6), otherwise turning to step 5);
Step 5) judging whether the route has been planned after an obstacle avoidance operation; if so, keeping the original speed or heading; otherwise, replanning the route to the destination and sailing according to the new route;
Step 6) adjusting the ship's route to keep a safe distance between the ship and the obstacle.
As an improvement of the above method, before step 1) the method further comprises establishing and training a target detection model, which specifically comprises:
Step S1) establishing a target detection model: the model adopts a visual geometry group (VGG) network, uses the first 5 groups of convolutional layers of the VGG network, and converts its sixth and seventh fully connected layers into 2 convolutional layers using the atrous algorithm; in addition, 3 convolutional layers of different scales and 1 average pooling layer are added, and the different convolutional layers are used respectively to predict the offsets of default boxes and the scores of different categories; the final target detection result is obtained through a non-maximum suppression algorithm;
Step S2) collecting typical water surface target images under various weather and illumination conditions and labeling each image to form the training samples;
Step S3) each water surface target in the training sample images has a corresponding label; the labels are assigned to specific outputs of the fixed set of detector outputs, the loss function is then computed end to end and back-propagated, and the network parameters are adjusted by stochastic gradient descent to obtain the trained target detection model.
As an improvement of the above method, before step 1) the method further comprises a step of calibrating the 360-degree full-view camera system, which specifically comprises:
Step T1) performing distortion correction on each of the 4 cameras, and after correction stitching the images of the 4 cameras into a full-view image;
Step T2) calibrating the relation between the abscissa of a calibration object in the image and its actual angle: setting an origin for the abscissa of the full-view image, and then photographing several calibration objects whose angles relative to the ship are known, to obtain the calibration relation between the abscissa of a calibration object in the image and its actual angle;
Step T3) calibrating the relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image: photographing the skyline while the 360-degree full-view camera system is level, and measuring the distance between the camera system and the sea level at that moment, to obtain the calibration relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image;
Step T4) calibrating the proportional relationship between the pixel distance and the actual distance: the camera parameters f and x are calculated by the following formula,
where f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h0 is the camera mounting height, d1 is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d0 is the distance between the skyline and the ship, given by:
where R is the radius of the Earth. A sketch of this calibration geometry is given below.
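The formulas referenced in step T4) appear as images in the original publication and are not reproduced above. The sketch below is a plausible reconstruction under a pinhole camera with a horizontal optical axis and a spherical Earth, consistent with the variable definitions given (f, x, h0, d1, h, d0, R); the function names and the numeric example are assumptions for illustration only.

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius R, in metres

def horizon_distance(h0):
    """d0: straight-line distance from a camera at height h0 to the skyline."""
    return math.sqrt(2.0 * R_EARTH * h0 + h0 ** 2)

def calibrate_f_x(h0, d1, h_pix):
    """Solve for the camera constants f and x (both in pixel units) from one
    calibration object at known range d1 whose waterline lies h_pix pixels
    below the skyline in the image.

    The skyline sits x = f*tan(dip) below the image centre, and an object at
    range d1 sits roughly f*h0/d1 below the centre (flat-water approximation),
    so h_pix = f*h0/d1 - f*tan(dip).
    """
    dip = math.atan2(horizon_distance(h0), R_EARTH)  # depression angle of the skyline
    f = h_pix / (h0 / d1 - math.tan(dip))
    x = f * math.tan(dip)
    return f, x

def range_from_skyline(f, x, h_cam, h_pix):
    """Invert the same model at run time: an object h_pix pixels below the
    skyline, seen from a camera at height h_cam, lies at about f*h_cam/(h_pix+x)."""
    return f * h_cam / (h_pix + x)

# Illustrative numbers: camera 5 m above the water, calibration buoy at 500 m
# appearing 40 px below the skyline.
f, x = calibrate_f_x(h0=5.0, d1=500.0, h_pix=40.0)
d_check = range_from_skyline(f, x, h_cam=5.0, h_pix=40.0)   # ~500 m
```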
As an improvement of the above method, the calibration relation between the abscissa of the calibration object in the image and its actual angle is:
θ = a·m² + b·m + c
where θ is the actual angle, m is the abscissa of the calibration object in the image, and a, b and c are parameters whose estimates can be obtained by the least squares method, as sketched below.
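The quadratic least-squares fit can be done directly with NumPy, as in the sketch below; the abscissas and bearings are illustrative values, not calibration data from the patent. The same one-line fit applies to the height calibration h0 = e1·h² + e2·h + e3 described next.

```python
import numpy as np

# Abscissas m_i of calibration objects in the stitched image and their known
# bearings theta_i relative to the ship, in degrees (illustrative values only).
m = np.array([120.0, 480.0, 900.0, 1400.0, 1850.0])
theta = np.array([-85.0, -40.0, 5.0, 60.0, 110.0])

# np.polyfit performs exactly this quadratic least-squares regression,
# returning the coefficients highest power first: a, b, c.
a, b, c = np.polyfit(m, theta, deg=2)

def pixel_to_bearing(m_pix):
    """Map an image abscissa to the calibrated real-world bearing (degrees)."""
    return a * m_pix ** 2 + b * m_pix + c
```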
As an improvement of the above method, the calibration relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image is:
h0 = e1·h² + e2·h + e3
where h0 is the installation height of the camera, h is the pixel distance between the skyline and the calibration object in the image, and estimates of the parameters e1, e2 and e3 can be obtained by the least squares method.
As an improvement of the above method, the tilt correction is specifically: the image is rotated so that the skyline in the image becomes horizontal, and the ordinate of the rotated skyline is the mean of the ordinates of the skyline before rotation; a sketch of this rotation is given below.
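A minimal OpenCV sketch of this tilt correction follows; it assumes the skyline has already been detected as a set of pixel points and that the opencv-python and NumPy packages are available. The helper name and the line-fitting step are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def correct_tilt(image, skyline_pts):
    """Rotate the frame so the detected skyline becomes horizontal.

    skyline_pts: N x 2 array of (x, y) pixel coordinates on the skyline.
    Rotating about a centre whose ordinate is the mean skyline ordinate keeps
    the rotated skyline at that mean ordinate, as described above.
    """
    xs, ys = skyline_pts[:, 0], skyline_pts[:, 1]
    slope, _ = np.polyfit(xs, ys, deg=1)             # skyline fitted as a straight line
    angle_deg = float(np.degrees(np.arctan(slope)))  # current tilt of the skyline
    centre = (image.shape[1] / 2.0, float(np.mean(ys)))
    M = cv2.getRotationMatrix2D(centre, angle_deg, 1.0)
    return cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
```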
As an improvement of the above method, the step 3) specifically comprises:
Step 3-1) calculating the actual height h1 of the current camera,
where the pixel distance between the obstacle and the skyline is measured in the corrected image;
Step 3-2) calculating the distance between the obstacle and the ship using the following formula,
where the computed quantity is the distance of the obstacle from the ship;
Step 3-3) calculating the scaling factor of the obstacle and estimating the size of the obstacle;
Step 3-4) estimating the relative speed and heading of the obstacle with respect to the ship from the change of the obstacle's actual angle and distance over consecutive frames, thereby obtaining the moving track of the obstacle (a sketch of this track estimation is given after this list); the actual angle of the obstacle is:
where the corresponding variable is the abscissa of the obstacle in the corrected image.
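Steps 3-1) to 3-4) yield, for each frame, a bearing and a range to the obstacle. The sketch below shows one simple way, not specified in the patent text, to turn those per-frame measurements into a relative velocity and course: convert each observation to ship-fixed Cartesian coordinates and fit a straight line over time. All names and the frame interval are assumptions.

```python
import numpy as np

def estimate_track(bearings_deg, ranges_m, dt):
    """Estimate an obstacle's relative velocity and course from per-frame
    bearing (degrees from the bow) and range (metres) measurements.

    dt: time between consecutive frames in seconds (at least two frames needed).
    Returns the ship-fixed positions, the velocity vector, its speed and course.
    """
    b = np.radians(np.asarray(bearings_deg, dtype=float))
    r = np.asarray(ranges_m, dtype=float)
    # Polar (bearing, range) -> Cartesian, x to starboard, y ahead of the ship.
    xy = np.stack([r * np.sin(b), r * np.cos(b)], axis=1)
    t = np.arange(len(r)) * dt
    vx = np.polyfit(t, xy[:, 0], deg=1)[0]   # least-squares slope = mean velocity
    vy = np.polyfit(t, xy[:, 1], deg=1)[0]
    speed = float(np.hypot(vx, vy))
    course_deg = float(np.degrees(np.arctan2(vx, vy)))
    return xy, (vx, vy), speed, course_deg
```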
As an improvement of the above method, whether the moving track of the obstacle and the ship's route may collide is judged as follows:
plotting the moving track of the obstacle and the ship's route, and judging whether they have an intersection point; if there is no intersection point, there is no possibility of collision between the obstacle and the ship; otherwise, the times at which the obstacle and the ship reach the intersection point are calculated respectively; if the difference between the two times is greater than a first threshold, the obstacle and the ship are not likely to collide, otherwise they are likely to collide; the first threshold ranges from 10 to 15 minutes. A sketch of this rule is given below.
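For two roughly straight-line tracks, the rule above can be sketched as follows; the default threshold of 15 minutes matches the upper end of the stated range, and all function and parameter names are illustrative.

```python
import numpy as np

def collision_possible(p_ship, v_ship, p_obs, v_obs, threshold_s=15 * 60):
    """Apply the intersection-and-arrival-time rule to two straight tracks.

    p_*: 2D positions in metres, v_*: 2D velocities in m/s. The crossing point
    of the two path lines is found; if it lies ahead of both craft, the
    difference of their arrival times is compared with the first threshold.
    """
    p_ship, v_ship = np.asarray(p_ship, float), np.asarray(v_ship, float)
    p_obs, v_obs = np.asarray(p_obs, float), np.asarray(v_obs, float)
    A = np.column_stack([v_ship, -v_obs])        # p_ship + a*v_ship = p_obs + b*v_obs
    if abs(np.linalg.det(A)) < 1e-9:
        return False                             # parallel courses: no crossing point
    a, b = np.linalg.solve(A, p_obs - p_ship)    # a, b: times (s) to reach the crossing
    if a < 0 or b < 0:
        return False                             # crossing point lies behind one of them
    return abs(a - b) <= threshold_s
```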
As an improvement of the above method, the step 6) specifically includes:
adjusting the running speed of the ship so that the absolute value of the difference between the time at which the ship would reach the intersection point of the obstacle's track and the ship's route before the adjustment and the time at which it reaches that intersection point after the adjustment is greater than a preset time;
if the adjustment of the ship's running speed is less than or equal to a second threshold, keeping the heading unchanged and sailing at the adjusted speed; the second threshold ranges from 30% to 50% of the normal running speed;
if the adjustment of the ship's running speed exceeds the second threshold, keeping the original running speed and adjusting the ship's heading to follow the shortest tangent from the ship's current position to a circle centered at the intersection point with a preset radius; a sketch of this adjustment rule is given below.
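The speed-first, heading-second rule of step 6) can be sketched as below. The 30% speed-change cap, the slow-down/speed-up choice, and returning the tangent heading as an offset from the bearing to the crossing point (turn direction chosen to minimise the course change) are illustrative assumptions consistent with the text.

```python
import math

def plan_avoidance(t_ship_s, required_gap_s, v_normal, ship_pos, crossing,
                   safe_radius, max_speed_change_ratio=0.3):
    """Choose a speed change if it is small enough, otherwise a heading change.

    t_ship_s: time to reach the crossing point at the normal speed v_normal;
    required_gap_s: preset minimum separation of arrival times;
    crossing, ship_pos: (x, y) in metres; safe_radius: preset radius around
    the crossing point. Returns the commanded speed and heading offset.
    """
    d = v_normal * t_ship_s                       # distance to the crossing point
    # Smallest speed change that shifts the arrival time by required_gap_s.
    candidates = [d / (t_ship_s + required_gap_s)]          # slow down
    if t_ship_s > required_gap_s:
        candidates.append(d / (t_ship_s - required_gap_s))  # or speed up
    v_new = min(candidates, key=lambda v: abs(v - v_normal))
    if abs(v_new - v_normal) <= max_speed_change_ratio * v_normal:
        return {"speed": v_new, "heading_offset_deg": 0.0}   # keep course
    # Otherwise keep speed and steer along the shorter tangent to the circle
    # of radius safe_radius centred on the crossing point.
    dist = math.hypot(crossing[0] - ship_pos[0], crossing[1] - ship_pos[1])
    offset = math.degrees(math.asin(min(1.0, safe_radius / dist)))
    return {"speed": v_normal, "heading_offset_deg": offset}
```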
Compared with the prior art, the invention has the following advantages:
1. Compared with binocular-vision-based obstacle avoidance technology, the invention monitors the environment with a 360-degree full-view camera system, which gives a larger monitoring range and enables 360-degree collision avoidance; at the same time, the distance, size, direction and speed of dynamic obstacles such as other ships are measured by image recognition, which is simpler and responds faster;
2. Compared with radar-based obstacle avoidance technology, the 360-degree full-view camera system provides more information about the environment and the obstacles, so obstacle judgment is more accurate;
3. Compared with multi-sensor obstacle avoidance technology, the invention has a simpler hardware structure and lower cost;
4. Compared with binocular or multi-camera distance estimation, the invention realizes monocular distance estimation of obstacles using the skyline, with lower hardware complexity and cost.
Drawings
FIG. 1 is a flow chart of a water surface target avoidance method based on image target recognition.
Detailed Description
The technical solutions of the present invention are further described below with reference to the drawings and examples, but the embodiments of the present invention are not limited thereto.
The embodiment of the invention is applied to an unmanned ship capable of autonomous navigation. One camera is installed in each of the front, back, left and right directions, on the same plane at the highest position of the hull, and the combined fields of view of the 4 cameras cover the full 360-degree view around the ship. The embodiment implements the water surface target avoidance method based on image target recognition using the full-view camera system formed by these 4 cameras; the method comprises the following steps, and its flow is shown in FIG. 1:
Step 1: establishing a target detection model for identifying water surface target images, and calibrating the 360-degree full-view camera system.
The target detection model for identifying water surface target images is built with the SSD (Single Shot MultiBox Detector) object detection model from deep learning. The base network of the SSD model is a VGG16 (Visual Geometry Group) network; the first 5 groups of convolutional layers of the VGG are used, and fc6 (the 6th fully connected layer) and fc7 (the 7th fully connected layer) of the VGG are converted into 2 convolutional layers using the atrous algorithm. In addition, 3 convolutional layers of different scales and 1 average pooling layer are added, and the different convolutional layers are used respectively to predict the offsets of default boxes and the scores of the different categories. Finally, the detection result is obtained through a non-maximum suppression algorithm. In this embodiment, typical water surface target images under various weather and lighting conditions are collected first, and each image is labeled. Each water surface target in each image has a corresponding label; the labels are assigned to specific outputs of the fixed set of detector outputs, the loss function is then computed end to end and back-propagated, the network parameters are adjusted by stochastic gradient descent, and finally the trained target detection model is obtained. A sketch of such a detector and one training step is given below.
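As a rough illustration (not the authors' training code), a VGG16-backed SSD of the kind described above is available off the shelf in recent versions of torchvision; the sketch below builds one and runs a single SGD step on a dummy annotated frame. The category count, learning rate and tensor values are placeholders.

```python
import torch
import torchvision

# SSD300 with a VGG16 backbone, as in the embodiment; num_classes counts the
# water surface target categories plus background (the value 5 is illustrative).
model = torchvision.models.detection.ssd300_vgg16(
    weights=None, weights_backbone=None, num_classes=5)
model.train()

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# A dummy 300x300 RGB frame with one labelled box (x1, y1, x2, y2 in pixels)
# stands in for a real annotated water surface image.
images = [torch.rand(3, 300, 300)]
targets = [{"boxes": torch.tensor([[30.0, 120.0, 90.0, 170.0]]),
            "labels": torch.tensor([1])}]

loss_dict = model(images, targets)   # classification + box regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```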
The calibration of the 360-degree full-view camera system comprises the following steps:
Step 1.1: correcting the distortion of the imaging system.
In this embodiment, distortion correction is performed on each camera, and the corrected images of the 4 cameras are then stitched into a full-view image.
Step 1.2: calibrating the relation between the abscissa of an object in the image and its actual angle.
In this embodiment, the origin of the abscissa of the full-view image is set first, and then several calibration objects with known angles relative to the unmanned ship are photographed, so as to obtain the calibration relation between the abscissa of an object in the image and its actual angle. The quadratic function θ = a·m² + b·m + c is used to fit this relation, where θ is the actual angle, m is the abscissa of the object in the image, and the estimates of the parameters a, b and c are obtained by the least squares method.
Step 1.3: calibrating the relation between the height of the skyline in the image and the installation height of the camera system.
In this embodiment, the skyline is photographed while the camera system is level, and the distance between the camera system and the sea level at that moment is measured, so as to obtain the calibration relation between the installation height of the camera system and the position of the photographed skyline in the image. The quadratic function h0 = e1·h² + e2·h + e3 is used to fit this relation, where h0 is the installation height of the camera, h is the pixel distance between the skyline and the calibration object in the image, and the estimates of the parameters e1, e2 and e3 are obtained by the least squares method.
Step 1.4: calibrating the proportional relationship between pixel distance and actual distance; the camera parameters f and x are calculated by the following formula,
where f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h0 is the camera mounting height, d1 is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d0 is the distance between the skyline and the ship, calculated from
where R is the radius of the Earth.
Step 2: while the ship is sailing, collecting images of the water surface around the ship with the 360-degree full-view camera system installed on the ship, and correcting the distortion and tilt of the images.
The distortion correction of the image is the same as in step 1.1. The tilt correction rotates the image so that the skyline in it becomes horizontal, with the ordinate of the rotated skyline set to the mean of the ordinates of the skyline before rotation.
Step 3: using the target detection model established in step 1 to identify the skyline in the image and whether an obstacle exists; if no obstacle exists in the image, turning to step 5, otherwise entering step 4.
Obstacles are detected with the trained SSD network: the captured image is input into the trained SSD network to obtain a detection result indicating whether an obstacle is present, and when an obstacle is present, its position in the image is obtained.
Step 4: estimating the distance and size of the obstacle using the skyline and the position of the object in the image, and estimating the speed and travel direction of the obstacle using multiple frames of images.
The distance and size of the obstacle are estimated using the following method:
Step 4.1: calculating the current camera height h1 from the relation between the height of the skyline in the image and the installation height of the camera,
where the pixel distance between the obstacle and the skyline is measured in the corrected image;
h1 is calculated from the relation between the height of the skyline in the image and the installation height of the camera system calibrated in step 1.3.
Step 4.2: calculating the distance to the obstacle using the following equation,
where the computed quantity is the distance of the obstacle from the ship.
Step 4.3: calculating the scaling factor of the obstacle and estimating the size of the obstacle; a sketch of this estimate is given below.
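The scaling-factor formula of step 4.3 is given as an image in the original publication; the sketch below uses the standard small-angle pinhole relation instead, under which each pixel at range d1 spans roughly d1 / f metres. The function name and arguments are illustrative.

```python
def obstacle_size(pixel_width, pixel_height, d1, f):
    """Estimate an obstacle's real-world extent from its bounding box.

    d1: estimated range to the obstacle in metres; f: focal length in pixel
    units (from the calibration of step 1.4). Returns (width, height) in metres.
    """
    scale = d1 / f                    # metres per pixel at the obstacle's range
    return pixel_width * scale, pixel_height * scale
```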
The relative speed and travel direction of the obstacle with respect to the unmanned ship are then estimated from the change of the obstacle's angle and distance over consecutive frames.
Step 5: if there is no obstacle, or the moving track of the obstacle has no possibility of colliding with the ship's route, and either no obstacle avoidance operation has been performed or the route has already been replanned after obstacle avoidance, the original speed and heading are kept; if the moving track of the obstacle has no possibility of colliding with the ship's route but the route has not yet been replanned after an obstacle avoidance operation, the route to the destination is replanned and the ship sails according to the new route; if the moving track of the obstacle may collide with the ship's route, the speed or heading of the ship is adjusted so that a safe distance is kept between the ship and the obstacle.
Whether the moving track of the obstacle and the ship's route may collide is judged as follows: the moving track of the obstacle is computed from its speed and direction together with the ship's route, and it is judged whether the two have an intersection point. If there is no intersection point, there is no possibility of collision; otherwise, the times at which the obstacle and the ship reach the intersection point are calculated respectively; if the difference between these times is greater than a preset value, there is no possibility of collision, otherwise a collision is possible. The preset value ranges from 10 to 15 minutes.
When the moving track of the obstacle may collide with the ship's route, the running speed of the ship is adjusted first, so that the absolute value of the difference between the time at which the ship would reach the intersection point of the obstacle's track and the ship's route before the adjustment and the time at which it reaches that point after the adjustment is greater than the preset time;
if the required speed adjustment is less than or equal to the preset value, the heading is kept unchanged and the ship sails at the adjusted speed; this preset value ranges from 30% to 50% of the normal running speed;
if the required speed adjustment exceeds the preset value, the original speed is kept and the ship's heading is adjusted to follow the shortest tangent from the ship's current position to a circle centered at the intersection point with a preset radius.
Step 6: if the ship receives a command to stop sailing, it stops moving; otherwise, execution continues from step 2.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. A water surface target avoidance method based on image target identification, implemented by installing a 360-degree full-view camera system on a sailing ship, the camera system comprising 4 cameras arranged on the same plane at the highest position of the hull, facing the four directions front, back, left and right, wherein the method comprises the following steps:
Step 1) acquiring images of the water surface around the ship through the 360-degree full-view camera system, and performing distortion correction and tilt correction on the images;
Step 2) using the target detection model to identify the skyline in the corrected image and determine whether an obstacle exists; if an obstacle exists in the image, entering step 3), otherwise entering step 5);
Step 3) calculating the distance and size of the obstacle from the skyline and the position of the obstacle in the image, and estimating the moving track of the obstacle from multiple frames;
Step 4) judging whether the obstacle and the ship are likely to collide; if so, turning to step 6), otherwise turning to step 5);
Step 5) judging whether the route has been planned after an obstacle avoidance operation; if so, keeping the original speed or heading; otherwise, replanning the route to the destination and sailing according to the new route;
Step 6) adjusting the ship's route to keep a safe distance between the ship and the obstacle.
2. The image target identification-based water surface target avoidance method according to claim 1, wherein before step 1) the method further comprises establishing and training a target detection model, which specifically comprises:
Step S1) establishing a target detection model: the model adopts a visual geometry group (VGG) network, uses the first 5 groups of convolutional layers of the VGG network, and converts its sixth and seventh fully connected layers into 2 convolutional layers using the atrous algorithm; in addition, 3 convolutional layers of different scales and 1 average pooling layer are added, and the different convolutional layers are used respectively to predict the offsets of default boxes and the scores of different categories; the final target detection result is obtained through a non-maximum suppression algorithm;
Step S2) collecting typical water surface target images under various weather and illumination conditions and labeling each image to form the training samples;
Step S3) each water surface target in the training sample images has a corresponding label; the labels are assigned to specific outputs of the fixed set of detector outputs, the loss function is then computed end to end and back-propagated, and the network parameters are adjusted by stochastic gradient descent to obtain the trained target detection model.
3. The image target recognition-based water surface target avoidance method according to claim 1, wherein before step 1) the method further comprises a step of calibrating the 360-degree full-view camera system, which specifically comprises:
Step T1) performing distortion correction on each of the 4 cameras, and after correction stitching the images of the 4 cameras into a full-view image;
Step T2) calibrating the relation between the abscissa of a calibration object in the image and its actual angle: setting an origin for the abscissa of the full-view image, and then photographing several calibration objects whose angles relative to the ship are known, to obtain the calibration relation between the abscissa of a calibration object in the image and its actual angle;
Step T3) calibrating the relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image: photographing the skyline while the 360-degree full-view camera system is level, and measuring the distance between the camera system and the sea level at that moment, to obtain the calibration relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image;
Step T4) calibrating the proportional relationship between the pixel distance and the actual distance: the camera parameters f and x are calculated by the following formula,
where f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h0 is the camera mounting height, d1 is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d0 is the distance between the skyline and the ship, given by:
where R is the radius of the Earth.
4. The image target identification-based water surface target avoidance method according to claim 3, wherein the calibration relation between the abscissa of the calibration object in the image and its actual angle is:
θ = a·m² + b·m + c
where θ is the actual angle, m is the abscissa of the calibration object in the image, and a, b and c are parameters whose estimates can be obtained by the least squares method.
5. The image target identification-based water surface target avoidance method according to claim 3, wherein the calibration relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image is:
h0 = e1·h² + e2·h + e3
where h0 is the installation height of the camera, h is the pixel distance between the skyline and the calibration object in the image, and estimates of the parameters e1, e2 and e3 can be obtained by the least squares method.
6. The image target identification-based water surface target avoidance method according to any one of claims 3 to 5, wherein the tilt correction is specifically: the image is rotated so that the skyline in the image becomes horizontal, and the ordinate of the rotated skyline is the mean of the ordinates of the skyline before rotation.
7. The image target identification-based water surface target avoidance method according to claim 6, wherein the step 3) specifically comprises:
Step 3-1) calculating the actual height h1 of the current camera,
where the pixel distance between the obstacle and the skyline is measured in the corrected image;
Step 3-2) calculating the distance between the obstacle and the ship using the following formula,
where the computed quantity is the distance of the obstacle from the ship;
Step 3-3) calculating the scaling factor of the obstacle and estimating the size of the obstacle;
Step 3-4) estimating the relative speed and heading of the obstacle with respect to the ship from the change of the obstacle's actual angle and distance over consecutive frames, thereby obtaining the moving track of the obstacle; wherein the actual angle of the obstacle is:
where the corresponding variable is the abscissa of the obstacle in the corrected image.
8. The image target recognition-based water surface target avoidance method according to claim 7, wherein whether the moving track of the obstacle and the ship's route may collide is judged as follows:
plotting the moving track of the obstacle and the ship's route, and judging whether they have an intersection point; if there is no intersection point, there is no possibility of collision between the obstacle and the ship; otherwise, the times at which the obstacle and the ship reach the intersection point are calculated respectively; if the difference between the two times is greater than a first threshold, the obstacle and the ship are not likely to collide, otherwise they are likely to collide; the first threshold ranges from 10 to 15 minutes.
9. The image target identification-based water surface target avoidance method according to claim 8, wherein the step 6) specifically comprises:
adjusting the running speed of the ship so that the absolute value of the difference between the time at which the ship would reach the intersection point of the obstacle's track and the ship's route before the adjustment and the time at which it reaches that intersection point after the adjustment is greater than a preset time;
if the adjustment of the ship's running speed is less than or equal to a second threshold, keeping the heading unchanged and sailing at the adjusted speed; the second threshold ranges from 30% to 50% of the normal running speed;
if the adjustment of the ship's running speed exceeds the second threshold, keeping the original running speed and adjusting the ship's heading to follow the shortest tangent from the ship's current position to a circle centered at the intersection point with a preset radius.
CN201910738989.9A 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification Active CN110580043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910738989.9A CN110580043B (en) 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910738989.9A CN110580043B (en) 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification

Publications (2)

Publication Number Publication Date
CN110580043A true CN110580043A (en) 2019-12-17
CN110580043B CN110580043B (en) 2020-09-08

Family

ID=68810739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910738989.9A Active CN110580043B (en) 2019-08-12 2019-08-12 Water surface target avoidance method based on image target identification

Country Status (1)

Country Link
CN (1) CN110580043B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310646A (en) * 2020-02-12 2020-06-19 智慧航海(青岛)科技有限公司 Method for improving navigation safety based on real-time detection of remote images
CN112597905A (en) * 2020-12-25 2021-04-02 北京环境特性研究所 Unmanned aerial vehicle detection method based on skyline segmentation
CN112634661A (en) * 2020-12-25 2021-04-09 迈润智能科技(上海)有限公司 Intelligent berthing assisting method and system, computer equipment and storage medium
CN114253297A (en) * 2021-12-24 2022-03-29 交通运输部天津水运工程科学研究所 Method for actively and safely tracking tail gas of rotor unmanned aerial vehicle through ship tail gas detection
CN114593732A (en) * 2020-12-04 2022-06-07 财团法人船舶暨海洋产业研发中心 Ship auxiliary correction system and operation method thereof
CN114906290A (en) * 2022-06-18 2022-08-16 广东中威复合材料有限公司 Ferry with energy-saving ship body line structure and collision risk evaluation system thereof
CN116736867A (en) * 2023-08-10 2023-09-12 湖南湘船重工有限公司 Unmanned ship obstacle avoidance control system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09118298A * 1995-04-13 1997-05-06 Sextant Avionique Photo-electronic device for assisting steering of aircraft under a state of defective field of view
CN103697883A (en) * 2014-01-07 2014-04-02 中国人民解放军国防科学技术大学 Aircraft horizontal attitude determination method based on skyline imaging
CN107301646A (en) * 2017-06-27 2017-10-27 深圳市云洲创新科技有限公司 Unmanned boat intelligent barrier avoiding method and apparatus based on monocular vision
CN107305632A (en) * 2017-02-16 2017-10-31 武汉极目智能技术有限公司 Destination object distance measurement method and system based on monocular computer vision technique
CN109685858A (en) * 2018-12-29 2019-04-26 北京茵沃汽车科技有限公司 A kind of monocular cam online calibration method
CN109919026A (en) * 2019-01-30 2019-06-21 华南理工大学 A kind of unmanned surface vehicle local paths planning method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09118298A * 1995-04-13 1997-05-06 Sextant Avionique Photo-electronic device for assisting steering of aircraft under a state of defective field of view
CN103697883A (en) * 2014-01-07 2014-04-02 中国人民解放军国防科学技术大学 Aircraft horizontal attitude determination method based on skyline imaging
CN107305632A (en) * 2017-02-16 2017-10-31 武汉极目智能技术有限公司 Destination object distance measurement method and system based on monocular computer vision technique
CN107301646A (en) * 2017-06-27 2017-10-27 深圳市云洲创新科技有限公司 Unmanned boat intelligent barrier avoiding method and apparatus based on monocular vision
CN109685858A (en) * 2018-12-29 2019-04-26 北京茵沃汽车科技有限公司 A kind of monocular cam online calibration method
CN109919026A (en) * 2019-01-30 2019-06-21 华南理工大学 A kind of unmanned surface vehicle local paths planning method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wang Guihuai et al.: "Image recognition method for ships ahead of an unmanned surface vessel based on deep learning", Ship Engineering *
Zhao Penghui et al.: "Single-stage ship detection algorithm based on an improved VGG network", Journal of Optoelectronics·Laser *
Tai Jing et al.: "Camera calibration based on the least squares method", Modern Computer *
Zheng Guoshu et al.: "Indoor people counting in video based on the deep learning SSD model", Industrial Control Computer *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310646A (en) * 2020-02-12 2020-06-19 智慧航海(青岛)科技有限公司 Method for improving navigation safety based on real-time detection of remote images
CN111310646B (en) * 2020-02-12 2023-11-21 智慧航海(青岛)科技有限公司 Method for improving navigation safety based on real-time detection of remote images
CN114593732A (en) * 2020-12-04 2022-06-07 财团法人船舶暨海洋产业研发中心 Ship auxiliary correction system and operation method thereof
CN112597905A (en) * 2020-12-25 2021-04-02 北京环境特性研究所 Unmanned aerial vehicle detection method based on skyline segmentation
CN112634661A (en) * 2020-12-25 2021-04-09 迈润智能科技(上海)有限公司 Intelligent berthing assisting method and system, computer equipment and storage medium
CN114253297A (en) * 2021-12-24 2022-03-29 交通运输部天津水运工程科学研究所 Method for actively and safely tracking tail gas of rotor unmanned aerial vehicle through ship tail gas detection
CN114253297B (en) * 2021-12-24 2023-12-12 交通运输部天津水运工程科学研究所 Method for actively and safely tracking tail gas of ship tail gas detection rotor unmanned aerial vehicle
CN114906290A (en) * 2022-06-18 2022-08-16 广东中威复合材料有限公司 Ferry with energy-saving ship body line structure and collision risk evaluation system thereof
CN116736867A (en) * 2023-08-10 2023-09-12 湖南湘船重工有限公司 Unmanned ship obstacle avoidance control system
CN116736867B (en) * 2023-08-10 2023-11-10 湖南湘船重工有限公司 Unmanned ship obstacle avoidance control system

Also Published As

Publication number Publication date
CN110580043B (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN110580043B (en) Water surface target avoidance method based on image target identification
CN110517521B (en) Lane departure early warning method based on road-vehicle fusion perception
EP2771657B1 (en) Wading apparatus and method
KR20220155559A (en) Autonomous navigation method using image segmentation
US9665781B2 (en) Moving body detection device and moving body detection method
US9740942B2 (en) Moving object location/attitude angle estimation device and moving object location/attitude angle estimation method
CN107728618A (en) A kind of barrier-avoiding method of unmanned boat
US20100208034A1 (en) Method and system for the dynamic calibration of stereovision cameras
US20220024549A1 (en) System and method for measuring the distance to an object in water
CN110298216A (en) Vehicle deviation warning method based on lane line gradient image adaptive threshold fuzziness
CN107479032B (en) Object detection system for an automated vehicle
CN110738081B (en) Abnormal road condition detection method and device
CN112581795B (en) Video-based real-time early warning method and system for ship bridge and ship-to-ship collision
CN109753841B (en) Lane line identification method and device
CN110658826A (en) Autonomous berthing method of under-actuated unmanned surface vessel based on visual servo
CN105300390B (en) The determination method and device of obstructing objects movement locus
US6956959B2 (en) Apparatus for recognizing environment
CN109410598B (en) Traffic intersection congestion detection method based on computer vision
CN114913494B (en) Self-diagnosis calibration method for risk assessment of automatic driving visual perception redundant system
CN117078953A (en) Bridge ship collision prevention lightweight multi-stage early warning method based on visual images
US11892854B2 (en) Assistance system for correcting vessel path and operation method thereof
CN113112520A (en) Unmanned aerial vehicle turning jelly effect processing method and system based on artificial intelligence
CN114089299A (en) Marine target detection and identification method based on situation awareness multi-source sensor linkage
KR102497488B1 (en) Image recognition apparatus for adjusting recognition range according to driving speed of autonomous vehicle
CN105785990A (en) Ship parking system based on panoramic viewing and barrier identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231007

Address after: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee after: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES

Address before: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee before: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES

Patentee before: Institute of Oceanology Chinese Academy of Sciences

Effective date of registration: 20231007

Address after: 311800 Room 101, 1st floor, 22 Zhongjie building, 78 Zhancheng Avenue, Taozhu street, Zhuji City, Shaoxing City, Zhejiang Province

Patentee after: Zhejiang wanghaichao Technology Co.,Ltd.

Address before: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee before: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES