Disclosure of Invention
The invention aims to overcome the above technical defects and provides a water surface target avoidance method based on image target identification. Compared with the prior art, the invention has the advantages of a large monitoring range, low algorithm complexity, a simple hardware structure and low cost, and can be widely applied to ship obstacle avoidance, unmanned ship automatic driving and the like.
In order to achieve the above object, the present invention provides a method for avoiding a water surface target based on image target recognition, implemented by installing a 360-degree full-view camera system on a sailing ship. The camera system comprises 4 cameras mounted on the same plane at the highest position of the hull, facing the front, back, left and right directions respectively. The method comprises the following steps:
step 1) acquiring images of the water surface around a ship through a 360-degree full-view camera system, and performing distortion correction and inclination correction on the images;
step 2) using the target detection model to identify the skyline in the corrected image and to detect whether an obstacle exists; if an obstacle exists in the image, entering step 3), otherwise entering step 5);
step 3) calculating the distance and size of the obstacle by using the skyline and the position of the obstacle in the image, and estimating the moving track of the obstacle by using multi-frame images;
step 4) judging whether the obstacle and the ship are likely to collide, if so, turning to step 6); otherwise, turning to step 5);
step 5) judging whether the route has already been replanned after the obstacle avoidance operation; if so, keeping the original running speed and navigation direction; otherwise, replanning the route to the destination and sailing along the new route;
step 6) adjusting the ship route to keep a safe distance between the ship and the obstacle.
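Steps 1) to 6) above form a sensing-and-decision loop. The sketch below is a minimal illustration only: every callable and state flag (`capture`, `detect`, `needs_replan`, etc.) is a hypothetical placeholder standing in for the concrete procedures the method describes, not part of the disclosure itself.

```python
def avoidance_step(state, capture, detect, estimate, will_collide,
                   replan, adjust):
    """One iteration of the avoidance loop described in steps 1)-6).

    All callables are hypothetical placeholders for the concrete
    procedures of the method (image capture, SSD detection, ranging,
    collision prediction, route replanning, course adjustment).
    """
    image = capture()                      # step 1): corrected panorama
    skyline, obstacles = detect(image)     # step 2): skyline + obstacles
    for obs in obstacles:
        track = estimate(skyline, obs)     # step 3): distance/size/track
        if will_collide(track, state):     # step 4): collision test
            return adjust(state, track)    # step 6): keep safe distance
    if state.get("needs_replan"):          # step 5): avoidance happened but
        return replan(state)               #          route not yet replanned
    return state                           # step 5): keep speed and heading
```

Each iteration either returns an adjusted state (collision predicted), a replanned state (post-avoidance), or the unchanged state.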
As an improvement of the above method, the step 1) is preceded by establishing and training the target detection model, which specifically comprises:
step S1) establishing a target detection model, wherein the target detection model adopts a Visual Geometry Group (VGG) network: the first 5 groups of convolutional layers of the VGG network are retained, and its sixth and seventh fully connected layers are converted into 2 convolutional layers by the atrous (dilated convolution) algorithm; in addition, 3 convolutional layers of different scales and 1 average pooling layer are added, and the different convolutional layers are used respectively to predict the offsets of the default boxes and the scores of the different categories; the final target detection result is obtained through a non-maximum suppression algorithm;
step S2) collecting typical water surface target images under various weather and illumination conditions, and labeling each image to form the training samples;
step S3) the water surface target in each image of the training samples has a corresponding label; each label is assigned to a specific output of the fixed set of detector outputs, the loss function is then computed end to end and back-propagated, and the network parameters are adjusted by stochastic gradient descent to obtain the trained target detection model.
As an improvement of the above method, before the step 1), a step of calibrating a 360-degree full-view imaging system is further included, specifically including:
step T1) carrying out distortion correction on each of the 4 cameras, and splicing the corrected images of the 4 cameras into a full-view image;
step T2) calibrating the relationship between the abscissa of a calibration object in the image and its actual angle: setting the origin of the abscissa of the full-view image, and then shooting a plurality of calibration objects with known angles relative to the ship, so as to obtain the calibration relation between the abscissa of a calibration object in the image and its actual angle;
step T3) calibrating the relationship between the camera installation height and the pixel distance between the skyline and the calibration object in the image: shooting the skyline while the 360-degree full-view camera system is level, and measuring the distance between the camera system and the sea level at that moment, so as to obtain the calibration relation between the camera installation height and the pixel distance between the skyline and the calibration object in the image;
step T4) calibrating the proportional relationship between the pixel distance and the actual distance; the parameters f and x of the camera are calculated by the following formulas:

f = h / [h0·(1/d1 − 1/d0)]
x = f·h0/d0

wherein f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h0 is the camera mounting height, d1 is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d0 is the distance between the skyline and the ship:

d0 = √(2R·h0)

wherein R is the radius of the earth.
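The calibration of step T4) can be sketched as follows. This is a minimal illustration assuming the standard pinhole-plus-horizon geometry implied by the symbol definitions (the calibration object's pixel offset below the skyline is f·h0/d1 − x, and the skyline sits x below the image center with x = f·h0/d0); the function names and exact algebraic form are this sketch's assumptions, not a verbatim quote of the patent.

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius in metres

def horizon_distance(h0):
    """Distance d0 from a camera at height h0 to the skyline: d0 = sqrt(2*R*h0)."""
    return math.sqrt(2.0 * R_EARTH * h0)

def calibrate_f_x(h0, d1, h_px):
    """Recover f (optical-centre/image-plane distance, in pixel units) and
    x (skyline offset from the image-plane centre) from one calibration
    object at known distance d1 whose pixel offset below the skyline is h_px.

    Assumes the pinhole relations
        h_px = f*h0/d1 - x
        x    = f*h0/d0
    which together give f = h_px / (h0*(1/d1 - 1/d0)).
    """
    d0 = horizon_distance(h0)
    f = h_px / (h0 * (1.0 / d1 - 1.0 / d0))
    x = f * h0 / d0
    return f, x
```

With f and x known, the ranging step of the method inverts the same relation to recover an obstacle's distance from its pixel offset below the skyline.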
As an improvement of the above method, a calibration relation between an abscissa and an actual angle of the calibration object in the image is:
θ = am² + bm + c
wherein θ is the actual angle, m is the abscissa of the calibration object in the image, and a, b and c are parameters whose estimates can be obtained by the least square method.
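The quadratic fit θ = am² + bm + c can be obtained with an ordinary least-squares polynomial fit. The sketch below uses synthetic, made-up calibration data purely for illustration:

```python
import numpy as np

# Synthetic calibration data: abscissas m of calibration objects in the
# panorama and their measured bearings theta (degrees). The values are
# invented for illustration; real data comes from step T2).
m = np.array([100.0, 400.0, 800.0, 1200.0, 1600.0])
a_true, b_true, c_true = 1e-5, 0.2, -5.0
theta = a_true * m**2 + b_true * m + c_true

# Least-squares estimates of a, b, c (np.polyfit returns the highest
# degree coefficient first).
a_hat, b_hat, c_hat = np.polyfit(m, theta, deg=2)

def pixel_to_bearing(m_px, coeffs=(a_hat, b_hat, c_hat)):
    """Map an image abscissa to an actual bearing via theta = a*m^2 + b*m + c."""
    a, b, c = coeffs
    return a * m_px**2 + b * m_px + c
```

The same `np.polyfit` call also fits the height relation h0 = e1h² + e2h + e3 used in the following claim.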
As an improvement of the above method, the calibration relationship between the camera installation height and the pixel distance between the skyline and the calibration object in the image is as follows:
h0 = e1h² + e2h + e3
wherein h0 is the installation height of the camera, h is the pixel distance between the skyline and the calibration object in the image, and estimates of the parameters e1, e2 and e3 can be obtained by the least square method.
As an improvement of the above method, the tilt correction is specifically: the image is rotated so that the skyline of the image remains horizontal and the ordinate of the rotated skyline is the mean of the ordinates of the skyline before rotation.
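The tilt correction just described amounts to computing a rotation angle from the detected skyline and re-centering it on the mean ordinate. A minimal sketch operating on the skyline's two endpoint coordinates (the image resampling itself, e.g. with an imaging library, is omitted; the function name is an assumption):

```python
import math

def tilt_correction(p_left, p_right):
    """Given the skyline's endpoints (x, y) in the distortion-corrected
    image, return the rotation angle (radians) that levels the skyline
    and the ordinate the levelled skyline should sit at (the mean of the
    original ordinates, as the method specifies).
    """
    (x0, y0), (x1, y1) = p_left, p_right
    angle = math.atan2(y1 - y0, x1 - x0)   # current skyline inclination
    target_y = 0.5 * (y0 + y1)             # mean ordinate after rotation
    return -angle, target_y                # rotate by -angle to level
```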
As an improvement of the above method, the step 3) specifically comprises:
step 3-1) calculating the actual height h1 of the current camera from the calibration relation of step T3):
h1 = e1h² + e2h + e3
wherein h is the pixel distance between the skyline and the calibration object in the corrected image;
step 3-2) calculating the distance between the obstacle and the ship by the following formula:
d2 = f·h1/(h2 + x)
wherein h2 is the pixel distance between the obstacle and the skyline in the corrected image, and d2 is the distance of the obstacle from the vessel;
step 3-3) calculating the scaling factor d2/f of the obstacle, and estimating the actual size of the obstacle from its pixel size and the scaling factor;
step 3-4) estimating the relative movement speed and heading of the obstacle with respect to the ship from the changes of the obstacle's actual angle and distance over consecutive multi-frame images, thereby obtaining the moving track of the obstacle; wherein the actual angle θ2 of the obstacle is:
θ2 = a·m2² + b·m2 + c
wherein m2 is the abscissa of the obstacle in the corrected image.
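The monocular ranging and sizing steps can be sketched as follows. Since the claim's formulas did not survive the source's typography intact, this sketch assumes the standard pinhole relation (the obstacle's pixel offset below the skyline satisfies h2 = f·h1/d − x, so d = f·h1/(h2 + x)) and the scaling factor d/f; the function names are assumptions.

```python
def camera_height(h_skyline_px, e1, e2, e3):
    """Step 3-1): current camera height from the quadratic calibrated
    in step T3), evaluated on the current skyline measurement."""
    return e1 * h_skyline_px**2 + e2 * h_skyline_px + e3

def obstacle_distance(h1, h2_px, f, x):
    """Step 3-2): range the obstacle from its pixel distance h2 below
    the skyline, assuming h2 = f*h1/d - x, i.e. d = f*h1/(h2 + x)."""
    return f * h1 / (h2_px + x)

def obstacle_size(pixel_size, d, f):
    """Step 3-3): actual size via the scaling factor d/f."""
    return pixel_size * d / f
```

For example, with f = 1000 px, x = 1 px and a 10 m camera height, an obstacle 19 px below the skyline ranges to 500 m.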
As an improvement of the above method, the specific process of judging whether there is a possibility of collision between the moving track of the obstacle and the ship route is as follows:
drawing the moving track of the obstacle and the ship route, and judging whether the two have an intersection point; if there is no intersection point, there is no possibility of collision between the obstacle and the ship; otherwise, the times at which the obstacle and the ship reach the intersection point are calculated respectively; if the difference between the two times is larger than a first threshold, the obstacle and the ship are not likely to collide, otherwise a collision is possible; the first threshold ranges from 10 to 15 minutes.
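The collision test of this claim can be sketched with constant-velocity tracks in the plane (coordinates in metres, speeds in m/s; the linear-track model and function name are this sketch's assumptions):

```python
def may_collide(p_obs, v_obs, p_ship, v_ship, threshold_s=600.0, eps=1e-9):
    """Return True if the obstacle track and the ship route may collide.

    Both movers follow constant-velocity straight tracks. We solve
    p_obs + t*v_obs = p_ship + s*v_ship for the crossing point; if the
    tracks cross ahead of both movers, the two arrival times are compared
    against the first threshold (600-900 s, i.e. 10-15 min in the claim).
    """
    (px, py), (vx, vy) = p_obs, v_obs
    (qx, qy), (wx, wy) = p_ship, v_ship
    det = vx * (-wy) - (-wx) * vy          # determinant of [v, -w]
    if abs(det) < eps:
        return False                        # parallel tracks: no crossing
    dx, dy = qx - px, qy - py
    t = (dx * (-wy) - (-wx) * dy) / det     # obstacle's time to crossing
    s = (vx * dy - vy * dx) / det           # ship's time to crossing
    if t < 0 or s < 0:
        return False                        # crossing lies behind a mover
    return abs(t - s) <= threshold_s
```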
As an improvement of the above method, the step 6) specifically includes:
adjusting the running speed of the ship so that the absolute value of the difference between the time at which the ship, before the adjustment, would reach the intersection point of the obstacle's moving track and the ship route, and the time at which the ship, after the adjustment, reaches that intersection point, is greater than a preset time;
if the required adjustment of the running speed of the ship is smaller than or equal to a second threshold, keeping the course unchanged, and sailing at the adjusted running speed; the second threshold ranges from 30% to 50% of the normal running speed;
if the required adjustment of the running speed exceeds the second threshold, keeping the original running speed and adjusting the ship's heading along the shorter tangent line from the ship's current position to the circle centered at the intersection point with a preset radius.
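The tangent-based heading adjustment can be sketched as follows (planar coordinates; since the claim does not spell out which of the two tangents gives the shortest path to the destination, both candidate headings are returned, which is this sketch's assumption):

```python
import math

def tangent_headings(ship, intersection, radius):
    """Bearings (radians, measured CCW from the +x axis) of the two
    tangent lines from the ship's position to the safety circle of the
    given radius around the predicted track intersection.

    Requires the ship to lie outside the circle; the caller then picks
    the tangent giving the shorter path toward its destination.
    """
    dx = intersection[0] - ship[0]
    dy = intersection[1] - ship[1]
    dist = math.hypot(dx, dy)
    if dist <= radius:
        raise ValueError("ship already inside the safety circle")
    bearing = math.atan2(dy, dx)            # bearing to the circle centre
    offset = math.asin(radius / dist)       # half-angle subtended by circle
    return bearing - offset, bearing + offset
```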
Compared with the prior art, the invention has the advantages that:
1. compared with obstacle avoidance based on binocular vision, the method monitors the environment with a 360-degree full-view camera system, giving a larger monitoring range and enabling 360-degree collision avoidance; meanwhile, the distance, size, direction and speed of dynamic obstacles such as ships are measured by an image recognition method, which is simpler and responds faster;
2. compared with the obstacle avoidance technology based on radar, the 360-degree full-view camera system can provide more environment and obstacle information, so that the obstacle judgment is more accurate;
3. compared with the obstacle avoidance technology based on multiple sensors, the obstacle avoidance method has the advantages that the hardware structure is simpler, and the cost is low;
4. compared with obstacle distance calculation based on binocular or multi-camera vision, the method realizes monocular distance calculation of the obstacle by using the skyline, with lower hardware complexity and cost.
Detailed Description
The technical solutions of the present invention are further described below with reference to the drawings and examples, but the embodiments of the present invention are not limited thereto.
The embodiment of the invention is applied to an unmanned ship capable of autonomous navigation: 1 camera is mounted in each of the front, back, left and right directions on the same plane at the highest position of the hull, and the combined fields of view of the 4 cameras cover the full 360-degree view angle around the ship. The embodiment implements the water surface target avoidance method based on image target recognition with this full-view camera system; the method specifically comprises the following steps, and its flow is shown in fig. 1:
step 1: and establishing a target detection model for identifying the water surface target image, and calibrating the 360-degree full-view camera system.
A target detection model for identifying water surface target images is built with the SSD (Single Shot MultiBox Detector) model in deep learning. The base network of the SSD model adopts a VGG16 (Visual Geometry Group) network: the first 5 groups of convolutional layers of VGG are retained, and fc6 (the 6th fully connected layer) and fc7 (the 7th fully connected layer) are converted into 2 convolutional layers with the atrous algorithm. In addition, 3 convolutional layers of different scales and 1 average pooling layer are added, and the different convolutional layers are used respectively to predict the offsets of the default boxes and the scores of the different categories. Finally, the detection result is obtained through a non-maximum suppression algorithm. In this embodiment, typical water surface target images under various weather and lighting conditions are collected first, and each image is labeled. The water surface target in each image has a corresponding label; each label is assigned to a specific output of the fixed set of detector outputs, the loss function is then computed end to end and back-propagated, the network parameters are adjusted by stochastic gradient descent, and the trained target detection model is finally obtained.
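The final fusion step mentioned above, non-maximum suppression, can be sketched in a few lines (a generic greedy NMS for illustration, not the SSD implementation's own code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box overlapping it by more than iou_threshold,
    and repeat. Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```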
The calibration of the 360-degree full-view camera system comprises the following steps:
step 1.1: the distortion of the imaging system is corrected.
In the above embodiment, distortion correction is performed on each camera, and then the images of the 4 cameras are spliced into a full view image.
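The splicing step can be sketched naively as a side-by-side composite. A deployed system would warp and blend the cameras' overlapping fields of view; this hedged sketch only illustrates the layout, and the function name is an assumption:

```python
import numpy as np

def stitch_panorama(front, right, back, left):
    """Naive 360-degree composite: place the four distortion-corrected
    camera images side by side in bearing order. All images must share
    the same height; overlap handling and blending are omitted."""
    views = (front, right, back, left)
    assert len({v.shape[0] for v in views}) == 1, "equal heights required"
    return np.hstack(views)
```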
Step 1.2: and calibrating the relation between the abscissa and the actual angle of the object in the image.
In the above embodiment, the origin of the abscissa of the full-view image is set first, and then a plurality of calibration objects with known angles relative to the unmanned ship are photographed, so as to obtain the calibration relation between the abscissa of an object in the image and its actual angle. The quadratic function θ = am² + bm + c is used to fit this relation, wherein θ is the actual angle, m is the abscissa of the object in the image, and estimates of the parameters a, b and c can be obtained by the least square method.
Step 1.3: and calibrating the relation between the height of the skyline in the image and the installation height of the camera system.
In the above embodiment, the skyline is photographed while the camera system is level, and the distance between the camera system and the sea level at that moment is measured, so as to obtain the calibration relation between the installation height of the camera system and the position of the photographed skyline in the image. The quadratic function h0 = e1h² + e2h + e3 is used to fit this relation, wherein h0 is the installation height of the camera, h is the pixel distance between the skyline and the calibration object in the image, and estimates of the parameters e1, e2 and e3 can be obtained by the least square method.
Step 1.4: the proportional relationship between the pixel distance and the actual distance is calibrated, and the parameters f and x of the camera are calculated by the following formulas:

f = h / [h0·(1/d1 − 1/d0)]
x = f·h0/d0

wherein f is the distance between the optical center of the camera and the imaging plane, x is the distance on the imaging plane between the skyline and the center of the imaging plane, h0 is the camera mounting height, d1 is the distance between the calibration object and the ship, h is the pixel distance between the skyline and the calibration object in the image, and d0 is the distance between the skyline and the ship, calculated by

d0 = √(2R·h0)

wherein R is the radius of the earth.
Step 2: when the ship sails, the 360-degree full-view-angle camera system arranged on the ship is used for collecting the images of the water surface around the ship and correcting the distortion and the inclination of the images.
The distortion correction for the image is the same as step 1.1. The tilt correction of the image is performed by rotating the image so that the skyline therein is kept horizontal, and making the ordinate of the rotated skyline the mean of the ordinates of the skyline before the rotation.
Step 3: the target detection model established in step 1 is used to identify the skyline in the image and whether an obstacle exists; if no obstacle exists in the image, go to step 5; otherwise, go to step 4.
And detecting the obstacle through the trained SSD network, inputting the shot image into the trained SSD network to obtain a detection result of whether the obstacle exists, and obtaining the position of the obstacle in the image when the obstacle exists.
Step 4: the distance and size of the obstacle are estimated using the skyline and the position of the obstacle in the image, and the traveling speed and direction of the obstacle are estimated using multiple frames of images.
The distance and size of the obstacle are estimated using the following method:
step 4.1: calculating the height h of the current camera according to the relation between the height of the skyline in the image and the installation height of the camera1:
Wherein,
the pixel distance between the obstacle and the skyline in the corrected image is obtained;
h1and (4) calculating the relationship between the height of the skyline in the image and the installation height of the camera system, which is calibrated in the step 1.3.
Step 4.2: calculating the distance to the obstacle using the following equation
Wherein
Is the distance of the obstacle from the vessel.
Step 4.3: calculating a scaling factor for an obstacle
And estimate the size of the obstacle.
And estimating the relative speed and the traveling direction of the obstacle and the unmanned ship by using the change of the angle and the distance of the obstacle in the continuous multi-frame images.
Step 5: if there is no obstacle, or the obstacle's moving track has no possibility of colliding with the ship route, then: if no obstacle avoidance operation has been performed, or the route has already been replanned after obstacle avoidance, keep the original running speed and direction; if the route has not been replanned after an obstacle avoidance operation, replan the route to the destination and sail along the new route. If the obstacle's moving track may collide with the ship route, adjust the running speed or heading of the ship so that a safe distance is kept between the ship and the obstacle.
The following method is used to judge whether the obstacle's moving track and the ship route may collide: the moving track of the obstacle is computed from its moving speed and direction, and it is judged whether the track and the ship route have an intersection point. If there is no intersection point, no collision is possible; otherwise, the times at which the obstacle and the ship reach the intersection point are calculated respectively; if the difference between the two times is larger than a preset value, no collision is possible, otherwise a collision is possible. The preset value ranges from 10 to 15 minutes.
When the obstacle's moving track may collide with the ship route, the running speed of the ship is adjusted first, so that the absolute value of the difference between the time at which the ship would reach the intersection point of the obstacle's moving track and the ship route before the adjustment and the time at which it reaches that intersection point after the adjustment is greater than a preset time;
if the adjustment quantity of the running speed of the ship is smaller than or equal to the preset value, keeping the course unchanged, and sailing the ship according to the adjusted running speed; the value range of the preset value is as follows: 30% -50% of the normal running speed;
if the required adjustment of the running speed exceeds the preset value, the original running speed is kept, and the ship's heading is adjusted along the shorter tangent line from the ship's current position to the circle centered at the intersection point with a preset radius.
Step 6: if the ship receives a command to stop sailing, it stops moving; otherwise, execution continues from step 2.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention as defined in the appended claims.