CN116310304A - Water area image segmentation method, training method of its segmentation model, and medium


Info

Publication number: CN116310304A
Application number: CN202211106216.7A
Authority: CN (China)
Prior art keywords: image, water area, pixel point, obstacle, water
Legal status: Pending
Other languages: Chinese (zh)
Inventors: Wang Yang (汪洋), Zhou Rundong (周润东), Gao Yulong (高玉龙)
Current assignee: Shenzhen Graduate School, Harbin Institute of Technology
Original assignee: Shenzhen Graduate School, Harbin Institute of Technology
Application filed by: Shenzhen Graduate School, Harbin Institute of Technology

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/40 Extraction of image or video features
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V 10/84 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

A training method of a water area image segmentation model comprises: acquiring a water area sample image; acquiring a label image corresponding to the water area sample image; obtaining the prediction probability that each pixel point in the water area sample image belongs to each classification result; obtaining the classification result of each pixel point in the label image; obtaining the obstacle distribution weight of each pixel point classified as obstacle according to the distance between that pixel point and its corresponding waterline position point; obtaining an obstacle-distribution weighted loss according to the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result; and adjusting parameters of the image segmentation model at least according to the obstacle-distribution weighted loss. Because the association between the distribution law of obstacles and the waterline is considered, the detection accuracy for small obstacles or blurred targets is improved. The invention also provides a water area image segmentation method and a storage medium.

Description

Water area image segmentation method, training method of its segmentation model, and medium
Technical Field
The invention relates to the technical field of image processing, in particular to a water area image segmentation method, a segmentation model training method and a medium.
Background
Autonomous water traffic is a future development trend: just as unmanned vehicles realize automatic driving on land, unmanned ships can realize it on water. For example, autonomous water traffic in a marine scene enables the automatic cruising of an unmanned ship, and marine scene segmentation, as one of the most basic perception technologies for unmanned ships, helps the unmanned ship grasp the navigable area and obstacle information, ensuring its safe obstacle avoidance and navigation.
Current marine scene segmentation methods are generally realized in two steps, water area detection and obstacle extraction: first the water area is extracted using edge features or a probability map model, and then non-water pixels within the water area constraint are extracted as obstacles. However, this approach is easily disturbed by the environment and cannot detect obstacles above the waterline. In response, marine scene segmentation methods based on deep learning have appeared; they learn rich features from large amounts of training data through convolution operations, making pixel-level marine scene segmentation possible. Nevertheless, deep-learning-based marine scene segmentation methods still have difficulty detecting distant small obstacles or blurred obstacles in a marine scene, which affects the unmanned ship's subsequent planning of the navigable area.
Therefore, there is much room for improvement in marine scene segmentation methods.
Disclosure of Invention
The invention mainly solves the technical problem of improving the detection accuracy for distant small obstacles and blurred obstacles when segmenting a water area image.
According to a first aspect, in one embodiment, a method for training a segmentation model of a water area image is provided, including:
acquiring a water area sample image;
acquiring a label image corresponding to the water area sample image, wherein the label image is an image in which each pixel point of the water area sample image is marked with a classification result, and the classification results at least comprise water and obstacle;
inputting the water area sample image into an image segmentation model, and obtaining the prediction probability that each pixel point in the water area sample image belongs to each classification result;
obtaining a classification result of each pixel point in the label image;
for any pixel point classified as obstacle in the water area of the label image, acquiring a corresponding waterline position point, and acquiring an obstacle distribution weight of the pixel point according to the distance between the pixel point and its corresponding waterline position point;
obtaining an obstacle-distribution weighted loss at least according to the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result;
and adjusting parameters of the image segmentation model at least according to the obstacle-distribution weighted loss until the image segmentation model converges, obtaining a trained image segmentation model.
In some embodiments, obtaining the corresponding waterline position point for any pixel point classified as obstacle in the water area of the label image includes:
for any pixel point classified as obstacle in the water area of the label image, acquiring the column of pixels where the pixel point is located;
and scanning the column of pixels from top to bottom past the pixel points not classified as water until a pixel point classified as water is obtained, taking that pixel point as the waterline position point corresponding to the pixel point, the corresponding waterline position point being located above the pixel point.
In some embodiments, obtaining the obstacle distribution weight of the pixel point according to the distance between the pixel point and the corresponding waterline position point includes:
acquiring a corresponding probability density function based on the corresponding waterline position point;
calculating the value of the probability density function corresponding to the pixel point according to the distance between the pixel point and the corresponding waterline position point;
and obtaining the obstacle distribution weight according to the value of the probability density function.
In some embodiments, the probability density function follows a Gaussian distribution, and the value of the probability density function corresponding to any pixel point is calculated by the following formula:

$$f(p_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{d_i^2}{2\sigma^2}\right), \qquad \mu = y_i^w$$

where $p_i(x_i, y_i)$ is any pixel point classified as obstacle in the water area of the label image, $d_i$ is the distance between the pixel point $p_i(x_i, y_i)$ and its corresponding waterline position point $p_i^w(x_i, y_i^w)$, $\sigma$ is the standard deviation, the mean $\mu$ is the row coordinate $y_i^w$ of the corresponding waterline position point, and $y_{all}$ is the total number of rows of pixels in the water area sample image.
In some embodiments, the standard deviation $\sigma$ is:

$$\sigma = \frac{y_{all}}{8}$$
in some embodiments, the training method of the water area image segmentation model further comprises:
obtaining the corresponding obstacle distribution weight of any pixel point classified as obstacle in the non-water area of the label image;
and obtaining the obstacle-distribution weighted loss at least according to the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, the obstacle distribution weight of each pixel point classified as obstacle in the non-water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result.
In some embodiments, the image segmentation model includes a convolutional neural network and a contextual prior layer, the method further comprising:
performing downsampling treatment on the water area sample image through the convolutional neural network layer to obtain a depth feature map of the water area sample image;
performing context feature extraction on the depth feature map of the water area sample image through the contextual prior layer to obtain a prior feature map of the water area sample image;
acquiring an ideal affinity map according to the label image;
determining an affinity loss according to the prior feature map and the ideal affinity map;
and adjusting parameters of the image segmentation model according to the obstacle-distribution weighted loss and the affinity loss.
In some embodiments, the image segmentation model includes a convolutional neural network and a detail head, the method further comprising:
performing downsampling treatment on the water area sample image through the convolutional neural network layer to obtain a detail feature map of the water area sample image;
performing Laplacian convolutions with strides of 1, 2, 4 and 8 on the label image to obtain 4 soft edge maps, upsampling part of the soft edge maps, and merging the 4 soft edge maps to obtain an edge map;
carrying out detail extraction on the detail feature map of the water area sample image through the detail head to obtain a detail map of the same size as the edge map;
determining a detail loss according to the edge map and the detail map;
and adjusting parameters of the image segmentation model according to the obstacle-distribution weighted loss and the detail loss.
According to a second aspect, in one embodiment there is provided a method of water image segmentation comprising:
acquiring a water area image to be segmented;
the image segmentation model trained based on the method of the first aspect is used for segmenting the water area image to be segmented, and the segmented water area image is obtained.
According to a third aspect, an embodiment provides a computer readable storage medium having stored thereon a program executable by a processor to implement the method according to the first or second aspect.
According to the training method of the water area image segmentation model, the image segmentation method and the medium of the above embodiments, for any pixel point classified as obstacle in the water area of the label image, the obstacle distribution weight of the pixel point is obtained according to the distance between that pixel point and its corresponding waterline position point. The obstacle-distribution weighted loss is then calculated from the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result. Finally, the parameters of the image segmentation model are adjusted according to the obstacle-distribution weighted loss to obtain the trained image segmentation model. Because the association between the obstacle distribution and the waterline is taken into account (the closer to the waterline, the more likely an obstacle appears in the water area, and in a marine scene positions near the waterline are usually in the distance), a pixel point classified as obstacle receives a larger obstacle distribution weight when its distance to the corresponding waterline position point is smaller, so that more sensitive obstacle detection is performed at that pixel point and the detection accuracy for small obstacles or blurred targets is improved.
Drawings
FIG. 1 is a flow chart of a training method of a water area image segmentation model according to an embodiment;
FIG. 2 is a flowchart of a training method of a water area image segmentation model according to another embodiment;
FIG. 3 is a flowchart of a training method of a segmentation model of a water area image according to yet another embodiment;
FIG. 4 is a block diagram of an image segmentation model of an embodiment;
FIG. 5 is a block diagram of the structure of a context prior layer of one embodiment;
FIG. 6 is a flowchart of obtaining an ideal affinity map in one embodiment;
FIG. 7 is a flowchart of obtaining the detail loss in one embodiment.
Detailed Description
The invention will be described in further detail below with reference to the drawings by means of specific embodiments, wherein like elements in different embodiments share associated reference numbers. In the following embodiments, numerous specific details are set forth to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of these features may be omitted in different situations, or be replaced by other elements, materials or methods. In some instances, certain operations related to the present application are not shown or described in the specification, to avoid obscuring the core of the present application; detailed description of these operations is unnecessary for one skilled in the art given the description herein and their general knowledge.
Furthermore, the described features, operations, or characteristics of the description may be combined in any suitable manner in various embodiments. Also, various steps or acts in the method descriptions may be interchanged or modified in a manner apparent to those of ordinary skill in the art. Thus, the various orders in the description and drawings are for clarity of description of only certain embodiments, and are not meant to be required orders unless otherwise indicated.
The numbering of the components itself, e.g. "first", "second", etc., is used herein merely to distinguish between the described objects and does not have any sequential or technical meaning. The terms "coupled" and "connected," as used herein, are intended to encompass both direct and indirect coupling (coupling), unless otherwise indicated.
Existing marine scene segmentation methods do not consider the particularity of water area environments, such as marine, inland or lake environments, in the open water where a vessel can travel; in particular, they do not consider the distribution law of obstacles such as ships on the water surface, which makes distant small obstacles in a marine scene difficult to detect. In addition, the climate in water area environments is harsher, and rain and fog are common, so current marine scene segmentation methods also have difficulty accurately detecting distant blurred obstacles.
The embodiments of the invention consider the association between the obstacle distribution law and the waterline: the closer to the waterline, the more likely an obstacle appears in the water area, and in a marine scene positions near the waterline are usually in the distance. Therefore, for any pixel point classified as obstacle in the water area of the label image, the obstacle distribution weight of the pixel point is obtained according to the distance between that pixel point and its corresponding waterline position point. The obstacle-distribution weighted loss is obtained from the classification result of each pixel point in the label image, the obstacle distribution weights of the pixel points classified as obstacle in the water area of the label image, and the prediction probability of each pixel point in the water area sample image, and finally the parameters of the image segmentation model are adjusted according to this loss. In the obstacle-distribution weighted loss, pixel points in the water area that are classified as obstacle and close to a waterline position point are given larger obstacle distribution weights, improving the detection accuracy for distant small obstacles and blurred obstacles in water area environments.
In some embodiments, a training method of a water area image segmentation model is provided: a water area sample image is input into the image segmentation model, and the parameters of the image segmentation model are then adjusted according to the label image corresponding to the water area sample image, completing the training of the image segmentation model. The water area sample image may show a water area of a marine environment or of an inland environment, and the trained image segmentation model can segment corresponding water area images, distinguishing the parts belonging to water from the parts that are non-water. This enables detection of collision-free water areas in water area images, helps an unmanned ship grasp the navigable area and obstacle information, and ensures safe obstacle avoidance and navigation of the unmanned ship.
Referring to fig. 1, the following specifically describes a training method of a water area image segmentation model:
step 100: and acquiring a water area sample image.
Step 200: and acquiring a label image corresponding to the water area sample image, wherein the label image is an image of each pixel point on the water area sample image marked with a classification result, and the classification result at least comprises water and barriers.
In some embodiments, the water area sample image is input into the image segmentation model to train it. The water area sample image may be a water area image of a marine environment or of an inland environment, and typically includes a part that is water and a part that is non-water.
In some embodiments, the label image may also be referred to as a truth label or ground truth. The label image is obtained by annotating the water area sample image with true values, i.e., labeling each pixel point in the water area sample image with its category. For example, in a water area image, each pixel point is labeled as belonging to either the water category or a non-water category; objects such as aquatic weeds, buoys, piers and other vessels appearing in the water surface area are labeled as non-water. In some embodiments, the non-water categories may include obstacle and sky: for example, objects appearing in the waterline region may be labeled as obstacle, while regions that are neither water nor obstacle may be labeled as sky.
In some embodiments, the water area sample image may be annotated manually. Alternatively, an already labeled label image may be obtained directly.
Step 300: inputting the water area sample image into an image segmentation model, and obtaining the prediction probability that each pixel point in the water area sample image belongs to each classification result.
Step 400: obtaining the classification result of each pixel point in the label image.
In some embodiments, to improve the accuracy of the per-pixel classification results when segmenting the water area sample image, the classification result labeled for each pixel point in the label image (the true value) is obtained, and the image segmentation model outputs, for each corresponding pixel point, the prediction probability of each classification result. The true value and the prediction probability of a pixel point's classification can be used to calculate the classification loss of the image segmentation model, so that the model learns to classify each pixel point in the water area sample image accurately.
Step 500: for any pixel point classified as obstacle in the water area of the label image, obtaining the corresponding waterline position point, and obtaining the obstacle distribution weight of the pixel point according to the distance between the pixel point and its corresponding waterline position point.
In some embodiments, because the distribution of obstacles on the water surface in a water area image is not without regularity, setting the same weight for the pixel points of every classification result when calculating the classification loss leads to insufficient segmentation accuracy in regions with a high probability of obstacles, and small obstacles or blurred targets are easily missed. When studying the distribution law of obstacles on the water surface, it is observed that the closer a region is to the waterline, i.e., the smaller the distance between a pixel point of the region and its corresponding waterline position point, the more likely an obstacle is to appear. Therefore, for any pixel point classified as obstacle in the water area of the label image, the obstacle distribution weight of the pixel point is obtained according to the distance between the pixel point and its corresponding waterline position point. In some embodiments, considering the particularity of marine or inland environments, especially the distribution law of obstacles such as ships, the detection strength needs to be increased in regions with a high probability of obstacles: the smaller the distance between a pixel point and its corresponding waterline position point, the larger the obstacle distribution weight of the pixel point, so that more sensitive obstacle detection is performed at that pixel point and the detection accuracy for small obstacles or blurred targets is improved.
In some embodiments, the classification result labeled for each pixel point in the label image is obtained first, and the waterline in the label image, i.e., the boundary of the water area in the water area image, is then obtained from the pixel points classified as water. The pixel point of the waterline that lies in the same column of pixels as a given pixel point is the waterline position point corresponding to that pixel point; the distance between the pixel point and its corresponding waterline position point is therefore the straight-line distance between the two points.
Step 600: obtaining an obstacle-distribution weighted loss at least according to the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result.
Step 700: adjusting parameters of the image segmentation model at least according to the obstacle-distribution weighted loss until the image segmentation model converges, obtaining a trained image segmentation model.
In some embodiments, a classification loss function of the image segmentation model is designed first, and the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image is assigned to the corresponding pixel point's term in the classification loss function. The obstacle-distribution weighted loss of the image segmentation model is then obtained from the classification result of each pixel point in the label image and the prediction probability of each pixel point of the water area sample image belonging to each classification result. Since the calculation of the loss function has no influence on the inference speed of the image segmentation model, the parameters of the model are adjusted at least according to the obstacle-distribution weighted loss, and convergence can be determined through this loss; for example, with batch training, the image segmentation model may be considered converged when the obstacle-distribution weighted loss no longer decreases, yielding the trained image segmentation model.
In this embodiment, before the classification loss of the image segmentation model is calculated, each pixel point classified as obstacle in the water area of the label image is obtained according to the distribution law of obstacles on the water surface, and the obstacle distribution weight of the pixel point is obtained according to the distance between that pixel point and its corresponding waterline position point. The obstacle-distribution weighted loss is then calculated from the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result. Since a smaller distance between a pixel point and its corresponding waterline position point yields a larger obstacle distribution weight, more sensitive obstacle detection is performed at such pixel points, improving the detection accuracy for small obstacles or blurred targets.
Building on the above, this embodiment obtains the obstacle distribution weight from the correlation between obstacles and the waterline by calculating the distance between a pixel point classified as obstacle in the water area and the waterline position point: the closer a pixel point is to the waterline, the larger the weight for the classification result obstacle, ensuring accurate obstacle detection. In some embodiments, the same rule also applies to the classification result water: an analogous weight can be set for the classification result water, so that the closer a water pixel point is to the waterline position point, the larger its weight, ensuring that both obstacles and water are detected accurately. That is, for any pixel point classified as water in the label image, its corresponding waterline position point is acquired, and the weight of the pixel point is obtained according to the distance between the pixel point and the corresponding waterline position point. The obstacle-distribution weighted loss is then obtained from the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, the weight of each pixel point classified as water, and the prediction probability of each pixel point of the water area sample image belonging to each classification result.
Referring to fig. 2, in some embodiments, the corresponding waterline position point is obtained for any pixel point classified as obstacle in the water area of the label image by the following method:
Step 510: acquiring the column of pixels where any pixel point classified as obstacle in the water area of the label image is located.
Step 520: scanning the column of pixels from top to bottom past the pixel points not classified as water until a pixel point classified as water is obtained, and taking that pixel point as the waterline position point corresponding to the pixel point; the corresponding waterline position point lies above the pixel point.
In some embodiments, any water area image is considered to be divided by the waterline into an upper part and a lower part, where the waterline and everything below it belong to the water area and the upper part is the non-water area. Therefore, for any pixel point classified as obstacle in the water area of the label image, the column of pixels is acquired first, and the pixel points not classified as water in that column are passed over from top to bottom; since the scan proceeds from the top, the first pixel points obtained are non-water. In some embodiments, if the water area image is rotated, the acquired column of pixels may become a row of pixels, the pixel points not classified as water may be passed over from bottom to top, from left to right, or from right to left, and the corresponding waterline position point may accordingly be located below, to the right of, or to the left of the pixel point. A minimal sketch of the top-to-bottom column scan is given below.
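For illustration only, the column scan can be sketched as follows. This is a minimal NumPy sketch rather than the patent's implementation; the class-id convention (water = 0, obstacle = 1, sky = 2) and all function names are assumptions of the sketch:

```python
import numpy as np

WATER, OBSTACLE, SKY = 0, 1, 2  # assumed class ids, not specified by the patent


def waterline_rows(label: np.ndarray) -> np.ndarray:
    """For each column of an (H, W) label image, scan from top to bottom past
    the pixels not classified as water and return the row index of the first
    water pixel, i.e. the waterline position point of that column."""
    h, w = label.shape
    rows = np.full(w, h - 1, dtype=np.int64)  # fallback if a column has no water
    for col in range(w):
        water = np.nonzero(label[:, col] == WATER)[0]
        if water.size > 0:
            rows[col] = water[0]
    return rows
```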
Referring to fig. 3, in some embodiments, when obtaining the obstacle distribution weight of the pixel according to the distance between the arbitrary pixel and the corresponding waterline position point, the specific method includes:
step 530: and acquiring a corresponding probability density function based on the corresponding waterline position point.
Step 540: and calculating the value of the probability density function corresponding to any pixel point according to the distance between the any pixel point and the corresponding waterline position point.
Step 550: and distributing the weight of the barrier according to the value of the probability density function.
In some embodiments, the probability density function corresponding to a waterline position point is designed based on that waterline position point, and characterizes the probability that an obstacle appears at each pixel point of the corresponding column. For a region closer to the waterline, i.e., the smaller the distance between a pixel point of the region and its corresponding waterline position point, the more likely an obstacle is to appear, so the value of the corresponding probability density function is larger; the corresponding obstacle distribution weight is finally obtained from the value of the probability density function and used in calculating the loss. In some embodiments, the probability density function may be derived from data statistics.
In some embodiments, the probability density function follows a Gaussian distribution and is obtained by the following formula:

$$f(p_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{d_i^2}{2\sigma^2}\right)$$

where $p_i(x_i, y_i)$ is any pixel point classified as obstacle in the water area of the label image, $d_i$ is the distance between the pixel point $p_i(x_i, y_i)$ and its corresponding waterline position point $p_i^w(x_i, y_i^w)$, $\sigma$ is the standard deviation, and the mean $\mu$ is the row coordinate $y_i^w$ of the corresponding waterline position point.
In some embodiments, the standard deviation $\sigma$ is:

$$\sigma = \frac{y_{all}}{8}$$

where $y_{all}$ is the total number of rows of pixels in the water area sample image.
As the above expression shows, the positions of the obstacle distribution follow a Gaussian distribution centered on the waterline position points, so the mean of the probability density function is the row coordinate of the corresponding waterline position point. Because the value of the probability density function is obtained relative to the waterline position points, the function changes dynamically with the waterline.
In this embodiment, the row coordinates of pixel points in the label image and the water area sample image are counted from top to bottom, i.e., the topmost row of pixel points is the first row. The standard deviation of the probability density function then reflects that most obstacles appear between the waterline and roughly 1/8 of the image height below it, which is the value chosen for the standard deviation. The standard deviation is set mainly according to statistics over the collected data set, and its value can be adapted to the specific application scenario.
In some embodiments, when calculating the obstacle-distribution weighted loss, only the obstacle distribution weights of the pixel points classified as obstacle in the water area of the label image are needed. One may therefore first gather all pixel points classified as obstacle in the water area and then compute the obstacle distribution weight of each gathered pixel point for the weighted loss. Alternatively, one may first obtain the probability density function corresponding to each waterline position point and the obstacle distribution weight of every pixel point, and then keep only the pixel points classified as obstacle in the water area for calculating the obstacle-distribution weighted loss.
In some embodiments, a corresponding obstacle distribution weight is also obtained for any pixel point classified as obstacle in the non-water area of the label image. The obstacle-distribution weighted loss is then obtained at least from the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, the obstacle distribution weight of each pixel point classified as obstacle in the non-water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result.
In this embodiment, obstacle distribution weights are obtained separately for pixel points classified as obstacle in the water area and in the non-water area of the label image. The weight of a pixel point in the water area is related to its distance to the corresponding waterline position point, improving the detection accuracy for small obstacles or blurred targets; the weight of a pixel point in the non-water area may be a constant value, or a value related to the waterline position point corresponding to that pixel point, improving the detection accuracy for obstacles above the waterline.
In some embodiments, the distance $d_i$ between any pixel point $p_i(x_i, y_i)$ and its corresponding waterline position point $p_i^w(x_i, y_i^w)$ is calculated by the following formula:

$$d_i = \begin{cases} 0, & y_i \le y_i^w \\ y_i - y_i^w, & y_i > y_i^w \end{cases}$$

As this formula shows, when a pixel point is above or on the waterline, its distance $d_i$ to the corresponding waterline position point is 0 and the corresponding probability density function reaches its maximum; when a pixel point is below the waterline, $d_i$ is the row-coordinate difference between the two points, which is smaller the closer the pixel point is to the waterline position point, making the corresponding probability density value larger. Obstacles near and above the waterline are therefore detected more easily. A vectorized sketch of this distance follows.
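Continuing the sketch from above, the piecewise distance $d_i$ can be computed for the whole image at once; the vectorized form and the reuse of the hypothetical waterline_rows helper are assumptions of this sketch, not the patent's code:

```python
import numpy as np


def distance_to_waterline(label: np.ndarray) -> np.ndarray:
    """Per-pixel distance d_i: 0 on or above the waterline, and the
    row-coordinate difference y_i - y_i^w below it."""
    h, w = label.shape
    y = np.arange(h, dtype=np.int64)[:, None]   # row coordinate y_i of every pixel
    y_w = waterline_rows(label)[None, :]        # per-column waterline row y_i^w (earlier sketch)
    return np.maximum(y - y_w, 0).astype(np.float64)
```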
In some embodiments, the obstacle-distribution weighted loss is calculated from the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point classified as obstacle in the water area of the label image, and the prediction probability of each pixel point of the water area sample image belonging to each classification result, by the following formula:

$$\mathcal{L}_{obs} = -\sum_i \left( y_{iw}\log p_{iw} + w_{io}\, y_{io}\log p_{io} + y_{is}\log p_{is} \right)$$

where $y_{iw}$, $y_{io}$ and $y_{is}$ are the label values of the i-th pixel point in the label image for water, obstacle and sky respectively, $p_{iw}$, $p_{io}$ and $p_{is}$ are the prediction probabilities of the image segmentation model that the i-th pixel point of the water area sample image is water, obstacle and sky respectively, and $w_{io}$ is the obstacle distribution weight.
In some embodiments, the obstacle distribution weight is the value of the probability density function normalized to its peak:

$$w_{io} = \sqrt{2\pi}\,\sigma\, f(p_i) = \exp\!\left(-\frac{d_i^2}{2\sigma^2}\right)$$
as can be seen from the above formula for calculating the obstacle distribution weighted loss, when the ith pixel is an obstacle, y iw And y is 0, and y io 1, at this time, the ith pixel point belongs to the classification loss of the obstacle as the classification result, and the obstacle distribution weight is given, when the ith pixel point is closer to the corresponding waterline position point, the obstacle distribution weight is larger, so that more sensitive obstacle detection can be performed on the ith pixel point, and the detection precision of the small obstacle or the fuzzy target is improved.
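As a sketch of how such a weighted classification loss might look in PyTorch (the Gaussian weight $w_{io} = \exp(-d_i^2/(2\sigma^2))$ follows the reconstruction above and, like the class-id convention, is an assumption of this sketch, not the patent's verified formula):

```python
import torch
import torch.nn.functional as F

OBSTACLE = 1  # assumed obstacle class id, matching the earlier sketch


def obstacle_weighted_loss(logits: torch.Tensor, label: torch.Tensor,
                           dist: torch.Tensor, sigma: float) -> torch.Tensor:
    """logits: (N, 3, H, W) scores for water/obstacle/sky; label: (N, H, W)
    int64 class ids; dist: (N, H, W) distance d_i to the waterline point."""
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, label, reduction="none")   # per-pixel cross entropy
    w = torch.exp(-dist ** 2 / (2 * sigma ** 2))      # assumed Gaussian weight
    weight = torch.where(label == OBSTACLE, w, torch.ones_like(w))
    return (weight * ce).mean()
```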
Referring to fig. 4, the framework and principle of the image segmentation model are illustrated, and detailed description is given below.
In some embodiments, the image segmentation model includes a deep convolutional neural network layer, a detail head, and a context prior layer. The depth convolution neural network layer is used for extracting depth features from the input water area sample image and generating a detail feature map and a depth feature map. The detail head takes a detail characteristic diagram generated by the deep convolutional neural network layer as input, and learns the detail characteristics of the classification edge by using a detail loss guide model. The context prior layer takes a depth feature map generated by the depth convolution neural network layer as input, learns context relations among features by using an affinity loss guide model, and generates a context relation map. The image segmentation model is specifically described below:
In some embodiments, the framework adopted for the deep neural network layer may be a lightweight MobileNetV2 model, which helps ensure real-time performance. It is understood that the framework is not limited to the MobileNetV2 model; other model frameworks for deep feature extraction are also applicable to the method proposed in this embodiment.
Referring to fig. 4 again, in some embodiments, the deep neural network layer proposed in this embodiment includes a two-dimensional convolution and bottleneck layers 1 to 7. When downsampling the water area sample image, the downsampling operations are placed at the very front of the model; for example, three consecutive 2× downsampling operations, i.e., 8× downsampling in total, can be performed at the initial stage of the network. This reduces the loss of detail features that more downsampling in subsequent bottleneck layers would cause, and provides a receptive field large enough for the context prior layer to capture intra-class and inter-class context features. In some embodiments, however, the sampling rate and the position of the downsampling are not limited to this; for example, 16× or 32× downsampling may be used, and the downsampling position may be placed at bottleneck layer 5 or bottleneck layer 7.
Referring to fig. 5, in some embodiments, the context prior layer takes as input the depth feature map generated by the deep convolutional neural network layer and generates the context prior feature map after a set of fully separable convolutions and 1×1 convolutions. From the context prior feature map and the depth feature map, which is passed through a fully separable convolution, an intra-class context feature map and an inter-class context feature map are obtained. Cascading the intra-class context feature map, the inter-class context feature map and the depth feature map on the channel dimension yields the output of the context prior layer, i.e., the context association feature map. The context prior feature map directly reflects intra-class similarity and inter-class difference of features, and the context association feature map, as the final output of the context prior layer, acts on the original feature map. The context association feature map is divided into the intra-class context association map $P_{intra}$ and the inter-class context association map $P_{inter}$, calculated as follows:

$$P_{intra} = P X$$
$$P_{inter} = (1 - P) X$$

where P is the context prior feature map and X is the intermediate feature map obtained by fully separable convolution of the original depth feature map within the context prior layer; a dimension-adjustment operation is performed before the matrix multiplication. Cascading the intra-class context feature map and the inter-class context feature map on the channel dimension gives the final context association feature map, as the sketch below illustrates.
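A rough PyTorch sketch of this aggregation step only (the fully separable convolutions that produce P and X are abstracted away; the shapes and names are assumptions of the sketch):

```python
import torch


def context_association(P: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
    """P: (N, n, n) context prior feature map with n = h*w; X: (N, C, h, w)
    intermediate feature map. Returns (N, 2C, h, w): the intra-class and
    inter-class context maps cascaded on the channel dimension."""
    n_batch, c, h, w = X.shape
    x_flat = X.flatten(2).transpose(1, 2)        # dimension adjustment: (N, n, C)
    p_intra = torch.bmm(P, x_flat)               # P_intra = P X
    p_inter = torch.bmm(1.0 - P, x_flat)         # P_inter = (1 - P) X
    out = torch.cat([p_intra, p_inter], dim=2)   # cascade: (N, n, 2C)
    return out.transpose(1, 2).reshape(n_batch, 2 * c, h, w)
```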
Referring to fig. 6, the context prior feature map is supervised by an ideal affinity map calculated from the label image as follows:
Step A: downsample the label image to obtain a matrix G of the same size (h × w) as the context prior feature map, where h and w are the height and width respectively.
Step B: one-hot encode the three classification results water (0), obstacle (1) and sky (2) according to the label image, and splice them into a matrix M of size n × 3, where n = h × w. The three columns of M correspond to the binary labels of water, obstacle and sky respectively.
Step C: calculate the ideal affinity map A by the following formula:

$$A = M M^T$$

It follows that each element $a_{ij} \in A$ ($i \in [1, n]$, $j \in [1, n]$) of the ideal affinity map reflects the relation between the i-th element $g_i$ and the j-th element $g_j$ of the matrix G, namely:

$$a_{ij} = \begin{cases} 1, & g_i = g_j \\ 0, & g_i \neq g_j \end{cases}$$
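Steps A to C can be sketched as a hypothetical PyTorch fragment; the three-class convention matches the earlier sketches:

```python
import torch
import torch.nn.functional as F


def ideal_affinity_map(label: torch.Tensor, size: tuple) -> torch.Tensor:
    """label: (N, H, W) int64 ids for water (0), obstacle (1), sky (2);
    size: (h, w) of the context prior feature map. Returns A = M M^T, (N, n, n)."""
    g = F.interpolate(label[:, None].float(), size=size, mode="nearest")  # step A
    m = F.one_hot(g.flatten(1).long(), num_classes=3).float()             # step B: (N, n, 3)
    return torch.bmm(m, m.transpose(1, 2))                                # step C
```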
From the above, the context prior feature map P supervised by the ideal affinity map necessarily also contains a large amount of context information about intra-class similarity and inter-class difference. Specifically, the ideal affinity loss $\mathcal{L}_{aff}$ used for supervision is calculated as:

$$\mathcal{L}_{aff} = \mathcal{L}_u + \mathcal{L}_g$$

where $\mathcal{L}_u$ and $\mathcal{L}_g$ denote the unary term and the global term of the ideal affinity loss. The unary term $\mathcal{L}_u$ is the binary cross-entropy loss between the context prior feature map and the ideal affinity map, obtained by the following formula:

$$\mathcal{L}_u = -\frac{1}{n^2} \sum_{k=1}^{n^2} \big( a_k \log p_k + (1 - a_k)\log(1 - p_k) \big)$$

where $p_k \in P$ and $a_k \in A$ ($k \in [1, n^2]$) denote the k-th elements of the context prior feature map P and the ideal affinity map A, respectively.
The global term $\mathcal{L}_g$ represents a set of binary cross-entropy losses covering intra-class precision, intra-class recall and inter-class recall, and is obtained by the following formula:

$$\mathcal{L}_g = -\frac{1}{n} \sum_{j=1}^{n} \left( \log \frac{\sum_i a_{ij} p_{ij}}{\sum_i p_{ij}} + \log \frac{\sum_i a_{ij} p_{ij}}{\sum_i a_{ij}} + \log \frac{\sum_i (1 - a_{ij})(1 - p_{ij})}{\sum_i (1 - a_{ij})} \right)$$

where $p_{ij}$ and $a_{ij}$ ($i \in [1, n]$, $j \in [1, n]$) are the values at row i, column j of the context prior feature map and the ideal affinity map, respectively.
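Put together, a sketch of the affinity loss; the written-out global term follows the precision/recall/specificity reading given above, which is this sketch's assumption:

```python
import torch


def affinity_loss(P: torch.Tensor, A: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """P: (N, n, n) context prior feature map in (0, 1); A: (N, n, n) ideal affinity map."""
    # unary term: element-wise binary cross entropy between P and A
    l_u = -(A * torch.log(P + eps) + (1 - A) * torch.log(1 - P + eps)).mean()
    # global term: intra-class precision, intra-class recall, inter-class recall
    tp = (A * P).sum(dim=1)
    precision = tp / (P.sum(dim=1) + eps)
    recall = tp / (A.sum(dim=1) + eps)
    tn = ((1 - A) * (1 - P)).sum(dim=1)
    specificity = tn / ((1 - A).sum(dim=1) + eps)
    l_g = -(torch.log(precision + eps) + torch.log(recall + eps)
            + torch.log(specificity + eps)).mean()
    return l_u + l_g
```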
Referring to fig. 7, in some embodiments, before the detail loss is calculated, Laplacian convolutions with different strides are performed on the label image to obtain 4 soft edge maps, where the Laplacian convolution kernel is:

$$\begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}$$
The Laplacian convolution thus plays an edge-enhancement role, and the strides of the Laplacian convolutions corresponding to the 4 soft edge maps are 1, 2, 4 and 8 respectively.
The soft edge maps produced with strides 8, 4 and 2 are then upsampled by 8×, 4× and 2× respectively; once the 4 soft edge maps have been adjusted to a consistent size, they are merged to generate the final edge map. In this embodiment, the strides are 1, 2, 4 and 8; the stride of 8 in particular, on the one hand, keeps the scale of the edge map consistent with that of the detail feature map and, on the other hand, helps the model learn edge information over a large receptive field, improving contour perception. A large receptive field and a stride matching the depth feature map also keep the features learned by the segmentation model consistent, and an edge map with a large receptive field helps the model learn complete contour information rather than merely fine edge details. A sketch of this edge-map generation follows.
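A sketch of the edge-map generation; the 8-neighbour Laplacian kernel and the max-fusion of the four soft edge maps are assumptions of this sketch:

```python
import torch
import torch.nn.functional as F

# assumed 8-neighbour Laplacian kernel for edge enhancement
LAPLACIAN = torch.tensor([[-1., -1., -1.],
                          [-1.,  8., -1.],
                          [-1., -1., -1.]]).view(1, 1, 3, 3)


def edge_map(label: torch.Tensor) -> torch.Tensor:
    """label: (N, 1, H, W) float class map. Returns the fused edge map (N, 1, H, W)."""
    maps = []
    for stride in (1, 2, 4, 8):
        e = F.conv2d(label, LAPLACIAN, stride=stride, padding=1)  # strided Laplacian
        if stride > 1:                                            # upsample back to full size
            e = F.interpolate(e, size=label.shape[2:], mode="bilinear",
                              align_corners=False)
        maps.append(torch.clamp(e, 0.0, 1.0))                     # soft edge map
    return torch.stack(maps, dim=0).max(dim=0).values             # merge the 4 maps
```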
In some embodiments, the edge map is used to supervise the detail map generated by the detail head. The detail head takes as input the detail feature map generated by the deep convolutional neural network layer; after a 3×3 convolution, a batch normalization layer, a ReLU activation layer and a 1×1 convolution, the result is upsampled by 8× to generate a detail map of the same size as the edge map. The detail loss $\mathcal{L}_{detail}$ is calculated as the binary cross-entropy loss between the edge map and the detail map, obtained by the following formula:

$$\mathcal{L}_{detail} = -\frac{1}{m} \sum_{k=1}^{m} \big( e_k \log d_k + (1 - e_k)\log(1 - d_k) \big)$$

where $e_k$ and $d_k$ denote corresponding pixel values in the edge map and the detail map, $k \in [1, m]$, and m is the total number of pixels of the edge map.
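A corresponding sketch of the detail head and the detail loss; the layer order follows the description above, while the channel width is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetailHead(nn.Module):
    """3x3 conv -> batch norm -> ReLU -> 1x1 conv, then 8x upsampling,
    producing a detail map of the same size as the edge map."""

    def __init__(self, in_ch: int, mid_ch: int = 64):  # mid_ch is assumed
        super().__init__()
        self.conv = nn.Conv2d(in_ch, mid_ch, 3, padding=1)
        self.bn = nn.BatchNorm2d(mid_ch)
        self.out = nn.Conv2d(mid_ch, 1, 1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        d = self.out(F.relu(self.bn(self.conv(feat))))
        d = F.interpolate(d, scale_factor=8, mode="bilinear", align_corners=False)
        return torch.sigmoid(d)


def detail_loss(detail: torch.Tensor, edge: torch.Tensor) -> torch.Tensor:
    # binary cross entropy between the detail map and the edge map
    return F.binary_cross_entropy(detail, edge)
```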
In the above embodiments, the obstacle-distribution weighted loss $\mathcal{L}_{obs}$, the detail loss $\mathcal{L}_{detail}$ and the affinity loss $\mathcal{L}_{aff}$ are calculated separately. The obstacle-distribution weighted loss corresponds to the final segmentation result, the detail loss supervises the detail feature map, and the ideal affinity loss supervises the context prior map output in the middle of the context prior layer. Since the loss functions have no influence on the inference speed of the model, more elaborate loss functions can be used to improve the performance of the image segmentation model. In this embodiment, the loss function of the image segmentation model consists of these three parts, and the final loss $\mathcal{L}$ is their sum:

$$\mathcal{L} = \mathcal{L}_{obs} + \mathcal{L}_{detail} + \mathcal{L}_{aff}$$

The parameters of the image segmentation model are then adjusted according to the loss function $\mathcal{L}$ until the image segmentation model converges, yielding the trained image segmentation model. In the segmentation result prediction stage, the detail feature map, the depth feature map and the context association feature map are cascaded on the channel dimension, upsampled with bilinear interpolation to predict the class of each pixel point, and a segmentation result of the same size as the water area sample image is finally generated.
Compared with other marine scene segmentation algorithms, this embodiment designs three effective loss functions on top of a lightweight model framework, targeting the particular characteristics of marine or lake environments and water traffic. This improves the segmentation accuracy of the model while preserving real-time performance, providing timely and accurate environment information for the navigation and obstacle avoidance of unmanned ships. The obstacle-distribution weighted loss guides obstacle detection, the detail loss optimizes contour details, and the ideal affinity loss learns varying texture features.
As the above embodiments show, the obstacle-distribution weighted loss dynamically adjusts the per-pixel weight of the classification loss according to the distribution law of obstacles in marine or inland scenes, improving the detection accuracy for obstacles, especially distant small obstacles and blurred obstacles.
As the above embodiments also show, the lightweight model designed here does not rely on a decoder for detail optimization; instead, a detail loss is designed from the edge features of the ground truth (the label values) in the label image and used to guide the model to learn the detail information of each classification boundary, ensuring more accurate detail segmentation while avoiding the extra computational overhead of a decoder.
In generating the edge map for the detail loss, Laplacian convolutions with strides 1, 2, 4 and 8 are designed, avoiding the degradation of detail perception that inconsistent scales of the detail feature map and the edge map could cause. The stride-8 Laplacian convolution in particular keeps the scales of the detail feature map and the edge map consistent, helps the model learn edge information over a large receptive field, and improves contour perception.
In some embodiments, a water area image segmentation method is provided, comprising:
Step 1): acquiring a water area image to be segmented.
Step 2): segmenting the water area image to be segmented based on an image segmentation model trained by the method of any of the above embodiments, obtaining a segmented water area image.
Some embodiments provide a computer readable storage medium having a program stored thereon, the program being executable by a processor to implement the method described in the above embodiments.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions are implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random-access memory, a magnetic disk, an optical disc, a hard disk and the like; the program is executed by a computer to realize the above functions. For example, the program is stored in the memory of a device, and all or part of the above functions are realized when the program in the memory is executed by a processor. The program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash disk or a removable hard disk, and downloaded or copied into the memory of a local device, or used to update the version of the local device's system; the above functions are then realized when the program in the memory is executed by a processor.
The foregoing description of the invention has been presented for purposes of illustration and description and is not intended to be limiting. Persons skilled in the art to which the invention pertains may further make simple deductions, modifications or substitutions based on the idea of the invention.

Claims (10)

1. A training method for a water area image segmentation model, characterized by comprising the following steps:
acquiring a water area sample image;
acquiring a label image corresponding to the water area sample image, wherein the label image is an image in which each pixel point of the water area sample image is marked with a classification result, and the classification results at least comprise water and obstacle;
inputting the water area sample image into an image segmentation model, and obtaining the prediction probability that each pixel point in the water area sample image belongs to each classification result;
obtaining a classification result of each pixel point in the label image;
for any pixel point within the water area of the label image whose classification result belongs to obstacle, acquiring its corresponding waterline position point, and obtaining the obstacle distribution weight of the pixel point according to the distance between the pixel point and its corresponding waterline position point;
obtaining an obstacle distribution weighted loss at least according to the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point within the water area of the label image whose classification result belongs to obstacle, and the prediction probability of each pixel point of the water area sample image belonging to each classification result;
and adjusting parameters of the image segmentation model at least according to the obstacle distribution weighted loss until the image segmentation model converges, so as to obtain a trained image segmentation model.
2. The training method for a water area image segmentation model according to claim 1, wherein acquiring the corresponding waterline position point for any pixel point within the water area of the label image whose classification result belongs to obstacle comprises:
for any pixel point within the water area of the label image whose classification result belongs to obstacle, acquiring the column of pixels in which the pixel point is located;
and traversing the column of pixels from top to bottom, passing over pixel points whose classification result is not water, until a pixel point whose classification result is water is reached, and taking that pixel point as the waterline position point corresponding to the pixel point, the corresponding waterline position point being located above the pixel point.
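By way of non-limiting illustration, the column scan of claim 2 may be sketched as follows; the class ids and the array layout (row 0 at the top of the image) are assumptions.

```python
import numpy as np

WATER = 0  # assumed class id for water

def waterline_point(label: np.ndarray, r: int, c: int):
    """label: (H, W) array of class ids; (r, c): an obstacle pixel inside
    the water region. Scans column c from top to bottom, passing over
    non-water pixels, and returns the first water pixel as the waterline
    position point; for an obstacle inside the water this point lies
    above row r."""
    for row in range(label.shape[0]):
        if label[row, c] == WATER:
            return (row, c)
    return None  # the column contains no water pixel
```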
3. The training method for a water area image segmentation model according to claim 1, wherein obtaining the obstacle distribution weight of the pixel point according to the distance between the pixel point and its corresponding waterline position point comprises:
acquiring a corresponding probability density function based on the corresponding waterline position point;
calculating the value of the probability density function corresponding to the pixel point according to the distance between the pixel point and its corresponding waterline position point;
and obtaining the obstacle distribution weight according to the value of the probability density function.
4. The training method for a water area image segmentation model according to claim 3, wherein the probability density function is a Gaussian distribution, and the value of the probability density function corresponding to the pixel point is calculated by the following formula:

$$f(d_i)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(d_i-\mu)^2}{2\sigma^2}\right)$$

wherein $p_i(x_i, y_i)$ are the coordinates of any pixel point within the water area of the label image whose classification result belongs to obstacle, $d_i$ is the distance between the pixel point $p_i(x_i, y_i)$ and its corresponding waterline position point $p_i'(x_i, y_i')$, $\sigma$ is the standard deviation, $\mu$ is the mean, and $y_{all}$ is the total number of rows of pixels in the water area sample image.
5. The training method for a water area image segmentation model according to claim 4, wherein the standard deviation $\sigma$ is:

[formula published as an image; not recoverable from the text]
6. The training method for a water area image segmentation model according to claim 1, further comprising:
obtaining the corresponding obstacle distribution weight of any pixel point within the non-water area of the label image whose classification result belongs to obstacle;
and obtaining the obstacle distribution weighted loss at least according to the classification result of each pixel point in the label image, the obstacle distribution weight of each pixel point within the water area of the label image whose classification result belongs to obstacle, the obstacle distribution weight of each pixel point within the non-water area of the label image whose classification result belongs to obstacle, and the prediction probability of each pixel point of the water area sample image belonging to each classification result.
7. The training method for a water area image segmentation model according to any one of claims 1 to 6, wherein the image segmentation model comprises a convolutional neural network layer and a context prior layer, the method further comprising:
downsampling the water area sample image through the convolutional neural network layer to obtain a depth feature map of the water area sample image;
performing context feature extraction on the depth feature map of the water area sample image through the context prior layer to obtain a prior feature map of the water area sample image;
acquiring an ideal affinity map according to the label image;
determining an affinity loss from the prior feature map and the ideal affinity map;
and adjusting parameters of the image segmentation model according to the obstacle distribution weighted loss and the affinity loss.
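A minimal sketch of the affinity loss of claim 7 follows, assuming a context-prior design in which the ideal affinity map marks whether two positions of the downsampled label grid share a classification result, and the prior feature map is trained toward it with binary cross entropy; the shapes and loss form are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def affinity_loss(prior, label, num_classes):
    """prior: (N, HW, HW) predicted affinities in (0, 1) on the
    downsampled grid; label: (N, h, w) class ids already resized to
    that grid, with HW = h * w."""
    n = prior.shape[0]
    onehot = F.one_hot(label.view(n, -1), num_classes).float()  # (N, HW, C)
    # Ideal affinity: 1 where two positions share a class, else 0.
    ideal = torch.bmm(onehot, onehot.transpose(1, 2))           # (N, HW, HW)
    return F.binary_cross_entropy(prior, ideal)
```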
8. The training method for a water area image segmentation model according to any one of claims 1 to 6, wherein the image segmentation model comprises a convolutional neural network layer and a detail head, the method further comprising:
downsampling the water area sample image through the convolutional neural network layer to obtain a detail feature map of the water area sample image;
performing Laplacian convolutions with strides 1, 2, 4 and 8 on the label image to obtain 4 soft edge maps, upsampling part of the soft edge maps, and fusing the 4 soft edge maps to obtain an edge map;
performing detail extraction on the detail feature map of the water area sample image through the detail head to obtain a detail map of the same size as the edge map;
determining a detail loss according to the edge map and the detail map;
and adjusting parameters of the image segmentation model according to the obstacle distribution weighted loss and the detail loss.
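A possible form of the detail loss of claim 8 follows, pairing the detail head's output with the fused edge map; the binary-cross-entropy-plus-Dice combination is an assumption borrowed from common detail-guidance designs, not a quotation of the patent.

```python
import torch
import torch.nn.functional as F

def detail_loss(detail, edge, eps=1.0):
    """detail: (N, 1, H, W) sigmoid output of the detail head;
    edge: (N, 1, H, W) soft edge map built from the label image."""
    bce = F.binary_cross_entropy(detail, edge)
    inter = (detail * edge).sum(dim=(1, 2, 3))
    union = detail.pow(2).sum(dim=(1, 2, 3)) + edge.pow(2).sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)  # soft Dice per sample
    return bce + dice.mean()
```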
9. A water area image segmentation method, comprising:
acquiring a water area image to be segmented;
and segmenting the water area image to be segmented based on an image segmentation model trained by the method according to any one of claims 1-8, to obtain a segmented water area image.
10. A computer readable storage medium, characterized in that the medium has stored thereon a program executable by a processor to implement the method of any one of claims 1-9.
CN202211106216.7A 2022-09-09 2022-09-09 Water area image segmentation method, training method of segmentation model of water area image segmentation method and medium Pending CN116310304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211106216.7A CN116310304A (en) 2022-09-09 2022-09-09 Water area image segmentation method, training method of segmentation model of water area image segmentation method and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211106216.7A CN116310304A (en) 2022-09-09 2022-09-09 Water area image segmentation method, training method of segmentation model of water area image segmentation method and medium

Publications (1)

Publication Number Publication Date
CN116310304A true CN116310304A (en) 2023-06-23

Family

ID=86785667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211106216.7A Pending CN116310304A (en) 2022-09-09 2022-09-09 Water area image segmentation method, training method of segmentation model of water area image segmentation method and medium

Country Status (1)

Country Link
CN (1) CN116310304A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853938A (en) * 2024-03-08 2024-04-09 鲸服科技有限公司 Ecological monitoring system and method based on image recognition
CN117853938B (en) * 2024-03-08 2024-05-10 鲸服科技有限公司 Ecological monitoring system and method based on image recognition

Similar Documents

Publication Publication Date Title
CN111201451B (en) Method and device for detecting object in scene based on laser data and radar data of scene
CN110232350B (en) Real-time water surface multi-moving-object detection and tracking method based on online learning
CN106980871B (en) Low-fidelity classifier and high-fidelity classifier applied to road scene images
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
Kanagaraj et al. Deep learning using computer vision in self driving cars for lane and traffic sign detection
CN111914698B (en) Human body segmentation method, segmentation system, electronic equipment and storage medium in image
CN111259827B (en) Automatic detection method and device for water surface floating objects for urban river supervision
CN113095152B (en) Regression-based lane line detection method and system
CN108764470B (en) Processing method for artificial neural network operation
Liu et al. Real-time monocular obstacle detection based on horizon line and saliency estimation for unmanned surface vehicles
Chan et al. Lane mark and drivable area detection using a novel instance segmentation scheme
CN116310304A (en) Water area image segmentation method, training method of segmentation model of water area image segmentation method and medium
CN108764465B (en) Processing device for neural network operation
Petković et al. An overview on horizon detection methods in maritime video surveillance
Muril et al. A review on deep learning and nondeep learning approach for lane detection system
WO2023155903A1 (en) Systems and methods for generating road surface semantic segmentation map from sequence of point clouds
CN116630920A (en) Improved lane line type identification method of YOLOv5s network model
CN116310681A (en) Unmanned vehicle passable area prediction method and system based on multi-frame point cloud fusion
CN108647781B (en) Artificial intelligence chip processing apparatus
Rana et al. Partially Visible Lane Detection with Hierarchical Supervision Approach
CN114359493B (en) Method and system for generating three-dimensional semantic map for unmanned ship
Yang et al. A novel vision-based framework for real-time lane detection and tracking
US10373004B1 (en) Method and device for detecting lane elements to plan the drive path of autonomous vehicle by using a horizontal filter mask, wherein the lane elements are unit regions including pixels of lanes in an input image
Li et al. A fast detection method for polynomial fitting lane with self-attention module added
CN110895680A (en) Unmanned ship water surface target detection method based on regional suggestion network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination