CN112488113A - Remote sensing image rotating ship target detection method based on local straight line matching - Google Patents

Remote sensing image rotating ship target detection method based on local straight line matching

Info

Publication number
CN112488113A
CN112488113A (application CN202011230682.7A)
Authority
CN
China
Prior art keywords: target, sub-targets, width, angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011230682.7A
Other languages
Chinese (zh)
Inventor
陈华杰
吕丹妮
白浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202011230682.7A priority Critical patent/CN112488113A/en
Publication of CN112488113A publication Critical patent/CN112488113A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection


Abstract

The invention discloses a remote sensing image rotating ship target detection method based on local straight line matching. When the target is densely segmented into sub-regions, labels are added for the ship's head/tail (headtail) and hull (body). On the test image, hierarchical clustering divides the detected body sub-target center points into local sub-regions, reducing interference; Hough transform then performs straight line detection in each local region obtained by the division; finally, the detected points are fitted into a line segment and matched against the headtail sub-target data to eliminate false alarms, completing the detection of angled targets. The method applies to rotating ship targets and effectively reduces the false alarm rate of the detection results.

Description

Remote sensing image rotating ship target detection method based on local straight line matching
Technical Field
The invention belongs to the field of deep learning, and particularly relates to a remote sensing image rotating ship target detection method based on local straight line matching.
Background
At present, target detection is widely applied in military, civilian, and other fields. A deep convolutional neural network can learn a target of interest autonomously from a target data set and refine its own model. YOLO V5 is a single-stage target detection algorithm: instead of using an RPN to extract candidate target information, it merges candidate-region extraction and classification into a single network that directly produces the position and class of the target, making it an end-to-end detector. The single-stage approach therefore has a faster detection speed.
The YOLO V5 model performs target detection by directly regressing target coordinates and classifying targets within a grid. It mainly uses a horizontal rectangular bounding box to delimit the target's position and localizes the target by regressing the box parameters. This is accurate enough when the object to be localized is a small target such as a person or a small animal, but it is not well suited to rotated targets with an angle or curvature, such as ships, vehicles, and roads.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a remote sensing image rotating ship target detection method based on local straight line matching. The method splits ship targets into two categories, head/tail (headtail) and hull (body), performs dense sub-region cutting on the headtail and body of each target in the training set to obtain dense sub-targets, and then trains on the training set with the YOLO V5 algorithm. On the test set, the trained YOLO V5 model first produces the center positions, length and width information, confidence, and class of all sub-targets, and a non-maximum suppression algorithm filters the multiple detections surrounding the same grid point. Hierarchical clustering then divides the body sub-target center points into local sub-regions and removes interference. Within each local sub-region, Hough transform performs straight line detection and fits candidate line segments of the body sub-targets. Finally, the candidate line segments are matched against the corresponding headtail sub-target data to eliminate false alarms, completing target detection on the image under test.
Step (1), training set data preprocessing
Mark the targets to be detected in the training set images with an image annotation tool. First obtain the center point coordinates (x, y) of the target in the image, the target's width w and height h, and its angle information angle. Then determine the number n of sub-targets cut from the target according to the set cutting step length step and the target height h:
n=h/step+1
Calculate the length vector h_vec and width vector w_vec of the sub-target:
h_vec = [h*cos(angle)/2, h*sin(angle)/2]
w_vec = [w/2*cos(3π/2+angle), w/2*sin(3π/2+angle)]
Calculate the four vertex coordinates of the target, namely the upper-left, upper-right, lower-right, and lower-left vertices (x1, y1), (x2, y2), (x3, y3), (x4, y4), from the target's center point coordinates (x, y), width w, and height h:
[Equation image BDA0002765095510000021 in the original: the four vertex coordinates computed from (x, y), h_vec, and w_vec]
Then obtain the spacing (dcx, dcy) between the center points of adjacent sub-targets from the number n of sub-targets:
dcx=(x3-x1)/n
dcy=(y3-y1)/n
Obtain the center point coordinates (xi, yi) of the i-th sub-target, where 0 < i < n, from the spacing and the target's vertex positions:
xi=x1+dcx*(0.5+i)
yi=y1+dcy*(0.5+i)
Finally, according to the sub-target center point coordinates, h_vec, and w_vec, mark the middle 3/4 of the target's length as body; split the remaining 1/4 of the length evenly between the front and rear regions of the target and mark it as headtail. Cut the targets of both categories using the sub-target position information to obtain dense headtail and body sub-targets.
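The geometry of step (1) can be sketched as follows. This is a minimal illustration under stated assumptions: the exact vertex equations are an image in the original, so the corners are taken as the center plus/minus the half-extent vectors h_vec and w_vec, and the loop runs over all n sub-targets.

```python
import math

def subtarget_centers(x, y, w, h, angle, step):
    """Sketch of the dense sub-target cutting in step (1).

    Assumption (the original's vertex equation is an image): the target
    is a rotated rectangle centered at (x, y), and (x1, y1)/(x3, y3)
    are opposite corners obtained from the half-extent vectors.
    """
    n = int(h / step) + 1  # number of sub-targets, n = h/step + 1
    # half-extent vectors along the length and width of the box
    h_vec = (h * math.cos(angle) / 2.0, h * math.sin(angle) / 2.0)
    w_vec = (w / 2.0 * math.cos(3 * math.pi / 2 + angle),
             w / 2.0 * math.sin(3 * math.pi / 2 + angle))
    # opposite corners of the rotated box
    x1, y1 = x - h_vec[0] - w_vec[0], y - h_vec[1] - w_vec[1]
    x3, y3 = x + h_vec[0] + w_vec[0], y + h_vec[1] + w_vec[1]
    # spacing between adjacent sub-target centers
    dcx, dcy = (x3 - x1) / n, (y3 - y1) / n
    # center of the i-th sub-target: (x1 + dcx*(0.5+i), y1 + dcy*(0.5+i))
    return [(x1 + dcx * (0.5 + i), y1 + dcy * (0.5 + i)) for i in range(n)]
```

For a horizontal ship (angle = 0) of height 40 and step 10, this yields five sub-target centers spaced evenly along the long axis.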
Step (2), training a YOLO V5 network
Train the preprocessed training set with the YOLO V5 algorithm. Cluster the annotation boxes of the dense sub-targets by their height and width using a K-nearest-neighbor clustering method, divide them into 9 classes, and take the 9 resulting box sizes as the anchor box parameters of the YOLO V5 network. Extract the center point, height, and width of the dense sub-targets and feed them into the YOLO V5 network for iterative training until the loss function no longer decreases, then save the weight file obtained at that point.
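A sketch of the anchor sizing in step (2). The text says "K-nearest neighbor clustering"; YOLO anchor sizing is conventionally done with k-means, which is what is assumed here, with a deterministic initialization for reproducibility. Function and parameter names are illustrative.

```python
import numpy as np

def anchor_sizes(wh, k=9, iters=50):
    """Cluster the (width, height) pairs of the dense sub-target boxes
    into k groups and use the group means as anchor box sizes.
    Plain k-means (an assumption; the original says K-nearest neighbor)."""
    wh = np.asarray(wh, dtype=float)
    # deterministic init: spread the initial centroids across box areas
    order = np.argsort(wh.prod(axis=1))
    centroids = wh[order[np.linspace(0, len(wh) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # assign every box to its nearest centroid (Euclidean distance)
        d = np.linalg.norm(wh[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centroids[j] = wh[labels == j].mean(axis=0)
    return centroids[np.argsort(centroids.prod(axis=1))]  # small -> large
```

The returned sizes, sorted small to large, would be written into the YOLO V5 anchor configuration.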
Step (3) testing set target prediction
Configure the YOLO V5 network with the weight file trained in step (2) and test the images of the test set, obtaining the prediction information of all predicted headtail and body sub-targets, including the center point coordinates (xc*, yc*), height h*, width w*, confidence conf, and class information cls.
Step (4) filtering redundant detection information
According to the sub-target prediction information obtained in step (3), use non-maximum suppression to filter the multiple anchor boxes surrounding the same sub-target, so that each sub-target retains only the detection with the highest confidence.
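The filtering in step (4) is standard non-maximum suppression. A minimal sketch, assuming axis-aligned (x1, y1, x2, y2, conf) boxes — a simplification, since the small sub-target boxes can be treated as axis-aligned; the threshold value is illustrative.

```python
def nms(boxes, iou_thresh=0.5):
    """Greedy non-maximum suppression: among overlapping detections,
    keep only the highest-confidence one per sub-target."""
    def iou(a, b):
        # intersection-over-union of two axis-aligned boxes
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)
    kept = []
    # visit boxes from most to least confident
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept
```

Two heavily overlapping detections of one sub-target collapse to the more confident one, while a distant detection survives.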
Step (5) local region division
Divide the center points of the body sub-targets filtered in step (4) into local areas with a hierarchical clustering algorithm to remove interference.
The hierarchical clustering algorithm first treats each data point in the sample set as its own cluster; it then computes the distance between every pair of clusters and merges the two closest, i.e. most similar, clusters. The computation and merging are repeated until the current number of clusters is 10% of the original number, completing the division into local areas.
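The merging loop above can be sketched directly. This assumes single-linkage agglomerative clustering (the original does not name the linkage) and uses a plain O(n^3) search, which is adequate for the modest number of sub-target centers per image.

```python
import numpy as np

def local_regions(points, keep_frac=0.10):
    """Agglomerative clustering of body sub-target center points, step (5):
    start with every point as its own cluster and merge the closest pair
    until only keep_frac (10%) of the original clusters remain.
    Linkage choice (single) is an assumption."""
    pts = np.asarray(points, dtype=float)
    clusters = [[i] for i in range(len(pts))]          # one cluster per point
    target = max(1, int(len(clusters) * keep_frac))    # stop at 10%
    while len(clusters) > target:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the closest members
                d = min(np.linalg.norm(pts[i] - pts[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a].extend(clusters.pop(b))            # merge the closest pair
    return clusters
```

Two well-separated rows of points end up as two local regions, each covering one candidate ship.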
Step (6), straight line detection.
Apply Hough transform to each local area divided in step (5) to detect and fit the candidate line segments of the body sub-targets.
Step (7), matching the body sub-target candidate line segments with the headtail sub-target data
Set a suitable threshold according to the size of the detected target. When the total number of headtail sub-targets corresponding to a body sub-target candidate line segment is less than the threshold, treat that candidate as a false alarm and remove it; when it is greater than the threshold, retain the data. Perform polynomial fitting on all retained predicted sub-target center points to obtain a functional relation between their coordinates, and take the angle of the fitted straight line from this function. Take the average of the maximum (xcmax*, ycmax*) and minimum (xcmin*, ycmin*) of all predicted sub-target center point coordinates as the center point (xc, yc) of the predicted whole target:
xc = (xcmax* + xcmin*)/2
yc = (ycmax* + ycmin*)/2
Then, from the maximum (xcmax*, ycmax*) and minimum (xcmin*, ycmin*) of all predicted sub-target center point coordinates, compute the height hc of the predicted target with the Pythagorean theorem:
hc = sqrt((xcmax* - xcmin*)^2 + (ycmax* - ycmin*)^2)
From the average wmean* of the widths w* of all predicted sub-targets and the angle of the fitted straight line, obtain the target width wc by trigonometry:
wc = max(wmean*·cos(angle), wmean*·sin(angle))
This yields the predicted target's center point (xc, yc), height hc, width wc, and angle information; draw the prediction box on the prediction image accordingly to complete the ship prediction.
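The fusion in step (7) can be sketched as follows. Assumptions: the angle comes from a degree-1 polynomial fit through the retained centers; the center is the midpoint of the coordinate extremes, i.e. the average the text describes; and the width is taken as the plain mean of the sub-target widths (a simplification of the trigonometric formula above).

```python
import numpy as np

def fuse_subtargets(centers, widths):
    """Fuse retained sub-target detections into one rotated-ship box:
    angle from a line fit through the centers, center as the midpoint of
    the coordinate extremes, height via the Pythagorean theorem."""
    c = np.asarray(centers, dtype=float)
    # angle of the straight line fitted through the center points
    slope, _ = np.polyfit(c[:, 0], c[:, 1], 1)
    angle = float(np.arctan(slope))
    cmin, cmax = c.min(axis=0), c.max(axis=0)
    xc, yc = (cmax + cmin) / 2.0                   # whole-target center
    hc = float(np.hypot(*(cmax - cmin)))           # height via Pythagoras
    wc = float(np.mean(widths))                    # width (simplified: mean)
    return float(xc), float(yc), hc, wc, angle
```

Five collinear centers along y = 2x give back the expected center, length, and angle atan(2).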
The invention has the following beneficial effects: when the target to be detected is densely segmented into sub-regions, labels are added for the ship's head/tail (headtail) and hull (body); hierarchical clustering divides the body sub-target center points obtained on the test image into local sub-regions and removes interference; Hough transform performs straight line detection on each local region obtained by the division; finally, the detected points are fitted into a line segment and matched against the headtail sub-target data to eliminate false alarms. This completes the detection of angled targets, reduces the false alarm rate, and improves detection effectiveness.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The present invention is further described with reference to the following specific example.
This embodiment divides a set of acquired ship target images into a training set and a test set. As shown in fig. 1, the specific steps of completing the target detection task by using the remote sensing image rotating ship target detection method based on local straight line matching are as follows:
step (1), training set data preprocessing
Mark the targets to be detected in the training set images with an image annotation tool. First obtain the center point coordinates (x, y) of the target in the image, the target's width w and height h, and its angle information angle. Then determine the number n of sub-targets cut from the target according to the set cutting step length step and the target height h:
n=h/step+1
Calculate the length vector h_vec and width vector w_vec of the sub-target:
h_vec = [h*cos(angle)/2, h*sin(angle)/2]
w_vec = [w/2*cos(3π/2+angle), w/2*sin(3π/2+angle)]
Calculate the four vertex coordinates of the target, namely the upper-left, upper-right, lower-right, and lower-left vertices (x1, y1), (x2, y2), (x3, y3), (x4, y4), from the target's center point coordinates (x, y), width w, and height h:
[Equation image BDA0002765095510000051 in the original: the four vertex coordinates computed from (x, y), h_vec, and w_vec]
Then obtain the spacing (dcx, dcy) between the center points of adjacent sub-targets from the number n of sub-targets:
dcx=(x3-x1)/n
dcy=(y3-y1)/n
Obtain the center point coordinates (xi, yi) of the i-th sub-target, where 0 < i < n, from the spacing and the target's vertex positions:
xi=x1+dcx*(0.5+i)
yi=y1+dcy*(0.5+i)
Finally, according to the sub-target center point coordinates, h_vec, and w_vec, mark the middle 3/4 of the target's length as body; split the remaining 1/4 of the length evenly between the front and rear regions of the target and mark it as headtail. Cut the targets of both categories using the sub-target position information to obtain dense headtail and body sub-targets.
Step (2), training a YOLO V5 network
Train the preprocessed training set with the YOLO V5 algorithm. Cluster the annotation boxes of the dense sub-targets by their height and width using a K-nearest-neighbor clustering method, divide them into 9 classes, and take the 9 resulting box sizes as the anchor box parameters of the YOLO V5 network. Extract the center point, height, and width of the dense sub-targets and feed them into the YOLO V5 network for iterative training until the loss function no longer decreases, then save the weight file obtained at that point.
Step (3) testing set target prediction
Configure the YOLO V5 network with the weight file trained in step (2) and test the images of the test set, obtaining the prediction information of all predicted headtail and body sub-targets, including the center point coordinates (xc*, yc*), height h*, width w*, confidence conf, and class information cls.
Step (4) filtering redundant detection information
According to the sub-target prediction information obtained in step (3), use non-maximum suppression to filter the multiple anchor boxes surrounding the same sub-target, so that each sub-target retains only the detection with the highest confidence.
Step (5) local region division
Divide the center points of the body sub-targets filtered in step (4) into local areas with a hierarchical clustering algorithm to remove interference.
The hierarchical clustering algorithm first treats each data point in the sample set as its own cluster; it then computes the distance between every pair of clusters and merges the two closest, i.e. most similar, clusters. The computation and merging are repeated until the current number of clusters is 10% of the original number, completing the division into local areas.
Step (6), straight line detection.
Apply Hough transform to each local area divided in step (5) to detect and fit the candidate line segments of the body sub-targets.
Step (7), matching the body sub-target candidate line segments with the headtail sub-target data
Set a suitable threshold according to the size of the detected target. When the total number of headtail sub-targets corresponding to a body sub-target candidate line segment is less than the threshold, treat that candidate as a false alarm and remove it; when it is greater than the threshold, retain the data. Perform polynomial fitting on all retained predicted sub-target center points to obtain a functional relation between their coordinates, and take the angle of the fitted straight line from this function. Take the average of the maximum (xcmax*, ycmax*) and minimum (xcmin*, ycmin*) of all predicted sub-target center point coordinates as the center point (xc, yc) of the predicted whole target:
xc = (xcmax* + xcmin*)/2
yc = (ycmax* + ycmin*)/2
Then, from the maximum (xcmax*, ycmax*) and minimum (xcmin*, ycmin*) of all predicted sub-target center point coordinates, compute the height hc of the predicted target with the Pythagorean theorem:
hc = sqrt((xcmax* - xcmin*)^2 + (ycmax* - ycmin*)^2)
From the average wmean* of the widths w* of all predicted sub-targets and the angle of the fitted straight line, obtain the target width wc by trigonometry:
wc = max(wmean*·cos(angle), wmean*·sin(angle))
This yields the predicted target's center point (xc, yc), height hc, width wc, and angle information; draw the prediction box on the prediction image accordingly to complete the ship prediction.
The above embodiment does not limit the present invention; any implementation that meets the requirements of the present invention falls within its scope.

Claims (1)

1. A remote sensing image rotating ship target detection method based on local straight line matching is characterized by comprising the following steps: the method specifically comprises the following steps:
step (1), training set data preprocessing
Marking the target to be detected in the training set image by using an image marking tool; firstly, acquiring coordinates (x, y) of a central point of a target in an image, width w and height h of the target and angle information angle of the target; then, according to the set cutting step length and the height h of the target, determining the number n of sub-targets cut by the target:
n=h/step+1
calculating the length vector h_vec and width vector w_vec of the sub-target:
h_vec = [h*cos(angle)/2, h*sin(angle)/2]
w_vec = [w/2*cos(3π/2+angle), w/2*sin(3π/2+angle)]
calculating the four vertex coordinates of the target, namely the upper-left, upper-right, lower-right, and lower-left vertices (x1, y1), (x2, y2), (x3, y3), (x4, y4), from the target's center point coordinates (x, y), width w, and height h:
[Equation image FDA0002765095500000011 in the original: the four vertex coordinates computed from (x, y), h_vec, and w_vec]
then obtaining the spacing (dcx, dcy) between the center points of adjacent sub-targets from the number n of sub-targets:
dcx=(x3-x1)/n
dcy=(y3-y1)/n
obtaining the center point coordinates (xi, yi) of the i-th sub-target, where 0 < i < n, from the spacing and the target's vertex positions:
xi=x1+dcx*(0.5+i)
yi=y1+dcy*(0.5+i)
finally, according to the sub-target center point coordinates, h_vec, and w_vec, marking the middle 3/4 of the target's length as body, splitting the remaining 1/4 of the length evenly between the front and rear regions of the target and marking it as headtail, and cutting the targets of both categories using the sub-target position information to obtain dense headtail and body sub-targets;
step (2), training a YOLO V5 network
Training the preprocessed training set by using a YOLO V5 algorithm; clustering the labeling frames of the sub-targets by using a K-nearest neighbor clustering method according to the height and width of the labeling frames of the dense sub-targets, then dividing the labeling frames into 9 classes, obtaining the sizes of anchor frames of the 9 classes, and setting anchor frame parameters of a YOLO V5 network; extracting the central point information, height and width of the dense sub-targets, inputting the central point information, height and width into a YOLO V5 network for circular training until the loss function is not reduced any more, and acquiring a weight file at the moment;
step (3) testing set target prediction
configuring the YOLO V5 network with the weight file trained in step (2) and testing the images of the test set, obtaining the prediction information of all predicted headtail and body sub-targets, including the center point coordinates (xc*, yc*), height h*, width w*, confidence conf, and class information cls;
step (4) filtering redundant detection information
Filtering a plurality of anchor frames surrounding the same sub-targets by using non-maximum suppression according to the sub-target prediction information obtained in the step (3), wherein each sub-target only retains the detection information with the highest confidence;
step (5) local region division
Performing local area division on the center points of the body sub-targets filtered in the step (4) by using a hierarchical clustering algorithm to remove interference;
the hierarchical clustering algorithm mainly takes each data point in the training sample set as a cluster; then calculating the distance between every two clusters, and merging the two clusters with the shortest distance or the most similar distance; repeating the calculation and the combination until the obtained current cluster number is 10% of the cluster number before the combination, and completing the division of the local area;
step (6), carrying out straight line detection;
carrying out Hough transform on each local area divided in the step (5), and detecting and fitting candidate line segments of body sub-targets;
step (7), matching the body sub-target candidate line segment and the headtail sub-target data
setting a suitable threshold according to the size of the detected target; when the total number of headtail sub-targets corresponding to a body sub-target candidate line segment is less than the threshold, treating that candidate as a false alarm and removing it; when it is greater than the threshold, retaining the data; performing polynomial fitting on all retained predicted sub-target center points to obtain a functional relation between their coordinates, and taking the angle of the fitted straight line from this function; taking the average of the maximum (xcmax*, ycmax*) and minimum (xcmin*, ycmin*) of all predicted sub-target center point coordinates as the center point (xc, yc) of the predicted whole target:
xc = (xcmax* + xcmin*)/2
yc = (ycmax* + ycmin*)/2
then, from the maximum (xcmax*, ycmax*) and minimum (xcmin*, ycmin*) of all predicted sub-target center point coordinates, computing the height hc of the predicted target with the Pythagorean theorem:
hc = sqrt((xcmax* - xcmin*)^2 + (ycmax* - ycmin*)^2)
from the average wmean* of the widths w* of all predicted sub-targets and the angle of the fitted straight line, obtaining the target width wc by trigonometry:
wc = max(wmean*·cos(angle), wmean*·sin(angle))
thereby obtaining the predicted target's center point (xc, yc), height hc, width wc, and angle information, and drawing the prediction box on the prediction image accordingly to complete the ship prediction.
CN202011230682.7A 2020-11-06 2020-11-06 Remote sensing image rotating ship target detection method based on local straight line matching Pending CN112488113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011230682.7A CN112488113A (en) 2020-11-06 2020-11-06 Remote sensing image rotating ship target detection method based on local straight line matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011230682.7A CN112488113A (en) 2020-11-06 2020-11-06 Remote sensing image rotating ship target detection method based on local straight line matching

Publications (1)

Publication Number Publication Date
CN112488113A true CN112488113A (en) 2021-03-12

Family

ID=74928482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011230682.7A Pending CN112488113A (en) 2020-11-06 2020-11-06 Remote sensing image rotating ship target detection method based on local straight line matching

Country Status (1)

Country Link
CN (1) CN112488113A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966631A (en) * 2021-03-19 2021-06-15 浪潮云信息技术股份公司 License plate detection and identification system and method under unlimited security scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080304699A1 (en) * 2006-12-08 2008-12-11 Kabushiki Kaisha Toshiba Face feature point detection apparatus and method of the same
CN110674698A (en) * 2019-08-30 2020-01-10 杭州电子科技大学 Remote sensing image rotating ship target detection method based on intensive subregion cutting
CN110674674A (en) * 2019-08-01 2020-01-10 杭州电子科技大学 Rotary target detection method based on YOLO V3


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
HAIMA1998: "A Plain-Language Guide to the YOLO Series: A Complete Explanation of YOLOv5 Core Fundamentals", pages 1-10, retrieved from the Internet <URL:《https://blog.csdn.net/haima1998/article/details/108382034》> *
刘树春 et al.: "Deep Practice of OCR: Text Recognition Based on Deep Learning", 31 May 2020, China Machine Press, pages 35-39 *
李竹林 et al.: "Image Stereo Matching Technology and Its Development and Application", 31 July 2007, Shaanxi Science and Technology Press, pages 141-144 *
杨露菁 et al.: "Intelligent Image Processing and Applications", 31 March 2019, China Railway Publishing House, pages 104-112 *
胡卫东 et al.: "Radar Target Recognition Theory", 31 December 2017, National Defense Industry Press, pages 53-59 *
计明军 et al.: "Forecasting and Decision-Making Methods", 31 August 2018, Dalian Maritime University Press, pages 178-192 *


Similar Documents

Publication Publication Date Title
Chen et al. High-resolution vehicle trajectory extraction and denoising from aerial videos
CN110781827B (en) Road edge detection system and method based on laser radar and fan-shaped space division
KR102109941B1 (en) Method and Apparatus for Vehicle Detection Using Lidar Sensor and Camera
CN107330925B (en) Multi-obstacle detection and tracking method based on laser radar depth image
Kumar et al. Review of lane detection and tracking algorithms in advanced driver assistance system
CN109001757B (en) Parking space intelligent detection method based on 2D laser radar
Gomez et al. Traffic lights detection and state estimation using hidden markov models
CN104463877B (en) A kind of water front method for registering based on radar image Yu electronic chart information
Wang et al. A vision-based road edge detection algorithm
CN109657686A (en) Lane line generation method, device, equipment and storage medium
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
WO2015096507A1 (en) Method for recognizing and locating building using constraint of mountain contour region
CN112381870B (en) Binocular vision-based ship identification and navigational speed measurement system and method
JP6826023B2 (en) Target identification device, program and method for identifying a target from a point cloud
CN109063669B (en) Bridge area ship navigation situation analysis method and device based on image recognition
Zhang et al. Fast moving pedestrian detection based on motion segmentation and new motion features
Seo et al. Utilizing instantaneous driving direction for enhancing lane-marking detection
Liu et al. Multi-type road marking recognition using adaboost detection and extreme learning machine classification
CN114463362A (en) Three-dimensional collision avoidance sonar obstacle detection method and system based on deep learning
CN110674698B (en) Remote sensing image rotating ship target detection method based on intensive subregion cutting
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
CN112488113A (en) Remote sensing image rotating ship target detection method based on local straight line matching
Vajak et al. A rethinking of real-time computer vision-based lane detection
Jeong et al. Efficient lidar-based in-water obstacle detection and segmentation by autonomous surface vehicles in aquatic environments
CN107977608B (en) Method for extracting road area of highway video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination