CN114863258A - Method for detecting small target based on visual angle conversion in sea-sky-line scene

Info

Publication number
CN114863258A
Authority
CN
China
Prior art keywords
sea
coordinate information
image
sky
target
Prior art date
Legal status
Granted
Application number
CN202210786036.1A
Other languages
Chinese (zh)
Other versions
CN114863258B (en)
Inventor
李非桃
冉欢欢
李和伦
陈益
王丹
褚俊波
陈春
李毅捷
赵瑞欣
莫桥波
王逸凡
李东晨
Current Assignee
Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Original Assignee
Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Desheng Xinda Brain Intelligence Technology Co ltd filed Critical Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Priority to CN202210786036.1A priority Critical patent/CN114863258B/en
Publication of CN114863258A publication Critical patent/CN114863258A/en
Application granted granted Critical
Publication of CN114863258B publication Critical patent/CN114863258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V20/00 Scenes; Scene-specific elements
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/778 Active pattern-learning, e.g. online learning of image or video features
    • G06V2201/07 Target detection
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a method for detecting small targets based on visual angle conversion in a sea-sky-line scene, which comprises the following steps: acquiring an image to be detected; identifying the sea-sky line; framing an effective rectangular area of the image to be detected; dividing the effective rectangular area into N² image blocks, wherein an overlapping area exists between every two adjacent image blocks; arranging the N² image blocks in N rows and N columns to obtain a recombined image; detecting sea surface targets in the recombined image with a deep learning network model, obtaining first coordinate information of each sea surface target, and combining the first coordinate information of each sea surface target into a first coordinate information set; converting the first coordinate information of each sea surface target into second coordinate information in the image to be detected, and combining the second coordinate information of each sea surface target into a second coordinate information set. Through the selection of the effective area and the recombination of the image blocks, the invention improves the detection precision of small target ships and small target buoys near the sea-sky line.

Description

Method for detecting small target based on visual angle conversion in sea-sky-line scene
Technical Field
The invention belongs to the technical field of target identification, and particularly relates to a method for detecting a small target based on visual angle conversion in a sea-sky-line scene.
Background
At present, target detection technology plays an increasingly important role in many fields and is becoming mature. Most of the mature target detection techniques are based on deep learning. In general, a deep learning method scales the input image data to a fixed size in the image preprocessing stage and then detects targets in the scaled image data by loading a model, for example the widely applied YOLO and SSD algorithm models.
In the field of sea surface target detection, small target ships, small target buoys and the like near the sea-sky line appear extremely small in the image because of their long distance. At present, when small target ships and small target buoys near the sea-sky line are detected, a traditional deep learning network model suffers from the following problems:
1. The long-distance sea-sky-line image input into the deep learning network model has a high resolution, such as 5472 × 3648, yet the pixels of a small target near the sea-sky line account for only a tiny proportion of the whole image; most pixels belong to regions the user does not care about, such as ocean waves and sky, so when a traditional algorithm runs, most of the computation time is spent on irrelevant regions.
2. In the image preprocessing stage of the deep learning network model, the high-resolution image data is compressed to a resolution of 640 × 640 pixels, or even to 416 × 416 pixels for higher computational efficiency, so small targets near the sea-sky line become even smaller and more interference pixels are introduced, which causes detection to fail. For example, compressing a 5472 × 3648 frame to 640 × 640 shrinks it by a factor of roughly 8.5 in width, so a ship about 20 pixels wide in the original image is left with only 2 to 3 pixels.
3. In a complex sea-sky-line scene, airplanes, birds and the like in the sky near the sea-sky line strongly interfere with the identification of small target ships, small target buoys and the like near the sea-sky line, and this interference cannot be eliminated.
Disclosure of Invention
The invention aims to overcome one or more defects in the prior art and provides a method for detecting a small target based on view angle conversion in a sea-sky-line scene.
The purpose of the invention is realized by the following technical scheme:
the method for detecting the small target based on the view angle conversion in the sea-sky-line scene specifically comprises the following steps:
acquiring an image to be detected;
identifying a sea-sky line in an image to be detected;
framing an effective rectangular area of the image to be detected according to the sea-sky line;
transversely dividing the effective rectangular area into N² image blocks, wherein an overlapping area exists between two adjacent image blocks, the transverse widths of all the image blocks are the same, and N is a positive integer greater than one;
arranging the N² image blocks in N rows and N columns to obtain a recombined image of the image to be detected;
detecting sea surface targets in the recombined image by using a pre-constructed deep learning network model, obtaining first coordinate information of each sea surface target, and combining the first coordinate information of each sea surface target into a first coordinate information set, wherein the first coordinate information is the coordinate information of the sea surface target in the recombined image;
and respectively converting the first coordinate information of each sea surface target into second coordinate information of the sea surface target, and combining the second coordinate information of each sea surface target into a second coordinate information set, wherein the second coordinate information is the coordinate information of the sea surface target in the image to be detected.
In a further improvement, after the step of respectively converting the first coordinate information of each sea surface target into the second coordinate information of the sea surface target and combining the second coordinate information of each sea surface target into the second coordinate information set, the method further includes the following steps:
and removing repeated sea surface target coordinate information in the second coordinate information set.
Further improved, the sea-sky line in the image to be detected is identified, specifically including:
calculating the vertical gradient of the image to be detected, and extracting to obtain edge features;
obtaining an edge straight line segment according to the edge characteristics;
screening the edge straight line segments according to a preset first threshold value to obtain target straight line segments;
aggregating the target straight line segments by adopting a preset clustering algorithm to obtain a sea-sky line segment set;
and fitting the sea-sky-line segment set by adopting a least square method to obtain the sea-sky-line.
In a further improvement, the framing of an effective rectangular area of the image to be detected according to the sea-sky line specifically includes:
identifying the intersection point of the sea-sky line and the left boundary line of the image to be detected, and recording its coordinates as (x1, y1);
identifying the intersection point of the sea-sky line and the right boundary line of the image to be detected, and recording its coordinates as (x2, y2);
calculating the coordinates (xc, yc) of the center point of the sea-sky line from the two intersection point coordinates, wherein xc = (x1 + x2)/2 and yc = (y1 + y2)/2;
judging whether the preset judgment equation on the inclination of the sea-sky line holds; if it holds, determining the first distance parameter d1 and the second distance parameter d2 in a first value mode, and if not, determining the first distance parameter d1 and the second distance parameter d2 in a second value mode, wherein W is the width of the image to be detected;
calculating the coordinates of the upper boundary line from the first distance parameter d1 and the coordinates of the lower boundary line from the second distance parameter d2;
and forming the effective rectangular area of the image to be detected from the upper boundary line, the lower boundary line, the left boundary line of the image to be detected and the right boundary line of the image to be detected.
In a further improvement, the first coordinate information of a sea surface target includes the coordinate information of the center point of the sea surface target, the height information of the target frame that frames the sea surface target, and the width information of the target frame.
In a further improvement, the transverse widths of the overlapping regions are all equal to a first preset interval, and the first preset interval is a positive integer.
In a further improvement, the calculating of the vertical gradient of the image to be detected specifically includes:
calculating the vertical gradient of the image to be detected with a kernel operator whose weight value is k.
In a further improvement, the method further includes, after removing the repeated sea surface target coordinate information in the second coordinate information set, the following steps:
and removing the coordinate information of the interference target in the second coordinate information set to obtain a final coordinate information set of the sea surface target in the image to be detected, wherein the interference target comprises an airplane and a bird in the sky.
In a further improvement, the removing the coordinate information of the interference target in the second coordinate information set specifically includes:
calculating the vertical coordinate of the lower right corner of each sea surface target based on the second coordinate information set, and recording this vertical coordinate as yi;
judging, for each sea surface target, whether the preset screening condition on yi holds; if it holds, keeping the coordinate information of that sea surface target in the second coordinate information set, and if not, removing the coordinate information of that sea surface target from the second coordinate information set;
wherein the screening condition is computed over the m remaining sea surface targets, and m is the number of sea surface targets remaining after the repeated sea surface target coordinate information in the second coordinate information set has been removed.
The invention has the following beneficial effects:
1) The sea-sky line in the image to be detected is fitted, an effective rectangular area of the image to be detected is selected based on the sea-sky line, and the effective rectangular area is transversely divided into several image blocks; to avoid cutting a sea surface target apart, two adjacent image blocks partially overlap during the transverse division. All the image blocks are then recombined, the recombined image is input into a pre-constructed deep learning network model for target detection, and the detected coordinates of each sea surface target are converted into its coordinate position in the image to be detected.
By selecting the effective area, the method eliminates the interference of most of the invalid information; at the same time, compared with the original image to be detected, the recombined image raises the resolution at which sea surface targets are presented to the deep learning network model, which improves the accuracy of sea surface target detection and accomplishes the detection of small target ships, small target buoys and the like near the sea-sky line in a complex sea-sky-line scene.
2) The vertical gradient of the image to be detected is calculated with a kernel operator and, combined with a least-squares fit, the sea-sky line is fitted accurately.
3) The precision of sea surface target detection is further improved by removing the repeated-target and interference-target coordinate information from the second coordinate information set.
Drawings
FIG. 1 is a flow chart of the method for detecting small targets based on visual angle conversion in a sea-sky-line scene;
FIG. 2 is a schematic diagram of a division of an effective rectangular area;
fig. 3 is a schematic diagram of a recombined image obtained by recombining image blocks.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, the present embodiment provides a method for detecting small targets based on visual angle conversion in a sea-sky-line scene, which is used for detecting small sea surface targets such as small ships and small buoys near the sea-sky line, and specifically includes the following steps:
and S1, acquiring an image to be detected.
And S2, identifying the sea-sky-line in the image to be detected. In a common embodiment, before the sea-sky-line in the image to be detected is identified, filtering and denoising processing is further performed on the image to be detected.
In the present embodiment, S2 includes the following sub-steps:
and a substep S21 of calculating the vertical direction gradient of the image to be detected and extracting to obtain edge characteristics.
And a substep S22 of obtaining an edge straight line segment according to the edge characteristics.
And a substep S23 of screening the edge straight line segment according to a preset first threshold value to obtain a target straight line segment.
And a substep S24, aggregating the target straight line segments with a preset k-means clustering algorithm to obtain a sea-sky line segment set.
And a substep S25 of fitting the sea-sky-line segment set by adopting a least square method to obtain the sea-sky-line.
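The following Python sketch (using OpenCV and NumPy) shows one possible realization of substeps S21 to S25. It is only an illustration: the vertical-gradient kernel, the edge and Hough parameters, the length threshold and the number of clusters are assumptions introduced here, since the patent shows its kernel only as an image and does not disclose these values.

    import cv2
    import numpy as np

    def fit_sea_sky_line(gray, k=1.0, min_len=80, n_clusters=2):
        # S21: vertical-direction gradient using a weighted Sobel-like kernel
        # (assumed form; the patent's actual kernel is shown only as an image)
        kernel = k * np.array([[-1, -2, -1],
                               [ 0,  0,  0],
                               [ 1,  2,  1]], dtype=np.float32)
        grad = cv2.filter2D(gray.astype(np.float32), -1, kernel)
        edges = cv2.Canny(cv2.convertScaleAbs(grad), 50, 150)

        # S22: extract straight edge line segments
        segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                                   minLineLength=min_len, maxLineGap=10)
        if segments is None:
            return None
        segments = segments.reshape(-1, 4)

        # S23: screen the segments with a length threshold to obtain target segments
        lengths = np.hypot(segments[:, 2] - segments[:, 0], segments[:, 3] - segments[:, 1])
        keep = segments[lengths >= min_len]

        # S24: cluster segments by the vertical position of their midpoints (k-means)
        # and keep the dominant cluster as the sea-sky line segment set
        mids_y = ((keep[:, 1] + keep[:, 3]) / 2.0).astype(np.float32).reshape(-1, 1)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, _ = cv2.kmeans(mids_y, min(n_clusters, len(mids_y)), None,
                                  criteria, 3, cv2.KMEANS_PP_CENTERS)
        labels = labels.ravel()
        main = keep[labels == np.bincount(labels).argmax()]

        # S25: least-squares fit y = a*x + b over the endpoints of the kept segments
        xs = np.concatenate([main[:, 0], main[:, 2]]).astype(np.float64)
        ys = np.concatenate([main[:, 1], main[:, 3]]).astype(np.float64)
        a, b = np.polyfit(xs, ys, 1)
        return a, b

The returned slope a and intercept b describe the fitted sea-sky line y = a·x + b and are reused in the later steps.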
And S3, framing an effective rectangular area of the image to be detected according to the sea-sky line.
In the present embodiment, S3 includes the following sub-steps:
substep S31, identifying sea-sky-lineThe intersection point of the image and the left boundary line of the image to be detected, and the coordinate of the intersection point is recorded as
Figure 388616DEST_PATH_IMAGE018
Substep S32, identifying the intersection point of the sea-sky-line and the right boundary line of the image to be detected, and recording the coordinates of the intersection point as
Figure 683331DEST_PATH_IMAGE019
Substep S33, calculating coordinates of center point of sea-sky-line according to coordinates of intersection point of sea-sky-line and left boundary line of image to be detected and coordinates of intersection point of sea-sky-line and right boundary line of image to be detected
Figure 517295DEST_PATH_IMAGE020
Wherein
Figure 264671DEST_PATH_IMAGE021
Figure 209493DEST_PATH_IMAGE022
Substep S34, determining equation
Figure 217769DEST_PATH_IMAGE023
Whether the result is true or not; if yes, determining a first distance parameter
Figure 222634DEST_PATH_IMAGE024
And a second distance parameter
Figure 457307DEST_PATH_IMAGE025
At this time, the first value mode of the first distance parameter d1 and the second distance parameter d2 is adopted; if not, determining a first distance parameter
Figure 674661DEST_PATH_IMAGE026
And a second distance parameter
Figure 209548DEST_PATH_IMAGE027
In this case, the first distance parameter d1 and the second distance parameter d2 have the second value. Wherein W is the width of the image to be detected. The step determines the value mode of the first distance parameter d1 and the second distance parameter d2 according to the calculation of the sea-sky-line inclination angle. When the sea-sky-line level or the inclination angle of the sea-sky-line is less than 10 degrees, the second value mode is adopted by the first distance parameter d1 and the second distance parameter d2, and when the inclination angle of the sea-sky-line is greater than or equal to 10 degrees, the first value mode is adopted by the first distance parameter d1 and the second distance parameter d 2. In the first value taking mode, the calculation coefficient 1.2 and the calculation coefficient 1.5 are preferable values based on experience, and can be adjusted according to specific situations, and meanwhile, the sea surface target close to the lens is relatively large, so that the distance parameter at the ocean side is set to be larger, the sea surface target far away from the lens side is relatively smaller, and the distance parameter at the sky side is set to be smaller. The calculation factor in the second distance parameter d2 is 1.5 and the calculation factor in the first distance parameter d1 is 1.2.
Substep S35, calculating coordinates of the upper boundary line based on the first distance parameter d1
Figure 250228DEST_PATH_IMAGE028
And calculating the coordinates of the lower boundary line based on the second distance parameter d2
Figure 237776DEST_PATH_IMAGE012
Substep S36, forming an effective rectangular region of the image to be detected by the upper boundary line, the lower boundary line, the left boundary line of the image to be detected and the right boundary line of the image to be detected.
In addition, the coordinates of the left boundary line and of the right boundary line of the image to be detected are fixed by the image itself (the left boundary line at x = 0 and the right boundary line at x = W), and in this embodiment the effective rectangular area directly uses them as its left and right boundaries.
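To make the geometry of step S3 concrete, the following Python sketch builds the effective rectangle from a fitted sea-sky line y = a·x + b. The exact formulas for d1, d2 and the boundary lines appear in the patent only as images, so the sketch simply assumes that d1 and d2 have already been determined in substep S34 and that the upper and lower boundary lines lie d1 above and d2 below the center point; the clamping to the image borders is likewise an added safeguard rather than something stated in the source.

    def effective_rectangle(a, b, width, height, d1, d2):
        # Intersections of the fitted sea-sky line y = a*x + b with the image's
        # left boundary line (x = 0) and right boundary line (x = width)
        y_left = b
        y_right = a * width + b
        # Center point of the sea-sky line (midpoint of the two intersections)
        yc = (y_left + y_right) / 2.0
        # Assumed placement: d1 above the center point on the sky side, d2 below
        # it on the ocean side, clamped to the image (added safeguard)
        top = int(max(0, round(yc - d1)))
        bottom = int(min(height, round(yc + d2)))
        # The effective rectangle spans the full image width, rows [top, bottom)
        return 0, top, width, bottom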
S4, transversely dividing the effective rectangular area into N² image blocks; an overlapping area whose lateral width equals the first preset interval d3 exists between any two adjacent image blocks, and the lateral widths of all the image blocks are the same, where N is a positive integer greater than one and the first preset interval d3 is a positive integer.
The lateral width w1 of an image block is calculated as follows:
first, a first intermediate parameter is calculated, with N being a positive integer;
then, the lateral width w1 of an image block is calculated from this intermediate parameter; w1 is a positive integer, specifically the positive integer closest to and greater than the actually calculated value of w1 (i.e. the calculated value rounded up).
The value of N used in the transverse division of the effective rectangular area is determined by four preset conditions: if the first condition holds, N takes the value 2; if the second condition holds, N takes the value 3; if the third condition holds, N takes the value 4; and if the fourth condition holds, N takes the value 5.
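The following Python sketch illustrates the transverse division of step S4, with the effective rectangular area passed in as a NumPy array. The source gives the formula for w1 only as an image, so the formula used below, which chooses w1 so that N² blocks with an overlap of d3 pixels just cover the strip and rounds the result up to an integer, is an assumption consistent with the rounding rule described above.

    import math

    def split_effective_area(strip, n, d3):
        # strip: the effective rectangular area as an image array (rows x cols);
        # n*n blocks of equal lateral width w1 are cut with an overlap of d3 pixels
        num_blocks = n * n
        width = strip.shape[1]
        # Assumed formula: smallest integer w1 such that num_blocks blocks with an
        # overlap of d3 pixels between neighbours still cover the whole strip
        w1 = math.ceil((width + (num_blocks - 1) * d3) / num_blocks)
        step = w1 - d3                      # horizontal stride between block origins
        blocks, origins = [], []
        for i in range(num_blocks):
            x0 = min(i * step, width - w1)  # keep the last block inside the strip
            blocks.append(strip[:, x0:x0 + w1])
            origins.append(x0)              # kept for mapping coordinates back in S7
        return blocks, origins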
And S5, arranging the N² image blocks in N rows and N columns to obtain the recombined image of the image to be detected.
S6, inputting the recombined image into a pre-constructed deep learning network model for target detection, identifying the sea surface targets in the recombined image, obtaining the first coordinate information of each sea surface target, and combining the obtained first coordinate information of the sea surface targets into a first coordinate information set. The first coordinate information is the coordinate information of the sea surface target in the recombined image. In a common embodiment, the deep learning network model is a model trained to detect sea surface targets. The first coordinate information of a sea surface target comprises the coordinate information of the center point of the sea surface target, the height information of the target frame that frames the sea surface target, and the width information of the target frame.
And S7, converting the first coordinate information of each sea surface target into second coordinate information of the sea surface target respectively, and combining the second coordinate information of each sea surface target into a second coordinate information set. And the second coordinate information is the coordinate information of the sea surface target in the image to be detected. And the second coordinate information of the sea surface target comprises the coordinate information of the central point of the sea surface target in the image to be detected, the height information of a target frame for framing the sea surface target and the width information of the target frame.
Preferably, the following steps are also included after S7:
and S8, removing the repeated sea surface target coordinate information in the second coordinate information set. Because two adjacent image blocks have an overlapping area with the transverse width of the first preset interval d3, after the target detection is performed through the deep learning network model and then the coordinate conversion is performed, a plurality of sea surface targets with the same coordinate information may appear, and the precision of the sea surface target detection is improved by screening out the repeated sea surface target coordinate information.
Preferably, the vertical gradient of the image to be detected is calculated as follows: the vertical gradient of the image to be detected is calculated with a kernel operator whose weight value is k. The larger the weight value k, the larger the resulting vertical gradient value; in this embodiment k takes the value 1.
Preferably, the following steps are also included after S8:
and S9, removing the coordinate information of the interference target in the second coordinate information set to obtain a final coordinate information set of the sea surface target in the image to be detected. The interference target includes an airplane, a bird and the like in the sky near the sea-sky.
Removing the coordinate information of the interference targets from the second coordinate information set specifically comprises the following substeps:
S91, calculating the vertical coordinate of the lower right corner of each sea surface target based on the second coordinate information set, and recording this vertical coordinate as yi.
S92, judging, for each sea surface target, whether the preset screening condition on yi holds; if it holds, keeping the coordinate information of that sea surface target in the second coordinate information set, and if not, removing the coordinate information of that sea surface target from the second coordinate information set.
Here the screening condition is computed over the m remaining sea surface targets, where m is the number of sea surface target coordinates remaining after the repeated sea surface target coordinate information in the second coordinate information set has been removed, i.e. the number of remaining sea surface targets. Screening out the coordinate information of the interference targets from the second coordinate information set further improves the accuracy of sea surface target detection.
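A sketch of step S9 is given below. The actual screening condition appears in the source only as an image; the assumption used here is that a detection is kept when the lower right corner of its target frame lies on or below the fitted sea-sky line, since interference targets such as airplanes and birds sit above it.

    def remove_sky_interference(boxes, a, b):
        # boxes: list of (cx, cy, w, h); the sea-sky line is y = a*x + b in the
        # coordinates of the image to be detected
        kept = []
        for cx, cy, w, h in boxes:
            x_br = cx + w / 2.0        # horizontal coordinate of the lower right corner
            y_br = cy + h / 2.0        # vertical coordinate of the lower right corner
            # assumed condition: keep the target if its lower right corner lies on or
            # below the sea-sky line (ocean side); otherwise treat it as sky
            # interference (airplane, bird) and remove it
            if y_br >= a * x_br + b:
                kept.append((cx, cy, w, h))
        return kept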
Based on the types of small target ships and small target buoys to be detected, the first preset interval d3 is preferably greater than half of the maximum width of the sea surface targets of interest (small target ships, small target buoys and the like), so that a target cut by one block boundary still appears essentially complete in the neighbouring block; for example, if the widest target of interest spans about 60 pixels, d3 should be larger than 30 pixels.
With reference to fig. 2 and 3, the following describes, for the case where the effective rectangular area is transversely divided into four image blocks arranged in two rows and two columns, the generation process of the recombined image and the specific calculation process of converting each piece of coordinate information in the first coordinate information set to the corresponding coordinate position in the image to be detected.
The generation process of the recombined image comprises the following steps:
four image blocks obtained by transversely dividing the effective rectangular area are sequentially defined as a first image block, a second image block, a third image block and a fourth image block from left to right;
placing a first image block in a first row and a first column of a rectangular arrangement, placing a second image block in a first row and a second column of the rectangular arrangement, placing a third image block in a second row and the first column of the rectangular arrangement, and placing a fourth image block in a second row and the second column of the rectangular arrangement to obtain a recombined image;
Specific calculation process for converting each piece of coordinate information in the first coordinate information set to the corresponding coordinate position in the image to be detected:
the coordinate information of a sea surface target detected in the first image block of the recombined image is converted into its coordinate position in the image to be detected according to the conversion formula for the first image block;
the coordinate information of a sea surface target detected in the second image block of the recombined image is converted into its coordinate position in the image to be detected according to the conversion formula for the second image block;
the coordinate information of a sea surface target detected in the third image block of the recombined image is converted into its coordinate position in the image to be detected according to the conversion formula for the third image block;
the coordinate information of a sea surface target detected in the fourth image block of the recombined image is converted into its coordinate position in the image to be detected according to the conversion formula for the fourth image block;
where (x, y) denotes the coordinates of a sea surface target in the recombined image and (x', y') denotes the coordinates of that sea surface target in the image to be detected. Each conversion formula removes the row and column offsets introduced by the two-row, two-column arrangement and restores the horizontal offset of the corresponding image block within the effective rectangular area and the vertical offset of the effective rectangular area within the image to be detected.
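The following Python sketch shows one way to realize the two-row, two-column recombination and the back-conversion of detected coordinates. Since the patent's conversion formulas are given only as images, the mapping below is derived from the block layout described above (it reuses the block origins produced by the earlier splitting sketch) and should be read as an illustration rather than the patent's exact formulas.

    import numpy as np

    def recombine_2x2(blocks):
        # fig. 3 layout: block 1 -> row 1 col 1, block 2 -> row 1 col 2,
        # block 3 -> row 2 col 1, block 4 -> row 2 col 2
        top = np.hstack([blocks[0], blocks[1]])
        bottom = np.hstack([blocks[2], blocks[3]])
        return np.vstack([top, bottom])

    def to_original_coords(x, y, w1, block_h, origins, y_top):
        # (x, y): target center in the recombined image; w1, block_h: lateral width
        # and height of one image block; origins: horizontal origin of each block in
        # the effective area (from the splitting step); y_top: vertical offset of the
        # effective rectangular area in the image to be detected
        col = 0 if x < w1 else 1
        row = 0 if y < block_h else 1
        block_index = row * 2 + col            # 0..3, matching image blocks 1..4
        x_in_block = x - col * w1
        y_in_block = y - row * block_h
        x_orig = origins[block_index] + x_in_block
        y_orig = y_top + y_in_block
        return x_orig, y_orig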
the method for detecting the small target ship in the sea-sky-line scene adopts three means of selecting an effective rectangular area, transversely dividing an image to be detected and recombining image blocks, belongs to a conversion realization of a machine visual angle in deep learning, saves the operation amount of a deep learning network model, increases the resolution ratio of a sea surface small target in image data input into the deep learning network model, greatly improves the detection precision of the small target ship and a small target buoy near the sea-sky-line, overcomes the defects of the existing target detection scheme mentioned in the background technology, and has a larger application prospect.
The foregoing describes the preferred embodiments of the invention. It should be understood that the invention is not limited to the precise forms disclosed herein, and that various combinations, modifications and environments falling within the scope of the inventive concept, whether described above or apparent to those skilled in the relevant art, remain possible. Modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A method for detecting small targets based on visual angle conversion in a sea-sky-line scene, characterized by comprising the following steps:
acquiring an image to be detected;
identifying a sea-sky line in an image to be detected;
framing an effective rectangular area of the image to be detected according to the sea-sky line;
transversely dividing the effective rectangular area into N² image blocks, wherein an overlapping area exists between two adjacent image blocks, the transverse widths of all the image blocks are the same, and N is a positive integer greater than one;
arranging the N² image blocks in N rows and N columns to obtain a recombined image of the image to be detected;
detecting sea surface targets in the recombined image by using a pre-constructed deep learning network model, obtaining first coordinate information of each sea surface target, and combining the first coordinate information of each sea surface target into a first coordinate information set, wherein the first coordinate information is the coordinate information of the sea surface target in the recombined image;
and respectively converting the first coordinate information of each sea surface target into second coordinate information of the sea surface target, and combining the second coordinate information of each sea surface target into a second coordinate information set, wherein the second coordinate information is the coordinate information of the sea surface target in the image to be detected.
2. The method for detecting small targets based on visual angle conversion in a sea-sky-line scene according to claim 1, wherein after the step of converting the first coordinate information of each sea surface target into the second coordinate information of the sea surface target and combining the second coordinate information of each sea surface target into the second coordinate information set, the method further comprises the following steps:
and removing repeated sea surface target coordinate information in the second coordinate information set.
3. The method for detecting the small target based on the view angle conversion in the sea-sky-line scene as claimed in claim 1, wherein the identifying the sea-sky-line in the image to be detected specifically comprises:
calculating the vertical gradient of the image to be detected, and extracting to obtain edge features;
obtaining an edge straight line segment according to the edge characteristics;
screening the edge straight line segments according to a preset first threshold value to obtain target straight line segments;
aggregating the target straight line segments by adopting a preset clustering algorithm to obtain a sea-sky line segment set;
and fitting the sea-sky-line segment set by adopting a least square method to obtain the sea-sky-line.
4. The method for detecting small targets based on visual angle conversion in a sea-sky-line scene according to claim 2, wherein framing the effective rectangular area of the image to be detected according to the sea-sky line specifically comprises:
identifying the intersection point of the sea-sky line and the left boundary line of the image to be detected, and recording its coordinates as (x1, y1);
identifying the intersection point of the sea-sky line and the right boundary line of the image to be detected, and recording its coordinates as (x2, y2);
calculating the coordinates (xc, yc) of the center point of the sea-sky line from the two intersection point coordinates, wherein xc = (x1 + x2)/2 and yc = (y1 + y2)/2;
judging whether the preset judgment equation on the inclination of the sea-sky line holds; if it holds, determining the first distance parameter d1 and the second distance parameter d2 in the first value mode, and if not, determining the first distance parameter d1 and the second distance parameter d2 in the second value mode, wherein W is the width of the image to be detected;
calculating the coordinates of the upper boundary line from the first distance parameter d1 and the coordinates of the lower boundary line from the second distance parameter d2;
and forming the effective rectangular area of the image to be detected from the upper boundary line, the lower boundary line, the left boundary line of the image to be detected and the right boundary line of the image to be detected.
5. The method for detecting the small target based on the view angle conversion in the sea-sky-line scene according to claim 1, wherein the first coordinate information of the sea-surface target includes coordinate information of a center point of the sea-surface target, height information of a target frame for framing the sea-surface target, and width information of the target frame.
6. The method for detecting small targets based on visual angle conversion in a sea-sky-line scene according to claim 1, wherein the transverse widths of the overlapping regions are all equal to a first preset interval, and the first preset interval is a positive integer.
7. The method for detecting small targets based on visual angle conversion in a sea-sky-line scene according to claim 3, wherein calculating the vertical gradient of the image to be detected specifically comprises:
calculating the vertical gradient of the image to be detected with a kernel operator whose weight value is k.
8. The method for detecting small targets based on visual angle conversion in a sea-sky-line scene according to claim 4, wherein after the step of removing the repeated sea surface target coordinate information in the second coordinate information set, the method further comprises the following steps:
and removing the coordinate information of the interference target in the second coordinate information set to obtain a final coordinate information set of the sea surface target in the image to be detected, wherein the interference target comprises an airplane and a bird in the sky.
9. The method for detecting small targets based on visual angle conversion in a sea-sky-line scene according to claim 8, wherein removing the coordinate information of the interference targets in the second coordinate information set specifically comprises:
calculating the vertical coordinate of the lower right corner of each sea surface target based on the second coordinate information set, and recording this vertical coordinate as yi;
judging, for each sea surface target, whether the preset screening condition on yi holds; if it holds, keeping the coordinate information of that sea surface target in the second coordinate information set, and if not, removing the coordinate information of that sea surface target from the second coordinate information set;
wherein the screening condition is computed over the m remaining sea surface targets, and m is the number of sea surface targets remaining after the repeated sea surface target coordinate information in the second coordinate information set has been removed.
CN202210786036.1A 2022-07-06 2022-07-06 Method for detecting small target based on visual angle conversion in sea-sky-line scene Active CN114863258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210786036.1A CN114863258B (en) 2022-07-06 2022-07-06 Method for detecting small target based on visual angle conversion in sea-sky-line scene

Publications (2)

Publication Number Publication Date
CN114863258A 2022-08-05
CN114863258B 2022-09-06

Family

ID=82625993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210786036.1A Active CN114863258B (en) 2022-07-06 2022-07-06 Method for detecting small target based on visual angle conversion in sea-sky-line scene

Country Status (1)

Country Link
CN (1) CN114863258B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679694A (en) * 2013-05-29 2014-03-26 哈尔滨工程大学 Ship small target detection method based on panoramic vision
CN104599273A (en) * 2015-01-22 2015-05-06 南京理工大学 Wavelet multi-scale crossover operation based sea-sky background infrared small target detection method
CN108229342A (en) * 2017-12-18 2018-06-29 西南技术物理研究所 A kind of surface vessel target automatic testing method
CN108846844A (en) * 2018-04-13 2018-11-20 上海大学 A kind of sea-surface target detection method based on sea horizon
CN110188696A (en) * 2019-05-31 2019-08-30 华南理工大学 A kind of water surface is unmanned to equip multi-source cognitive method and system
CN111091024A (en) * 2018-10-23 2020-05-01 广州弘度信息科技有限公司 Small target filtering method and system based on video recognition result
CN111767856A (en) * 2020-06-29 2020-10-13 哈工程先进技术研究院(招远)有限公司 Infrared small target detection algorithm based on gray value statistical distribution model
CN112258518A (en) * 2020-10-09 2021-01-22 国家海洋局南海调查技术中心(国家海洋局南海浮标中心) Sea-sky-line extraction method and device
CN112669332A (en) * 2020-12-28 2021-04-16 大连海事大学 Method for judging sea and sky conditions and detecting infrared target based on bidirectional local maximum and peak local singularity
CN113223000A (en) * 2021-04-14 2021-08-06 江苏省基础地理信息中心 Comprehensive method for improving small target segmentation precision
US20210256680A1 (en) * 2020-02-14 2021-08-19 Huawei Technologies Co., Ltd. Target Detection Method, Training Method, Electronic Device, and Computer-Readable Medium
CN114494179A (en) * 2022-01-24 2022-05-13 深圳闪回科技有限公司 Mobile phone back damage point detection method and system based on image recognition

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUNZHONG HOU et al.: "Multiview Detection with Feature Perspective Transformation", ECCV *
孙维亚 et al.: "Efficient video flame detection algorithm fusing motion features", Journal of Data Acquisition and Processing *
徐海祥 et al.: "Strong semantic feature extraction structure for water surface image target detection", Journal of Huazhong University of Science and Technology (Natural Science Edition) *
王啸雨: "Research on ship detection technology for unmanned surface vehicles based on deep learning", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049907A (en) * 2022-08-17 2022-09-13 四川迪晟新达类脑智能技术有限公司 FPGA-based YOLOV4 target detection network implementation method
CN115049907B (en) * 2022-08-17 2022-10-28 四川迪晟新达类脑智能技术有限公司 FPGA-based YOLOV4 target detection network implementation method

Also Published As

Publication number Publication date
CN114863258B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN109934826B (en) Image feature segmentation method based on graph convolution network
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN108960135B (en) Dense ship target accurate detection method based on high-resolution remote sensing image
CN107256557A (en) A kind of controllable subdivision curved surface image vector method of error
CN110084241B (en) Automatic ammeter reading method based on image recognition
CN113435282B (en) Unmanned aerial vehicle image ear recognition method based on deep learning
CN111723464A (en) Typhoon elliptic wind field parametric simulation method based on remote sensing image characteristics
CN109087261A (en) Face antidote based on untethered acquisition scene
CN114863258B (en) Method for detecting small target based on visual angle conversion in sea-sky-line scene
CN108460833A (en) A kind of information platform building traditional architecture digital protection and reparation based on BIM
CN103839234A (en) Double-geometry nonlocal average image denoising method based on controlled nuclear
CN109389553B (en) Meteorological facsimile picture contour interpolation method based on T spline
CN115861409B (en) Soybean leaf area measuring and calculating method, system, computer equipment and storage medium
CN112906689A (en) Image detection method based on defect detection and segmentation depth convolution neural network
CN107944497A (en) Image block method for measuring similarity based on principal component analysis
CN109241981B (en) Feature detection method based on sparse coding
CN114742864A (en) Belt deviation detection method and device
CN102324043B (en) Image matching method based on DCT (Discrete Cosine Transformation) through feature description operator and optimization space quantization
CN114627461A (en) Method and system for high-precision identification of water gauge data based on artificial intelligence
CN113946978A (en) Underwater three-dimensional temperature and salinity parallel forecasting method based on LightGBM model
CN116721228B (en) Building elevation extraction method and system based on low-density point cloud
CN113344148A (en) Marine ship target identification method based on deep learning
CN113658144A (en) Method, device, equipment and medium for determining pavement disease geometric information
CN108734148A (en) A kind of public arena image information collecting unmanned aerial vehicle control system based on cloud computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant