CN114299399A - Aircraft target confirmation method based on skeleton line relation - Google Patents


Info

Publication number: CN114299399A
Application number: CN202111363657.0A
Authority: CN (China)
Legal status: Granted; Active
Other versions: CN114299399B (granted publication)
Original language: Chinese (zh)
Inventors: 蒲养林, 刘偲, 袁茂洵, 章黎明, 李波
Applicant/Assignee: Beihang University
Application filed by Beihang University; priority to CN202111363657.0A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an aircraft target confirmation method based on a skeleton line relationship, which mainly comprises the following steps: S1, acquiring a real-time remote sensing image, performing adaptive binarization on it, and representing the edge extraction result as a binary image in which white pixels are edge points and black pixels are non-edge points; S2, extracting the skeleton line relationship of the aircraft target; S3, detecting the key points of the aircraft; and S4, correcting the confidence of the target detection result. The invention improves the interpretability and certainty of target detection results and has high computational efficiency.

Description

Aircraft target confirmation method based on skeleton line relation
Technical Field
The invention relates to the technical field of digital image processing, and in particular to an aircraft target confirmation method based on a skeleton line relationship.
Background
In recent years, deep learning methods have developed rapidly and are widely used in object detection. Object detection methods based on deep learning mainly depend on a large amount of training data: multi-layer features are extracted from the image through many convolution kernels, and the position and category of the target are finally obtained. The network parameters are adjusted over the training data so that the network model becomes more sensitive to certain pixel distributions, which correspond to certain geometric features of the target, such as straight lines and arcs. However, the detection process of current deep-learning-based object detection methods is difficult to explain clearly, so the detection results carry uncertainty. In practical applications with interference, occlusion, and the like, the apparent features of the target are damaged to some extent; when these features differ from the training data, the target is often missed, or the detected target has low confidence.
With the development of the aerospace industry, remote sensing technology is applied more and more widely, and the demand for detecting and identifying targets in remote sensing images keeps increasing. Remote sensing images are typically taken from a vertically downward-looking angle. Compared with everyday horizontal-view images, targets in remote sensing images appear in many orientations and at many scales. Remote sensing images are also subject to a variety of disturbances. First, since the shooting position is usually high, cloud and fog easily occlude the field of view. Second, because the viewing angle is top-down, the field of view is mainly the ground or the sea surface, so target detection can be interfered with by stripes, ground clutter, sea waves, and the like. Finally, the illumination differs across time periods, so image contrast varies, and low contrast also hampers target detection.
Therefore, how to provide an aircraft target confirmation method based on the skeleton line relationship that can effectively output aircraft key points is a problem to be solved urgently by those skilled in the art.
Disclosure of Invention
In view of the above, and aiming at the specific task of aircraft target detection in remote sensing images, the invention provides an aircraft target confirmation method based on a skeleton line relationship whose purpose is to improve the certainty of detection results. Through analysis of the geometric features of the aircraft, the invention can confirm detected aircraft targets, correct the confidence of detection results, and provide effective key points for target tracking.
In order to achieve the purpose, the invention adopts the following technical scheme:
an aircraft target confirmation method based on a skeleton line relation comprises the following steps:
S1, acquiring a real-time remote sensing image, performing adaptive binarization on it, and representing the edge extraction result as a binary image in which white pixels are edge points and black pixels are non-edge points;
S2, extracting the skeleton line relationship of the aircraft target; the specific content comprises:
(1) scanning the pixels of the image row by row and counting the length of the longest run of consecutive white pixels in each row; the row with the largest number of consecutive white pixels among all rows is the row of the fuselage central axis;
(2) dividing the image into two parts by the row of the fuselage central axis; for each part, taking slanted lines that start from points on the fuselage central axis, scanning them in positional order, and counting the number of consecutive white pixels on each slanted line; the slanted line with the largest number is the central axis of the wing in that part; performing the same operation on the other part to obtain the central axis of the other wing;
S3, detecting the key points of the aircraft; a key point is defined as the acute-angle point at the junction of the rear side of the wing and the fuselage; for each pixel on the wing central axis, counting the number of consecutive white pixels in its row; when the number of white pixels changes abruptly between two adjacent rows, the current row is judged to be the wing-fuselage junction, and the run of consecutive white pixels on the current row is the wing-fuselage boundary line; on the boundary line, the first position that changes from a white point to a black point, searched from the white points toward the tail side, is the detected key point position;
S4, correcting the confidence of the target detection result; verifying the detected key points with the known key point information in the template, excluding key points whose deviation is larger than a threshold and retaining the others; assigning a confidence to each key point retention case, and taking a weighted average of the assigned confidence and the target detection confidence to obtain the final corrected confidence.
Preferably, the specific steps of S1 are: converting the acquired real-time detection image into a gray-scale image, and converting the gray-scale image into a binary image, namely, the pixel point corresponding to the airplane target is white, and the pixel point corresponding to the background is black; wherein, the step of converting the gray scale map into a binary map comprises the following steps: adaptive Canny edge extraction, adaptive expansion, and fast hole filling.
Preferably, the specific steps of the adaptive Canny edge extraction include:
(1) after Gaussian smooth filtering is performed on the image, the 3 × 3 neighborhood of each pixel is convolved with the Sobel operators to obtain the horizontal and vertical gradients $G_x$ and $G_y$; the Sobel operators are:

$$S_x=\begin{bmatrix}-1&0&1\\-2&0&2\\-1&0&1\end{bmatrix},\qquad S_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\1&2&1\end{bmatrix}$$
(2) calculating the gradient magnitude $G$ and direction $\theta$:

$$G=\sqrt{G_x^2+G_y^2}$$

$$\theta=\arctan\left(\frac{G_y}{G_x}\right)$$
(3) performing non-maximum suppression, suppressing all gradients except local maxima to 0: the gradient strength of the current pixel is compared with the two pixels along the positive and negative gradient directions; if the gradient strength of the current pixel is the largest of the three, the current pixel is retained as an edge point, otherwise it is suppressed;
(4) self-adaptively selecting a high threshold and a low threshold; calculating a histogram of the gradient amplitudes, selecting the gradient amplitudes accounting for a preset percentage of the total number of the histogram as a high threshold, and selecting half of the high threshold as a low threshold; if the gradient value of the edge pixel is higher than the high threshold value, marking the edge pixel as a strong edge pixel; if the gradient value of the edge pixel is less than the high threshold and greater than the low threshold, marking the edge pixel as a weak edge pixel; if the gradient value of the edge pixel is less than the low threshold, it is suppressed.
Preferably, the specific steps of the adaptive expansion include:
traversing the binary edge-extraction result and dilating each white pixel;
checking whether another white pixel exists within a preset pixel distance above, below, to the left of, and to the right of each white pixel, and if so, setting the points between the current white pixel and the detected white pixel to white.
Preferably, the fast hole filling comprises the following specific steps:
(1) marking all black pixel points in the expanded binary image as an 'unchecked' state;
(2) performing first traversal, and judging whether each non-checked point is a background point or a suspected hole point; firstly, checking whether the upper, lower, left and right adjacent points have background points, if so, directly setting the current pixel point as the background point, and jumping to the next black pixel point for judgment; if not, checking whether white pixel points exist or not from the current pixel point to the upper, lower, left and right boundaries of the image, and if the white pixel points exist in the four directions of the current pixel point, setting the current pixel point as a suspected hole point; after the first traversal, all black pixels are marked as background points or suspected hole points;
(3) performing second traversal, and checking whether the upper, lower, left and right adjacent points have background points or not aiming at each suspected hole point, wherein if yes, the suspected hole point is set as the background point; if not, continuing to keep the suspected hole point;
(4) if the change condition of changing from the suspected hole point to the background point exists in the traversal process of the step (3), performing traversal again and repeating the step (3); until no 'suspected hole point' is changed into a 'background point' in the traversal process of the step (3), ending the second traversal; after the second traversal, all the remaining suspected hole points are actually real hole points;
(5) and traversing each suspected hole point for the third time, and directly changing the suspected hole points into white pixel points.
Preferably, before S2, whether the aircraft is oriented vertically or horizontally is first determined, and skeleton line extraction is then performed only for the two corresponding directions, i.e. up-down or left-right; the specific judgment method comprises:
comparing the up-down symmetry and the left-right symmetry of the image: the image is folded in half up-down and left-right respectively, the number of overlapping white points is counted for each fold, and the fold with the larger count gives the direction of the image's symmetry axis.
Preferably, in S4, the template parameters include the length-width ratio of the fuselage, the angle of inclination of the wing, the position of the intersection of the central axes of the fuselage and the wing, and the position of the key point.
Preferably, in S4, the detected keypoints are verified by using the known keypoint information in the template, and the keypoints with deviations greater than the threshold are excluded, otherwise, the specific contents retained include:
after the skeleton line extraction of S2 is completed, comparing the aspect ratio of the skeleton lines with the aspect ratio in the template parameters; if the difference is within a preset range, continuing with the subsequent steps, otherwise marking the target as low confidence;
aligning the intersection point of the detected skeleton lines with the intersection point of the template skeleton lines, and comparing the position differences between the two detected key points and the template key points respectively; when the position difference is smaller than a preset threshold, the detection result of the current key point is retained, otherwise the current key point is considered a detection error and is excluded.
Preferably, in S4, a confidence is assigned to each case reserved for the keypoint, and the weighted average of the assigned confidence and the confidence of the target detection is performed, and the specific content as the final correction confidence includes:
if both key points are retained, the target is marked as high confidence; if only one key point is retained, medium confidence; if both key points are excluded, low confidence;
taking a weighted average of the aircraft target confirmation confidence and the target detection confidence to obtain a new confidence; in the overall target detection flow, whether the detection reports a result is determined according to the new confidence.
According to the above technical solutions, compared with the prior art, the invention effectively extracts the skeleton line relationship through analysis of the geometric features of the aircraft, effectively improves the certainty of detection results, can confirm detected aircraft targets, corrects the confidence of detection results, and provides effective key points for target tracking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic general flow chart of the aircraft target confirmation method based on the skeleton line relationship provided by the present invention;
Fig. 2 is a schematic flow chart of the fast hole filling method in the aircraft target confirmation method based on the skeleton line relationship provided by the present invention;
Fig. 3 is a schematic diagram of the skeleton line relationship in the aircraft target confirmation method based on the skeleton line relationship provided by the present invention;
Fig. 4 is a schematic diagram of key point extraction in the aircraft target confirmation method based on the skeleton line relationship provided by the present invention;
Fig. 5 is a schematic diagram of the aircraft template parameters in the aircraft target confirmation method based on the skeleton line relationship provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses an aircraft target confirmation method based on a skeleton line relationship, which comprises the following steps:
s1 real-time detection of adaptive edge extraction and dilation of images
As shown in fig. 1, the real-time detection image is a gray scale image. For facilitating subsequent processing, the gray-scale image needs to be converted into a binary image, that is, the pixel point corresponding to the aircraft target is white (the pixel value is 255), and the pixel point corresponding to the background is black (the pixel value is 0). The gray-scale image is converted into a binary image through three processes of self-adaptive Canny edge extraction, self-adaptive expansion and rapid hole filling.
1. Adaptive Canny edge extraction, which aims to obtain the set of points in the image where pixel values change sharply. First, Gaussian smooth filtering is performed on the image, and then the 3 × 3 neighborhood of each pixel is convolved with the Sobel operators (formula 1):

$$S_x=\begin{bmatrix}-1&0&1\\-2&0&2\\-1&0&1\end{bmatrix},\qquad S_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\1&2&1\end{bmatrix}\tag{1}$$

thereby obtaining the horizontal and vertical gradients $G_x$ and $G_y$. The gradient magnitude and direction are then calculated according to formulas 2 and 3:

$$G=\sqrt{G_x^2+G_y^2}\tag{2}$$

$$\theta=\arctan\left(\frac{G_y}{G_x}\right)\tag{3}$$
Non-maximum suppression is then performed, suppressing all gradients except local maxima to 0. Specifically, the gradient strength of the current pixel is compared with the two pixels along the positive and negative gradient directions; if the gradient strength of the current pixel is the largest of the three, the pixel is retained as an edge point, otherwise it is suppressed. A high threshold and a low threshold are then chosen. In order to adapt to the contrast of different gray-scale images, the high and low thresholds in the invention are not specified manually but selected adaptively. Specifically, a histogram of gradient magnitudes is computed, the gradient magnitude below which a preset percentage of the histogram total falls is selected as the high threshold (the percentage can be chosen according to the actual situation; this embodiment takes 70% as an example), and half of the high threshold is selected as the low threshold. Finally, double-threshold detection is performed: if the gradient value of an edge pixel is higher than the high threshold, it is marked as a strong edge pixel; if it is below the high threshold and above the low threshold, it is marked as a weak edge pixel; if it is below the low threshold, it is suppressed.
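The adaptive threshold selection above can be sketched in a few lines (a minimal illustration under our own assumptions — the function name, the pure-Python list representation, and the 70% default are ours, not the patent's):

```python
def adaptive_canny_thresholds(grad_mags, percentile=0.70):
    """Pick the high threshold so that roughly `percentile` of all gradient
    magnitudes fall below it; the low threshold is half the high one."""
    sorted_mags = sorted(grad_mags)
    idx = min(int(percentile * len(sorted_mags)), len(sorted_mags) - 1)
    high = sorted_mags[idx]
    return high, high / 2.0

# Example: for magnitudes 0..99 the 70% high threshold lands at 70.
high, low = adaptive_canny_thresholds(list(range(100)))
```

A fixed-bin histogram, as the text describes, gives the same result up to bin width; sorting is simply the most direct way to read off the percentile.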
2. Adaptive dilation. After edge extraction, the image is binary: edge points are white pixels and non-edge points are black pixels. The binary edge-extraction result is traversed and each white pixel is dilated; this embodiment selects a 3 × 3 dilation size, which should be chosen reasonably in practical applications. The specific method of 3 × 3 dilation is: if the pixel at position (i, j) is white, the pixels at positions (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), and (i+1, j+1) are all set to white. In order to fill gaps so that the aircraft contour forms a connected domain as far as possible without excessively damaging the aircraft's apparent features, adaptive dilation is also needed: for each white pixel, a preset distance (e.g. 5 pixels) above, below, to the left, and to the right is checked for another white pixel, and if one is found, the points between the two are set to white. Since the traversal order is from top to bottom and from left to right, the adaptive dilation only needs to check to the left of and above each point, which reduces the amount of calculation.
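As an illustration of the gap-bridging behaviour of adaptive dilation, the following sketch (on a 0/1 list-of-lists image) checks only to the left and above, matching the traversal-order optimization described above; the function name and the `max_gap` default are illustrative assumptions:

```python
def adaptive_dilate(img, max_gap=5):
    """For each white pixel (1), look up to `max_gap` pixels to the left and
    above for another white pixel; if found, set the pixels between them to
    white.  Returns a modified copy; d starts at 2 because d=1 means the two
    pixels are already adjacent and there is no gap to fill."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if img[i][j] != 1:
                continue
            for d in range(2, max_gap + 1):      # look left
                if j - d >= 0 and img[i][j - d] == 1:
                    for k in range(j - d + 1, j):
                        out[i][k] = 1
                    break
            for d in range(2, max_gap + 1):      # look up
                if i - d >= 0 and img[i - d][j] == 1:
                    for k in range(i - d + 1, i):
                        out[k][j] = 1
                    break
    return out

# Example: a 2-pixel gap between two white pixels is bridged.
bridged = adaptive_dilate([[1, 0, 0, 1, 0]])
```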
3. Fast hole filling
Through the foregoing processing, most white pixels on the aircraft target in the binary image form a connected domain, but holes still exist in the middle, as shown in fig. 1. For subsequent processing, these holes need to be filled. The flow of the fast hole filling method adopted by the invention is shown in fig. 2. The input of the method is the dilated binary image. The method performs three traversals; each traversal visits all black pixels in the image and skips all white pixels. Initially, all black pixels are marked as "unchecked".
In the first pass, each "unchecked" point is classified as a "background point" or a "suspected hole point". From a topological viewpoint, when a neighbor of a black pixel is a "background point", the pixel lies in the same connected domain as that "background point" and therefore is also a "background point". Hence, when judging a point, its upper, lower, left, and right neighbors are first checked for background points; if one exists, the point is directly set as a "background point" and judgment jumps to the next black pixel. If not, it is checked whether white pixels exist between the point and the upper, lower, left, and right image boundaries. If white pixels exist in all four directions, the point may be located in a hole; however, this is a necessary but not sufficient condition, so the point is set as a "suspected hole point". Experiments show that most black pixels are actually background points, and the background-point attribute can be propagated to adjacent points in sequence as described above, so most black pixels are judged quickly (only the neighbors need to be inspected) with little calculation. After the first traversal, all black pixels are marked as "background points" or "suspected hole points".
In the second pass, for each "suspected hole point", the upper, lower, left, and right neighbors are checked for a "background point"; if one exists, the "suspected hole point" is set as a "background point", and if not, it remains a "suspected hole point". If any "suspected hole point" changed into a "background point" during the whole traversal, the traversal is performed again and the process repeats. The second pass ends when no "suspected hole point" changes into a "background point" during an entire traversal. As in the first pass, the "background point" attribute propagates to adjacent points in sequence, so the amount of calculation is small. After the second pass, all remaining "suspected hole points" are actually real hole points.
In the third time, each suspected hole point is traversed, and the suspected hole points are directly changed into white pixel points.
Through the process, the holes in the airplane target are filled at a high speed.
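The three-pass filling can be condensed into the following sketch (a simplified pure-Python rendering on a 0/1 image; helper names are ours — because black pixels are labelled in traversal order, a pixel whose left or upper neighbour was just labelled background is caught immediately, as the text notes):

```python
def fill_holes(img):
    """Three-pass hole filling on a 0/1 binary image (1 = white).  Pass 1
    labels each black pixel background or suspected hole; pass 2 repeatedly
    relabels suspected holes that touch background; pass 3 paints the
    surviving suspected holes white."""
    h, w = len(img), len(img[0])
    BG, HOLE = "bg", "hole"
    label = [[None] * w for _ in range(h)]

    def neighbors(i, j):
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                yield ni, nj

    # Pass 1: background if a 4-neighbor is already background, or if some
    # direction to the border contains no white pixel; otherwise suspected.
    for i in range(h):
        for j in range(w):
            if img[i][j] == 1:
                continue
            if any(label[ni][nj] == BG for ni, nj in neighbors(i, j)):
                label[i][j] = BG
                continue
            surrounded = (
                any(img[k][j] == 1 for k in range(i))
                and any(img[k][j] == 1 for k in range(i + 1, h))
                and any(img[i][k] == 1 for k in range(j))
                and any(img[i][k] == 1 for k in range(j + 1, w))
            )
            label[i][j] = HOLE if surrounded else BG

    # Pass 2: propagate the background label until a fixed point is reached.
    changed = True
    while changed:
        changed = False
        for i in range(h):
            for j in range(w):
                if label[i][j] == HOLE and any(
                    label[ni][nj] == BG for ni, nj in neighbors(i, j)
                ):
                    label[i][j] = BG
                    changed = True

    # Pass 3: the remaining suspected holes are real holes -> paint white.
    return [
        [1 if img[i][j] == 1 or label[i][j] == HOLE else 0 for j in range(w)]
        for i in range(h)
    ]

# Example: the single black pixel inside a white ring gets filled.
ring = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
filled = fill_holes(ring)
```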
S2. Skeleton line relation extraction
The airplane target in the remote sensing image has a more stable appearance characteristic, and the fuselage and the wings have a fixed geometric relationship. The relationship of skeleton lines formed by the central axes of the fuselage and the wings can better reflect the essential geometric characteristics of the airplane target, as shown in fig. 3. And the extraction of the skeleton line of the airplane target is beneficial to the confirmation of the airplane target and the detection of key points.
Take the case where the aircraft nose faces the left side of the image, as in fig. 3, as an example. First, the image is scanned row by row, and the length of the longest run of consecutive white points in each row is counted. Consecutive white points refer to a stretch of sequentially adjacent pixels that are all white; they constitute a continuous white straight-line segment. The row with the largest number of consecutive white points among all rows is the row of the fuselage central axis. When several rows tie for the maximum, the middle row among them is taken. The fuselage central axis divides the image into upper and lower halves. In the upper half of the image, slanted lines starting from points on the fuselage central axis are taken and scanned from left to right in order; the number of consecutive white points on each slanted line is counted, and the line with the most is the central axis of the upper half of the wing. The lower half is handled in the same way. Since the orientation of the nose is not known in advance, scanning can be performed in all four directions (up, down, left, and right); the skeleton line with the largest number of white points among the results is the actual skeleton line, and this also determines the orientation of the aircraft target.
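The row-scan that locates the fuselage central axis can be sketched as follows (an illustrative fragment with our own function names; tie-breaking takes the middle tied row as described above):

```python
def longest_white_run(row):
    """Length of the longest run of consecutive white (1) pixels in a row."""
    best = cur = 0
    for v in row:
        cur = cur + 1 if v == 1 else 0
        best = max(best, cur)
    return best

def fuselage_axis_row(img):
    """Index of the row whose longest white run is maximal; ties are broken
    by taking the middle of the tied rows."""
    runs = [longest_white_run(r) for r in img]
    m = max(runs)
    tied = [i for i, v in enumerate(runs) if v == m]
    return tied[len(tied) // 2]

# Example: the middle row holds the longest continuous white segment.
axis = fuselage_axis_row([[0, 1, 0, 0], [1, 1, 1, 1], [0, 1, 0, 0]])
```

The slanted-line scan for the wing axes is the same counting, applied along lines of a chosen slope instead of along rows.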
In an actual procedure, after the image of a real-time detected aircraft target is obtained, the nose may face any of the up, down, left, and right directions. To reduce the amount of calculation, whether the aircraft is oriented vertically or horizontally can be determined first, and skeleton line extraction is then performed only for the two corresponding directions (up-down or left-right), avoiding extraction in all four directions. The vertical or horizontal orientation is determined by comparing the up-down symmetry and the left-right symmetry of the image: the image is folded in half up-down and left-right respectively, the number of overlapping white points is counted for each fold, and the fold with the larger count gives the direction of the image's symmetry axis. Experiments show that this method reduces the amount of calculation for skeleton line extraction by about 50%.
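The fold-and-count symmetry test can be sketched as follows (illustrative names; the function returns True when the up-down fold overlaps at least as much, i.e. the symmetry axis is horizontal and only left/right skeleton scans are needed):

```python
def fold_overlap_updown(img):
    """White pixels that coincide when the image is folded top-to-bottom."""
    h, w = len(img), len(img[0])
    return sum(
        1
        for i in range(h // 2)
        for j in range(w)
        if img[i][j] == 1 and img[h - 1 - i][j] == 1
    )

def fold_overlap_leftright(img):
    """White pixels that coincide when the image is folded left-to-right."""
    h, w = len(img), len(img[0])
    return sum(
        1
        for i in range(h)
        for j in range(w // 2)
        if img[i][j] == 1 and img[i][w - 1 - j] == 1
    )

def symmetry_axis_is_horizontal(img):
    """True when the up-down fold overlaps at least as much as the
    left-right fold, so the nose faces left or right."""
    return fold_overlap_updown(img) >= fold_overlap_leftright(img)

# Example: a shape symmetric about a horizontal axis.
shape = [
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 1, 1, 1],
]
horizontal = symmetry_axis_is_horizontal(shape)
```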
S3. Preliminary detection of key points
Key points refer to certain locations on the aircraft target that have significant apparent features. The key point defined in the invention is the acute-angle point on the wing-fuselage boundary line on the side near the tail, as shown in fig. 4.
Taking fig. 4 as an example, after the skeleton lines are obtained, each point of the wing central axis is traversed starting from the row of the fuselage central axis. For each point on the wing central axis, the length of the horizontal run of consecutive white pixels through that point is counted. Near the fuselage central axis, this length is close to the length of the fuselage central axis; on the wing, it is significantly lower. Combined with the geometric features of the aircraft itself, there must be an abrupt change in the white-run length in between. The position where the white-run length first falls below 50% of the fuselage central-axis length is defined as the wing-fuselage boundary line. After the boundary line is obtained, the first position that changes from a white point to a black point is searched for on the tail side; this is the detected key point position.
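A minimal sketch of the key point search (our own helper names and a simplified toy image; `tail_step=+1` assumes the tail lies to the right, i.e. the nose faces left as in the example above):

```python
def run_length_at(img, i, j):
    """Length of the horizontal run of consecutive white pixels through (i, j)."""
    if img[i][j] != 1:
        return 0
    w = len(img[0])
    left = j
    while left > 0 and img[i][left - 1] == 1:
        left -= 1
    right = j
    while right < w - 1 and img[i][right + 1] == 1:
        right += 1
    return right - left + 1

def wing_fuselage_boundary(img, wing_axis_pts, fuselage_run_len):
    """First point along the wing axis (walking away from the fuselage) whose
    horizontal white run falls below 50% of the fuselage run length."""
    for i, j in wing_axis_pts:
        if run_length_at(img, i, j) < 0.5 * fuselage_run_len:
            return (i, j)
    return None

def keypoint_on_row(img, i, j, tail_step=1):
    """Step from (i, j) toward the tail until the first white-to-black
    transition; the last white pixel is the acute-angle key point."""
    w = len(img[0])
    k = j
    while 0 <= k + tail_step < w and img[i][k + tail_step] == 1:
        k += tail_step
    return (i, k)

# Toy example: row 1 is the fuselage axis, row 0 carries part of a wing.
wing_img = [
    [0, 0, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]
fuselage_len = run_length_at(wing_img, 1, 3)
boundary = wing_fuselage_boundary(wing_img, [(1, 3), (0, 3)], fuselage_len)
keypoint = keypoint_on_row(wing_img, *boundary)
```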
S4. Key point verification and aircraft template confirmation using existing template parameters
The template parameters include the fuselage aspect ratio, the wing inclination angle, the position of the intersection of the fuselage and wing central axes, and the key point positions, as shown in fig. 5. After the skeleton line extraction of S2 is completed, the aspect ratio of the skeleton lines is first compared with the aspect ratio in the template parameters; if the difference is within a certain range, the subsequent steps continue, otherwise the target is marked as low confidence. Next, the intersection point of the detected skeleton lines is aligned with the intersection point of the template skeleton lines, and the position differences between the two detected key points and the template key points are compared respectively. When the position difference is smaller than a certain threshold, the detection result of that key point is retained; otherwise the key point is considered a detection error and is excluded. If both key points are retained, the target is marked with high confidence 1.0; if only one is retained, medium confidence 0.8; if both are excluded, low confidence 0.6 (the confidence values can be set according to the actual situation and are not unique). The aircraft target confirmation confidence and the target detection confidence are then averaged with weights to obtain a new confidence. In the overall target detection flow, whether the detection reports a result is determined according to the new confidence.
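The confidence correction of S4 reduces to a small amount of arithmetic; a sketch (the 1.0/0.8/0.6 values follow the example above, while the equal weighting `alpha=0.5` is a hypothetical choice — the text only specifies a weighted average):

```python
def confirmation_confidence(kp1_kept, kp2_kept):
    """Map the number of retained key points to a confirmation confidence
    (1.0 / 0.8 / 0.6 here; tunable in practice)."""
    kept = int(kp1_kept) + int(kp2_kept)
    return {2: 1.0, 1: 0.8, 0: 0.6}[kept]

def corrected_confidence(det_conf, confirm_conf, alpha=0.5):
    """Weighted average of the detector confidence and the confirmation
    confidence; `alpha` weights the detector."""
    return alpha * det_conf + (1 - alpha) * confirm_conf

# Example: a 0.6 detection with both key points confirmed rises to 0.8.
new_conf = corrected_confidence(0.6, confirmation_confidence(True, True))
```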
(1) The invention proposes the skeleton line relationship of the aircraft target and an extraction method for it. A remote sensing image is taken from an overhead viewing angle, and the appearance of the aircraft has relatively stable geometric features. Specifically, the central axis of the fuselage and the central axes of the two wings form a relatively fixed geometric relationship: for example, the wing central axes form a certain included angle with the fuselage central axis, and the two wing central axes are axially symmetric about the fuselage central axis. We call this stable central-axis structure the skeleton line relationship. The proposed skeleton line extraction method can effectively obtain the skeleton line relationship of the aircraft target, so as to verify the aircraft target detection result;
(2) The invention proposes a method for confirming the aircraft target detection result and correcting its confidence. After the skeleton line relationship of the aircraft target is extracted, the key points of the aircraft target are further extracted. Key points refer to points with significant apparent features, such as the acute included angle at the connection of the wing and the fuselage. Aircraft of different models have different skeleton line relationships and key point positions. After an aircraft of a certain model is detected, the extracted key point results are verified against the relevant parameters of the existing aircraft template, a decision is made to retain or exclude each key point result, and the confidence of the target detection result is further corrected.
The skeleton-line-based aircraft target confirmation method is proposed mainly to address the lack of interpretability and the uncertainty of visible-light or near-infrared aircraft target detection, and it also applies to confirming aircraft targets in remote sensing images at other resolutions. For aircraft targets at other visible-light resolutions or in other application scenarios, confirmation only requires building the corresponding aircraft target model and obtaining the corresponding template parameters.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the others, and the parts that are the same or similar across embodiments can be cross-referenced. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is kept brief, and the relevant details can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. An aircraft target confirmation method based on a skeleton line relation is characterized by comprising the following steps:
s1, acquiring a real-time remote sensing image, performing adaptive binarization on it, and representing the edge extraction result as a binary image in which white pixels are edge points and black pixels are non-edge points;
s2, extracting the skeleton-line relationship of the aircraft target, which specifically comprises the following steps:
(1) scanning the pixels of the image row by row and counting the length of the longest run of consecutive white pixels in each row; the row with the longest such run among all rows is the row of the fuselage central axis;
(2) the fuselage central-axis row divides the image into two parts; for each part, taking a point on the fuselage central axis as the starting point, drawing sloped lines, scanning them in positional order, and counting the number of consecutive white pixels on each sloped line; the sloped line with the largest count is the central axis of the wing in the current part; performing the same operation on the other part yields the central axis of the other wing;
s3, detecting the key points of the aircraft; a key point is defined as the acute-angle point at the junction of the rear side of the wing and the fuselage; for each pixel on the wing central axis, counting the number of consecutive white pixels in that pixel's row; when the white-pixel counts of two adjacent rows change abruptly, the current row is judged to contain the wing-fuselage junction, and the run of consecutive white pixels in that row is the wing-fuselage junction line; on this junction line, searching from the white points toward the tail side of the aircraft for the first position where a white point turns into a black point, which is taken as the detected key-point position;
s4, correcting the confidence of the target detection result; verifying the detected key points against the known key-point information in the template, excluding key points whose deviation exceeds a threshold and retaining the others; assigning a confidence to each key-point retention case, and taking a weighted average of the assigned confidence and the target detection confidence as the final corrected confidence.
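Step s2(1) of the claim — finding the fuselage central axis as the row with the longest run of consecutive white pixels — can be illustrated with a minimal sketch. The nested-list 0/1 image representation and the function names are assumptions for illustration; the claim does not fix a data structure.

```python
# Sketch of step s2(1): the fuselage central axis is the row whose longest
# run of consecutive white (1) pixels is the largest among all rows.

def longest_run(row):
    """Length of the longest run of consecutive 1s in one row."""
    best = cur = 0
    for px in row:
        cur = cur + 1 if px == 1 else 0
        best = max(best, cur)
    return best

def fuselage_axis_row(img):
    """Index of the row with the longest run of white pixels."""
    runs = [longest_run(r) for r in img]
    return max(range(len(img)), key=lambda i: runs[i])

img = [
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],   # longest run -> fuselage central axis
    [0, 1, 1, 0, 1, 0],
]
print(fuselage_axis_row(img))  # → 1
```

The same run-counting idea is reused in step s2(2) along sloped lines to locate the wing axes.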
2. The aircraft target confirmation method based on a skeleton line relation according to claim 1, wherein step S1 comprises: converting the acquired real-time detection image into a gray-scale image, and converting the gray-scale image into a binary image in which pixels belonging to the aircraft target are white and background pixels are black; the conversion from gray-scale image to binary image comprises adaptive Canny edge extraction, adaptive dilation, and fast hole filling.
3. The aircraft target confirmation method based on a skeleton line relation according to claim 2, wherein the adaptive Canny edge extraction specifically comprises:
(1) after Gaussian smoothing of the image, convolving the 3 × 3 neighborhood of each pixel with the Sobel operators to obtain the gradients $G_x$ and $G_y$ in the horizontal and vertical directions; the Sobel operators are:

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$

(2) calculating the gradient magnitude $G$ and direction $\theta$:

$$G = \sqrt{G_x^2 + G_y^2}$$

$$\theta = \arctan\left(\frac{G_y}{G_x}\right)$$
(3) performing non-maximum suppression, suppressing all gradients that are not local maxima to 0: comparing the gradient magnitude of the current pixel with the two pixels along the positive and negative gradient directions, retaining the current pixel as an edge point only if its gradient magnitude is the largest of the three, and suppressing it otherwise;
(4) adaptively selecting the high and low thresholds: computing a histogram of the gradient magnitudes, selecting as the high threshold the gradient magnitude below which a preset percentage of the histogram total falls, and taking half of the high threshold as the low threshold; edge pixels whose gradient value is above the high threshold are marked as strong edge pixels; edge pixels whose gradient value is below the high threshold but above the low threshold are marked as weak edge pixels; edge pixels whose gradient value is below the low threshold are suppressed.
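The histogram-based threshold selection of step (4) can be sketched as follows. NumPy, the 256-bin histogram, and the 70% fraction are illustrative assumptions; the claim only specifies "a preset percentage of the total".

```python
import numpy as np

# Sketch of the adaptive threshold selection: pick the high threshold so that
# a preset fraction of gradient magnitudes falls below it, and set the low
# threshold to half the high one.

def adaptive_thresholds(grad_mag: np.ndarray, fraction: float = 0.7):
    hist, bin_edges = np.histogram(grad_mag, bins=256)
    cdf = np.cumsum(hist) / hist.sum()
    idx = int(np.searchsorted(cdf, fraction))       # first bin reaching the fraction
    high = bin_edges[min(idx + 1, len(bin_edges) - 1)]  # right edge of that bin
    low = high / 2.0
    return low, high

rng = np.random.default_rng(0)
mags = rng.rayleigh(scale=10.0, size=10_000)  # synthetic gradient magnitudes
low, high = adaptive_thresholds(mags)
print(low < high)  # low is by construction half of high
```

Because the thresholds are derived from the image's own gradient statistics, the edge extraction adapts to varying contrast across remote sensing scenes.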
4. The aircraft target confirmation method based on a skeleton line relation according to claim 2, wherein the adaptive dilation specifically comprises:
traversing the binary edge-extraction image and dilating each white pixel: checking whether another white pixel exists within a preset pixel distance above, below, to the left, or to the right of each white pixel, and if so, setting the pixels between the current white pixel and the detected white pixel to white.
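A minimal sketch of this adaptive dilation follows. The nested-list 0/1 image, the function name, and the distance parameter `d` are illustrative assumptions.

```python
# Sketch of the adaptive dilation of claim 4: for every white pixel, look up,
# down, left and right within a preset distance d; when another white pixel
# is found in a direction, turn every pixel between the pair white.

def adaptive_dilate(img, d=2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    whites = [(y, x) for y in range(h) for x in range(w) if img[y][x] == 1]
    for y, x in whites:
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for step in range(2, d + 1):   # step 1 is adjacent, no gap to fill
                ny, nx = y + dy * step, x + dx * step
                if 0 <= ny < h and 0 <= nx < w and img[ny][nx] == 1:
                    for s in range(1, step):          # fill the gap between the pair
                        out[y + dy * s][x + dx * s] = 1
                    break
    return out

# A 2-pixel gap within reach (d=3) is bridged; larger gaps are left alone.
print(adaptive_dilate([[1, 0, 0, 1, 0]], d=3))  # → [[1, 1, 1, 1, 0]]
```

This closes small breaks in the extracted edges so that the subsequent hole filling sees closed contours.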
5. The aircraft target confirmation method based on a skeleton line relation according to claim 2, wherein the fast hole filling specifically comprises:
(1) marking all black pixels in the dilated binary image as "unchecked";
(2) performing a first traversal to judge whether each unchecked point is a background point or a suspected hole point: first checking whether any of its upper, lower, left, or right neighbors is a background point; if so, directly setting the current pixel as a background point and moving on to the next black pixel; if not, checking whether white pixels exist between the current pixel and the upper, lower, left, and right image boundaries, and if white pixels exist in all four directions, setting the current pixel as a suspected hole point; after the first traversal, every black pixel is marked either as a background point or as a suspected hole point;
(3) performing a second traversal: for each suspected hole point, checking whether any of its upper, lower, left, or right neighbors is a background point; if so, changing the suspected hole point into a background point, otherwise keeping it as a suspected hole point;
(4) if any suspected hole point was changed into a background point during step (3), traversing again and repeating step (3); the second traversal ends once a pass of step (3) changes no suspected hole point into a background point; after the second traversal, all remaining suspected hole points are real hole points;
(5) performing a third traversal over the remaining suspected hole points and directly changing them into white pixels.
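The three traversals of claim 5 can be sketched as below. The nested-list 0/1 image is an assumed representation, and treating out-of-image neighbors as background in the first pass is an interpretation of the claim, since a border pixel can never be enclosed by white in all four directions.

```python
# Sketch of the fast hole filling: classify black pixels as background or
# suspected holes, iteratively demote suspected holes that touch background,
# then paint the survivors white.

def fill_holes(img):
    h, w = len(img), len(img[0])
    BG, HOLE = "bg", "hole"
    state = {}
    # First traversal: background point or suspected hole point.
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1:
                continue
            neigh = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= ny < h and 0 <= nx < w) or state.get((ny, nx)) == BG
                   for ny, nx in neigh):
                state[(y, x)] = BG
                continue
            # White somewhere in all four directions -> suspected hole.
            enclosed = (any(img[i][x] for i in range(y)) and
                        any(img[i][x] for i in range(y + 1, h)) and
                        any(img[y][j] for j in range(x)) and
                        any(img[y][j] for j in range(x + 1, w)))
            state[(y, x)] = HOLE if enclosed else BG
    # Second traversal, repeated until stable: demote holes touching background.
    changed = True
    while changed:
        changed = False
        for (y, x), s in list(state.items()):
            if s == HOLE and any(state.get(n) == BG
                                 for n in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))):
                state[(y, x)] = BG
                changed = True
    # Third traversal: remaining suspected holes are real holes -> white.
    out = [row[:] for row in img]
    for (y, x), s in state.items():
        if s == HOLE:
            out[y][x] = 1
    return out

img = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
]
print(fill_holes(img))  # the enclosed 0 at (1,1) turns white; border 0s stay
```

The two-phase scheme avoids a flood fill from every black pixel, which is why the claim calls it "fast".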
6. The aircraft target confirmation method based on a skeleton line relation according to claim 1, wherein before step S2, whether the aircraft is oriented vertically or horizontally is judged, and skeleton-line extraction is then performed for the determined orientation; the two candidate orientations are up-down and left-right; the orientation is judged as follows:
comparing the up-down symmetry and the left-right symmetry of the image: folding the image in half up-down and left-right respectively, counting the number of overlapping white points for each fold, and taking the fold with the larger count as indicating the direction of the image's symmetry axis.
7. The aircraft target confirmation method based on a skeleton line relation according to claim 1, wherein in S4 the template parameters include the fuselage aspect ratio, the wing inclination angle, the position of the intersection of the fuselage and wing central axes, and the key-point positions.
8. The aircraft target confirmation method based on a skeleton line relation according to claim 1, wherein in S4 the detected key points are verified against the known key-point information in the template, key points whose deviation exceeds a threshold are excluded, and the others are retained, specifically comprising:
after the skeleton-line extraction of S3 is completed, comparing the aspect ratio of the skeleton lines with the aspect ratio in the template parameters; if the difference is within a preset range, continuing with the subsequent steps, otherwise marking the result as low confidence;
aligning the intersection point of the detected skeleton lines with the intersection point of the template skeleton lines, and comparing the position differences between the two detected key points and the template key points respectively; when the position difference is smaller than a preset threshold, retaining the current key-point detection result, otherwise regarding the current key-point detection as wrong and excluding it.
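After alignment, the keep-or-exclude decision of claim 8 reduces to a distance test per key point, sketched below. The template coordinates, the 5-pixel threshold, and the function name are made-up illustrative values, not parameters from the patent.

```python
# Sketch of the key-point check of claim 8: after aligning the detected
# skeleton-line intersection with the template intersection, keep a detected
# key point only if it lies within `threshold` pixels of its template position.

def verify_keypoints(detected, template, threshold=5.0):
    kept = []
    for (dx, dy), (tx, ty) in zip(detected, template):
        dist = ((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5
        if dist < threshold:
            kept.append((dx, dy))   # retain: close enough to the template
        # otherwise the detection is considered wrong and is excluded
    return kept

template_kps = [(30.0, 12.0), (30.0, -12.0)]   # hypothetical template key points
detected_kps = [(32.0, 13.0), (48.0, -10.0)]   # second one is far off
print(verify_keypoints(detected_kps, template_kps))  # → [(32.0, 13.0)]
```

The number of retained key points (0, 1, or 2) then drives the confidence assignment of claim 9.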
9. The aircraft target confirmation method based on a skeleton line relation according to claim 1, wherein assigning a confidence to each key-point retention case in S4 and taking the weighted average of the assigned confidence and the target detection confidence as the final corrected confidence specifically comprises:
marking the result as high confidence if both key points are retained, as medium confidence if only one key point is retained, and as low confidence if both key points are excluded;
taking the weighted average of the aircraft target confirmation confidence and the target detection confidence to obtain a new confidence; and, in the overall target detection pipeline, determining according to the new confidence whether the detection reports a result.
CN202111363657.0A 2021-11-17 2021-11-17 Aircraft target confirmation method based on skeleton line relation Active CN114299399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111363657.0A CN114299399B (en) 2021-11-17 2021-11-17 Aircraft target confirmation method based on skeleton line relation


Publications (2)

Publication Number Publication Date
CN114299399A true CN114299399A (en) 2022-04-08
CN114299399B CN114299399B (en) 2024-06-11

Family

ID=80966513


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102004922A (en) * 2010-12-01 2011-04-06 南京大学 High-resolution remote sensing image plane extraction method based on skeleton characteristic
CN103617328A (en) * 2013-12-08 2014-03-05 中国科学院光电技术研究所 Aircraft three-dimensional attitude calculation method
CN111368603A (en) * 2018-12-26 2020-07-03 北京眼神智能科技有限公司 Airplane segmentation method and device for remote sensing image, readable storage medium and equipment
CN110018170A (en) * 2019-04-15 2019-07-16 中国民航大学 A kind of small-sized damage positioning method of aircraft skin based on honeycomb moudle
WO2020224424A1 (en) * 2019-05-07 2020-11-12 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer readable storage medium, and computer device
CN110443201A (en) * 2019-08-06 2019-11-12 哈尔滨工业大学 The target identification method merged based on the shape analysis of multi-source image joint with more attributes
CN110726720A (en) * 2019-11-07 2020-01-24 昆明理工大学 Method for detecting suspended substances in drinking mineral water

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118052997A (en) * 2024-04-16 2024-05-17 北京航空航天大学 Target confirmation method embedded with physical characteristics and common sense
CN118052997B (en) * 2024-04-16 2024-08-16 北京航空航天大学 Target confirmation method embedded with physical characteristics and common sense



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant