CN116229084A - Aerial target detection method - Google Patents

Aerial target detection method

Info

Publication number
CN116229084A
Authority
CN
China
Prior art keywords
image
target
response value
candidate
point
Prior art date
Legal status
Pending
Application number
CN202211727480.2A
Other languages
Chinese (zh)
Inventor
曾钦勇
刘圣杰
尹小杰
李双龙
Current Assignee
Chengdu Haofu Technology Co ltd
Original Assignee
Chengdu Haofu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Haofu Technology Co ltd filed Critical Chengdu Haofu Technology Co ltd
Priority to CN202211727480.2A
Publication of CN116229084A
Legal status: Pending (current)

Classifications

    • G06V10/36 Image preprocessing: applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; non-linear local filtering operations, e.g. median filtering
    • G06V10/26 Image preprocessing: segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/457 Local feature extraction by analysing connectivity, e.g. edge linking, connected component analysis or slices
    • G06V2201/07 Indexing scheme: target detection
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses an aerial target detection method comprising the following steps: preprocessing the input image; computing the gradients of the processed image; removing ground scenery from the image; detecting corner points in the sky region; performing non-maximum suppression; selecting target candidate points; performing saliency segmentation around the target candidate points to extract targets; and deduplicating the targets. The method has a small computational load and can process images with a resolution of 1920×1080 in real time. By exploiting gradient information and feature saliency in the picture, the invention effectively suppresses interference from ground scenery and cloud backgrounds on target detection, extracts targets by segmentation, and deduplicates the resulting targets, ensuring the completeness and uniqueness of the final detection result and improving the accuracy of aerial target detection.

Description

Aerial target detection method
Technical Field
The invention relates to the field of computer vision, and in particular to an aerial target detection method.
Background
Aerial target detection has important research significance and application value in fields such as air defense and aerial reconnaissance. In practical use it is desirable to find the target as early as possible; at that point the target is far from the observation point and its image in the picture is small, sometimes only one or two pixels. Such an application scenario is poorly suited to deep-learning-based target recognition. Traditional aerial target detection methods generally rely on brightness information, edge information, and the like, but these methods are easily disturbed by ground scenery and cloud backgrounds, leading to high false-alarm rates and incompletely detected targets.
Disclosure of Invention
Accordingly, to remedy the above-mentioned shortcomings, the present invention provides an aerial target detection method that addresses the prior art's susceptibility to interference from ground scenery and cloud backgrounds and its incomplete detection of targets.
The invention is realized by the following steps:
Step one, preprocessing the image;
if the input image is a color image, it is converted into a gray image, and then the gray image is median-filtered. Pixels of one circle of image boundary retain the original gray value, and other pixels select gray intermediate values from the 3×3 area with the pixels as the center as the final gray value.
The color image is converted into a gray image as follows:
I=0.299×R+0.587×G+0.114×B
where R, G, and B represent the red, green, and blue color components of the color image, respectively, and I represents the converted gray value.
The median filtering process is as follows:
$$\tilde{I}_{i,j}=\begin{cases}I_{i,j}, & i\in\{1,H\}\ \text{or}\ j\in\{1,W\}\\ \operatorname{median}\{I_{m,n}\,:\,|m-i|\le 1,\ |n-j|\le 1\}, & \text{otherwise}\end{cases}$$
wherein W and H respectively represent the width and height of the image, i and j respectively represent the row and column positions in the gray image, i ∈ [1, 2, …, H], j ∈ [1, 2, …, W], I_{i,j} represents the original gray value at position (i, j), median{·} represents the operation of obtaining the middle gray value from a pixel set, and \tilde{I}_{i,j} represents the median-filtered gray value at position (i, j).
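For illustration, this preprocessing step can be sketched as below in NumPy; the function name, the RGB channel order, and the brute-force 3×3 loop are illustrative assumptions, not part of the patent.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Minimal sketch of step one: grayscale conversion followed by a
    3x3 median filter whose one-pixel border keeps the original values."""
    if image.ndim == 3:
        # I = 0.299 R + 0.587 G + 0.114 B; RGB channel order is assumed
        r, g, b = image[..., 0], image[..., 1], image[..., 2]
        gray = 0.299 * r + 0.587 * g + 0.114 * b
    else:
        gray = image.astype(np.float64)
    filtered = gray.copy()  # the border ring keeps its original gray values
    h, w = gray.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # median of the 3x3 neighborhood centered on (i, j)
            filtered[i, j] = np.median(gray[i - 1:i + 2, j - 1:j + 2])
    return filtered
```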
Step two, gradient calculation;
and (3) respectively calculating the horizontal gradient and the vertical gradient of the median filtered image obtained in the step (I). The specific operation is that the horizontal gradient and the vertical gradient of the pixels of a circle of image boundary are set to 0, and other pixels use an operator [ -1 0 1]Solving the gradient in the horizontal direction by using an operator [ -1 0 1] T And (5) solving the gradient in the vertical direction.
The specific calculation process is as follows:
$$\mathrm{gradx}_{i,j}=\begin{cases}0, & j\in\{1,W\}\\ \tilde{I}_{i,j+1}-\tilde{I}_{i,j-1}, & \text{otherwise}\end{cases}\qquad \mathrm{grady}_{i,j}=\begin{cases}0, & i\in\{1,H\}\\ \tilde{I}_{i+1,j}-\tilde{I}_{i-1,j}, & \text{otherwise}\end{cases}$$
wherein W and H respectively represent the width and height of the image, i and j respectively represent the row and column positions in the gray image, i ∈ [1, 2, …, H], j ∈ [1, 2, …, W], \tilde{I}_{i,j} represents the median-filtered gray value at position (i, j), and gradx_{i,j} and grady_{i,j} respectively represent the horizontal and vertical gradients at position (i, j).
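A NumPy sketch of this gradient computation follows; it applies the [-1 0 1] central difference with zeroed borders, and the names are illustrative.

```python
import numpy as np

def gradients(img: np.ndarray):
    """Sketch of step two: [-1 0 1] horizontal and [-1 0 1]^T vertical
    gradients; the one-pixel image border is set to 0."""
    gradx = np.zeros_like(img, dtype=np.float64)
    grady = np.zeros_like(img, dtype=np.float64)
    gradx[:, 1:-1] = img[:, 2:] - img[:, :-2]  # I[i, j+1] - I[i, j-1]
    grady[1:-1, :] = img[2:, :] - img[:-2, :]  # I[i+1, j] - I[i-1, j]
    return gradx, grady
```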
Step three, removing the ground scene;
the invention aims at detecting an empty target, but in actual use, a ground scene is inevitably generated in a picture. The ground scene often contains complex texture features, which can cause interference to the detection of subsequent real air targets, so that the ground scene needs to be removed. According to the actual use scene, the ground scene has more complex textures than the air scene, and the ground scene is intensively distributed below the picture. Therefore, the ground scene can be removed by using the texture information and the position information.
Specifically, using the horizontal and vertical gradients obtained in step two, count for each row of the image the number N_j of pixels whose sum of gradient absolute values exceeds a set threshold T_G, j ∈ [1, 2, …, H], where j represents the row number, H represents the image height, and N_j represents the number of qualifying pixels in row j. Then search the rows from top to bottom for the first row satisfying N_j ≥ T, where T represents a set demarcation threshold; that row is taken as the sky-ground boundary, and the rows below it are removed as ground scene.
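A sketch of this row-statistic test is given below; returning the first qualifying row as the sky-ground boundary, with everything below treated as ground, is an assumption consistent with the text, and the names are illustrative.

```python
import numpy as np

def sky_ground_boundary(gradx, grady, t_g: float, t: int) -> int:
    """Sketch of step three: per row, count pixels whose summed gradient
    magnitude |gradx| + |grady| exceeds T_G, then scan top to bottom for
    the first row whose count reaches T."""
    n_j = np.sum(np.abs(gradx) + np.abs(grady) > t_g, axis=1)
    rows = np.nonzero(n_j >= t)[0]
    # rows from the boundary downward are treated as ground scene
    return int(rows[0]) if rows.size else gradx.shape[0]
```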
Step four, detecting corner points in the region where the sky scene is located;
and (3) calculating the angular point response value of each pixel point in the region according to the following formula by utilizing the horizontal gradient and the vertical gradient obtained in the step two.
$$M_j=\sum_{p\in B_j}\begin{bmatrix}\mathrm{gradx}_p^2 & \mathrm{gradx}_p\,\mathrm{grady}_p\\ \mathrm{gradx}_p\,\mathrm{grady}_p & \mathrm{grady}_p^2\end{bmatrix},\qquad S_j=\det(M_j)-k\times(\operatorname{trace}(M_j))^2$$
wherein j represents the index of a pixel in the image, B_j represents the set of pixels in the 7×7 region centered on pixel j, gradx and grady respectively represent the horizontal and vertical gradients of the image, M_j represents the gradient covariance matrix at pixel j, det(·) represents the determinant of a matrix, trace(·) represents the trace of a matrix, k is an adjustable parameter, and S_j is the corner response value at pixel j.
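A sketch of the corner-response computation follows; summing the covariance terms over the 7×7 window with SciPy's uniform_filter and the default k = 0.04 are assumptions (the patent only calls k adjustable).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def corner_response(gradx, grady, k: float = 0.04, win: int = 7):
    """Sketch of step four: per-pixel gradient covariance matrix M summed
    over a win x win window, then S = det(M) - k * trace(M)^2."""
    area = float(win * win)  # uniform_filter averages, so rescale to a sum
    a = uniform_filter(gradx * gradx, size=win) * area  # sum of gradx^2
    b = uniform_filter(grady * grady, size=win) * area  # sum of grady^2
    c = uniform_filter(gradx * grady, size=win) * area  # sum of gradx*grady
    det_m = a * b - c * c    # determinant of the 2x2 matrix at each pixel
    trace_m = a + b          # trace of the 2x2 matrix at each pixel
    return det_m - k * trace_m ** 2
```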
Step five, non-maximum suppression;
Perform non-maximum suppression on the corner response values obtained in step four. Specifically, given a minimum response threshold S_min, the corner response value of each pixel is compared simultaneously with S_min and with the corner response values of the pixels in its 7×7 neighborhood. If the pixel's corner response value does not both reach S_min and equal or exceed every corner response value in its 7×7 neighborhood, the pixel's corner response value is set to 0.
The specific formula is as follows:
$$\hat{S}_j=\begin{cases}S_j, & S_j\ge S_{\min}\ \text{and}\ S_j\ge\max\{S_p\,:\,p\in B_j\}\\ 0, & \text{otherwise}\end{cases}$$
wherein S_j represents the corner response value at pixel j, S_min is the minimum corner response threshold, B_j represents the set of pixels in the 7×7 neighborhood of pixel j, {S_p : p ∈ B_j} is the set of their corner response values, and \hat{S}_j represents the non-maximum suppression result of the corner response value at pixel j.
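A sketch of this suppression rule; SciPy's maximum_filter supplies the 7×7 neighborhood maximum, and the names are illustrative.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def non_max_suppress(s: np.ndarray, s_min: float, win: int = 7):
    """Sketch of step five: keep a corner response only if it reaches
    S_min and equals the maximum of its win x win neighborhood."""
    local_max = maximum_filter(s, size=win)
    keep = (s >= s_min) & (s >= local_max)
    return np.where(keep, s, 0.0)
```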
Step six, selecting target candidate points;
Considering both the response values obtained in step five and their positions in the image, select the N points with the largest response values as target candidate points, where N represents the number of selected candidate points; its specific value is determined by the application.
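A sketch of the top-N selection; since the patent does not spell out how position is weighted against response value, this sketch ranks by response value alone.

```python
import numpy as np

def top_candidates(s_hat: np.ndarray, n: int):
    """Sketch of step six: the N surviving pixels with the largest
    suppressed corner responses become target candidate points."""
    flat = np.argsort(s_hat, axis=None)[::-1][:n]  # indices of the n largest
    rows, cols = np.unravel_index(flat, s_hat.shape)
    # drop positions whose response was suppressed to 0
    return [(int(r), int(c)) for r, c in zip(rows, cols) if s_hat[r, c] > 0]
```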
Step seven, cropping a region centered on each candidate point and performing image segmentation;
and D, taking the candidate points obtained in the step six as the center, intercepting a proper area from the median filtered image obtained in the step two, and then performing saliency segmentation on the intercepted image to obtain a candidate target from each candidate point.
The segmentation for each candidate point proceeds as follows (a sketch of this procedure is given after the list):
(1) According to the approximate size of the target under actual operating conditions, crop a candidate target region from the image centered on the candidate point; to ensure the cropped region completely covers the target, set three crop scale factors and crop a region at each of the three scales around the candidate point as candidate target regions;
(2) Obtain a segmentation threshold for each region;
(3) Binarize each candidate target region according to the segmentation threshold obtained in (2);
(4) Perform connected-component analysis on the binarized images obtained in (3) to obtain the position and size of the targets in each region;
(5) For each region, screen a suitable target from the target set obtained in (4) using the targets' appearance characteristics;
(6) Combine the target screening results of the three crop-scale target regions to obtain the final target position and size for the candidate point.
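The sketch below covers sub-steps (1)-(6); the base crop size, the mean-plus-2·std segmentation threshold, the bright-target binarization, and the area-based appearance screen are all assumptions, since the patent leaves these unspecified.

```python
import numpy as np
from scipy.ndimage import label

def segment_candidate(img, point, base_size: int = 32, scales=(1, 2, 4)):
    """Sketch of sub-steps (1)-(6): crop the candidate region at three
    scales, threshold, binarize, run connected-component analysis, and
    screen components by a crude appearance (area) criterion."""
    r0, c0 = point
    h, w = img.shape
    boxes = []
    for s in scales:                              # three crop scale factors
        half = base_size * s // 2
        r1, r2 = max(0, r0 - half), min(h, r0 + half)
        c1, c2 = max(0, c0 - half), min(w, c0 + half)
        region = img[r1:r2, c1:c2]
        thr = region.mean() + 2.0 * region.std()  # assumed saliency threshold
        binary = region > thr   # assumes a bright target; a dark one flips this
        labels, num = label(binary)               # connected-component analysis
        for idx in range(1, num + 1):
            ys, xs = np.nonzero(labels == idx)
            if 1 <= ys.size <= region.size // 4:  # crude appearance screen
                boxes.append((int(r1 + ys.min()), int(c1 + xs.min()),
                              int(ys.max() - ys.min() + 1),
                              int(xs.max() - xs.min() + 1)))
    # combining the three scales' results is handled by the dedup step
    return boxes
```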
Step eight, deduplicating the candidate targets;
because the candidate points obtained in the step six may be points of different positions of the same target, and the targets segmented by the different candidate points in the step seven may be the same target, the target set in the step seven needs to be subjected to deduplication. And D, judging whether the targets belong to the same target according to the relationship between the position and the size of the candidate targets obtained by each candidate point in the step seven, and if the two targets belong to the same target, removing one target with smaller size.
The invention has the following beneficial effects: gradient information of the scene in the picture is used to separate the ground region from the sky region; targets are then extracted from the sky region using their feature saliency, which effectively suppresses interference from backgrounds such as ground scenery and cloud layers; finally, saliency segmentation yields the position and size of each target, and the targets are deduplicated according to the positional and size relationships between candidates, ensuring the completeness and uniqueness of the final detection result and improving the accuracy of aerial target detection.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a gray scale image to be detected;
In FIG. 3, (a) is the horizontal gradient and (b) is the vertical gradient;
FIG. 4 shows the positions of the five detected corner points with the largest responses;
FIG. 5 is a partial enlarged view of those corner positions;
FIG. 6 shows a target segmentation result;
FIG. 7 shows the final target detection result.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to FIGS. 1-7. The described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the protection scope of the present invention.
In this embodiment, the experimental data are images of an aerial drone target taken from the ground; each image is 1920 pixels wide and 1080 pixels high.
This embodiment is carried out through the following steps:
s1, preprocessing an image;
the color image is converted into a gray image, and then the gray image is subjected to a median filtering operation. Pixels of one circle of image boundary retain the original gray value, and other pixels select gray intermediate values from the 3×3 area with the pixels as the center as the final gray value.
The color image is converted into a gray image as follows:
I=0.299×R+0.587×G+0.114×B
where R, G, and B represent the red, green, and blue color components of the color image, respectively, and I represents the converted gray value.
The median filtering process is as follows:
$$\tilde{I}_{i,j}=\begin{cases}I_{i,j}, & i\in\{1,H\}\ \text{or}\ j\in\{1,W\}\\ \operatorname{median}\{I_{m,n}\,:\,|m-i|\le 1,\ |n-j|\le 1\}, & \text{otherwise}\end{cases}$$
wherein W and H respectively represent the width and height of the image, i and j respectively represent the row and column positions in the gray image, i ∈ [1, 2, …, H], j ∈ [1, 2, …, W], I_{i,j} represents the original gray value at position (i, j), median{·} represents the operation of obtaining the middle gray value from a pixel set, and \tilde{I}_{i,j} represents the median-filtered gray value at position (i, j).
Step S2, gradient calculation;
and (3) respectively calculating the horizontal gradient and the vertical gradient of the median filtered image obtained in the step (S1). The specific operation is that the horizontal gradient and the vertical gradient of the pixels of a circle of image boundary are set to 0, and other pixels use an operator [ -1 0 1]Solving the gradient in the horizontal direction by using an operator [ -1 0 1] T And (5) solving the gradient in the vertical direction.
The specific calculation process is as follows:
$$\mathrm{gradx}_{i,j}=\begin{cases}0, & j\in\{1,W\}\\ \tilde{I}_{i,j+1}-\tilde{I}_{i,j-1}, & \text{otherwise}\end{cases}\qquad \mathrm{grady}_{i,j}=\begin{cases}0, & i\in\{1,H\}\\ \tilde{I}_{i+1,j}-\tilde{I}_{i-1,j}, & \text{otherwise}\end{cases}$$
wherein W and H respectively represent the width and height of the image, i and j respectively represent the row and column positions in the gray image, i ∈ [1, 2, …, H], j ∈ [1, 2, …, W], \tilde{I}_{i,j} represents the median-filtered gray value at position (i, j), and gradx_{i,j} and grady_{i,j} respectively represent the horizontal and vertical gradients at position (i, j).
S3, removing ground scenes;
Using the horizontal and vertical gradients obtained in step S2, count for each row of the image the number N_j of pixels whose sum of gradient absolute values exceeds the threshold T_G = 100, j ∈ [1, 2, …, H], where j represents the row number, H represents the image height, and N_j represents the number of qualifying pixels in row j. Then search the rows from top to bottom for the first row satisfying N_j ≥ T with the demarcation threshold T = 70; that row is taken as the sky-ground boundary, and the rows below it are removed as ground scene.
Step S4, detecting corner points of the area where the sky scene is located;
and (3) calculating the angular point response value of each pixel point in the region according to the following formula by utilizing the horizontal gradient and the vertical gradient obtained in the step S2.
$$M_j=\sum_{p\in B_j}\begin{bmatrix}\mathrm{gradx}_p^2 & \mathrm{gradx}_p\,\mathrm{grady}_p\\ \mathrm{gradx}_p\,\mathrm{grady}_p & \mathrm{grady}_p^2\end{bmatrix},\qquad S_j=\det(M_j)-k\times(\operatorname{trace}(M_j))^2$$
wherein j represents the index of a pixel in the image, B_j represents the set of pixels in the 7×7 region centered on pixel j, gradx and grady respectively represent the horizontal and vertical gradients of the image, M_j represents the gradient covariance matrix at pixel j, det(·) represents the determinant of a matrix, trace(·) represents the trace of a matrix, k is an adjustable parameter, and S_j is the corner response value at pixel j.
S5, non-maximum suppression;
Perform non-maximum suppression on the corner response values obtained in step S4. Specifically, given a minimum response threshold S_min, the corner response value of each pixel is compared simultaneously with S_min and with the corner response values of the pixels in its 7×7 neighborhood. If the pixel's corner response value does not both reach S_min and equal or exceed every corner response value in its 7×7 neighborhood, the pixel's corner response value is set to 0.
The specific formula is as follows:
$$\hat{S}_j=\begin{cases}S_j, & S_j\ge S_{\min}\ \text{and}\ S_j\ge\max\{S_p\,:\,p\in B_j\}\\ 0, & \text{otherwise}\end{cases}$$
wherein S_j represents the corner response value at pixel j, S_min is the minimum corner response threshold, B_j represents the set of pixels in the 7×7 neighborhood of pixel j, {S_p : p ∈ B_j} is the set of their corner response values, and \hat{S}_j represents the non-maximum suppression result of the corner response value at pixel j.
S6, selecting target candidate points;
Considering both the response values obtained in step S5 and their positions in the image, select the five points with the largest response values as target candidate points (N = 5).
S7, cropping a region centered on each candidate point and performing image segmentation;
and (3) taking the candidate points obtained in the step (S6) as the center, intercepting a proper area from the median filtered image obtained in the step (S2), and then performing saliency segmentation on the intercepted image to obtain a candidate target from each candidate point.
The specific operation for each candidate point segmentation is as follows:
(1) According to the approximate size of the target under actual operating conditions, crop a candidate target region from the image centered on the candidate point; to ensure the cropped region completely covers the target, set three crop scale factors and crop regions at downsampling rates of 1, 2, and 4 around the candidate point as candidate target regions;
(2) Obtain a segmentation threshold for each region;
(3) Binarize each candidate target region according to the segmentation threshold obtained in (2);
(4) Perform connected-component analysis on the binarized images obtained in (3) to obtain the position and size of the targets in each region;
(5) For each region, screen a suitable target from the target set obtained in (4) using the targets' appearance characteristics;
(6) Combine the target screening results of the three crop-scale target regions to obtain the final target position and size for the candidate point.
Step S8, deduplicating the candidate targets;
and judging whether the targets belong to the same target according to the relationship between the position and the size of the candidate targets obtained by each candidate point in the step S7, and if the two targets belong to the same target, removing one target with smaller size.
The present invention is not limited to the above-described embodiments. Various modifications and variations may be made by those skilled in the art in light of the teachings of this invention without departing from its spirit or essential scope, and such modifications and variations are intended to be included within the scope of the invention as defined by the following claims.

Claims (9)

1. An aerial target detection method, characterized by comprising the following steps:
s1, preprocessing an image, wherein the image is preprocessed,
carrying out gray processing on an input image, and carrying out median filtering operation on the image subjected to the gray processing;
s2, calculating the gradient,
respectively calculating the horizontal gradient and the vertical gradient of the median filtered image obtained in the step S1;
s3, removing the ground scene,
removing ground-scene features that interfere with the detection of aerial targets;
s4, detecting corner points of the area where the sky scene is located,
calculating the angular point response value of each pixel point in the region by utilizing the horizontal gradient and the vertical gradient obtained in the step S2;
s5, non-maximum value inhibition,
performing non-maximum value suppression operation on the corner response value obtained in the step S4,
s6, selecting a target candidate point,
considering both the response values of the result obtained in step S5 and their positions in the image, and selecting candidate points accordingly;
s7, taking each candidate point as a center, intercepting an area, and carrying out image segmentation;
taking the candidate points obtained in the step S6 as the center, intercepting a proper area from the median filtered image obtained in the step S2, and then performing saliency segmentation on the intercepted image to obtain a candidate target from each candidate point;
s8, removing the duplication of the candidate target.
2. The aerial target detection method according to claim 1, wherein the image preprocessing in step S1 is specifically as follows:
if the input image is a color image, it is converted into a gray image and a median filtering operation is then performed on the gray image; pixels on the one-pixel image border retain their original gray values, and every other pixel takes as its final gray value the median gray value of the 3×3 region centered on it;
the color image is converted into a gray image as follows:
I=0.299×R+0.587×G+0.114×B
wherein R, G, and B represent the red, green, and blue color components of the color image, respectively, and I represents the converted gray value;
the median filtering process is as follows:
$$\tilde{I}_{i,j}=\begin{cases}I_{i,j}, & i\in\{1,H\}\ \text{or}\ j\in\{1,W\}\\ \operatorname{median}\{I_{m,n}\,:\,|m-i|\le 1,\ |n-j|\le 1\}, & \text{otherwise}\end{cases}$$
wherein W and H respectively represent the width and height of the image, i and j respectively represent the row and column positions in the gray image, i ∈ [1, 2, …, H], j ∈ [1, 2, …, W], I_{i,j} represents the original gray value at position (i, j), median{·} represents the operation of obtaining the middle gray value from a pixel set, and \tilde{I}_{i,j} represents the median-filtered gray value at position (i, j).
3. The aerial target detection method according to claim 2, wherein the gradient calculation in step S2 is specifically as follows:
the horizontal and vertical gradients of the one-pixel image border are set to 0; for every other pixel, the horizontal gradient is obtained with the operator [-1 0 1] and the vertical gradient with the operator [-1 0 1]^T;
the specific calculation process is as follows:
$$\mathrm{gradx}_{i,j}=\begin{cases}0, & j\in\{1,W\}\\ \tilde{I}_{i,j+1}-\tilde{I}_{i,j-1}, & \text{otherwise}\end{cases}\qquad \mathrm{grady}_{i,j}=\begin{cases}0, & i\in\{1,H\}\\ \tilde{I}_{i+1,j}-\tilde{I}_{i-1,j}, & \text{otherwise}\end{cases}$$
wherein W and H respectively represent the width and height of the image, i and j respectively represent the row and column positions in the gray image, i ∈ [1, 2, …, H], j ∈ [1, 2, …, W], \tilde{I}_{i,j} represents the median-filtered gray value at position (i, j), and gradx_{i,j} and grady_{i,j} respectively represent the horizontal and vertical gradients at position (i, j).
4. The aerial target detection method according to claim 3, wherein the ground scene removal in step S3 is specifically as follows:
using the horizontal and vertical gradients obtained in step S2, counting for each row of the image the number N_j of pixels whose sum of gradient absolute values exceeds a set threshold T_G, j ∈ [1, 2, …, H], where j represents the row number, H represents the image height, and N_j represents the number of qualifying pixels in row j; then searching the rows from top to bottom for the first row satisfying N_j ≥ T, where T represents a set demarcation threshold, and treating the rows from that row downward as ground scene to be removed.
5. The aerial target detection method according to claim 4, wherein the corner detection in the region where the sky scene is located in step S4 is specifically as follows:
calculating the corner response value of each pixel in the region according to the following formulas, using the horizontal and vertical gradients obtained in step S2:
$$M_j=\sum_{p\in B_j}\begin{bmatrix}\mathrm{gradx}_p^2 & \mathrm{gradx}_p\,\mathrm{grady}_p\\ \mathrm{gradx}_p\,\mathrm{grady}_p & \mathrm{grady}_p^2\end{bmatrix},\qquad S_j=\det(M_j)-k\times(\operatorname{trace}(M_j))^2$$
wherein j represents the index of a pixel in the image, B_j represents the set of pixels in the 7×7 region centered on pixel j, gradx and grady respectively represent the horizontal and vertical gradients of the image, M_j represents the gradient covariance matrix at pixel j, det(·) represents the determinant of a matrix, trace(·) represents the trace of a matrix, k is an adjustable parameter, and S_j is the corner response value at pixel j.
6. The aerial target detection method according to claim 5, wherein the non-maximum suppression in step S5 is specifically as follows:
performing a non-maximum suppression operation on the corner response values obtained in step S4; specifically, given a minimum response threshold S_min, the corner response value of each pixel is compared simultaneously with S_min and with the corner response values of the pixels in its 7×7 neighborhood; if the pixel's corner response value does not both reach S_min and equal or exceed every corner response value in its 7×7 neighborhood, the pixel's corner response value is set to 0;
the specific formula is as follows:
$$\hat{S}_j=\begin{cases}S_j, & S_j\ge S_{\min}\ \text{and}\ S_j\ge\max\{S_p\,:\,p\in B_j\}\\ 0, & \text{otherwise}\end{cases}$$
wherein S_j represents the corner response value at pixel j, S_min is the minimum corner response threshold, B_j represents the set of pixels in the 7×7 neighborhood of pixel j, {S_p : p ∈ B_j} is the set of their corner response values, and \hat{S}_j represents the non-maximum suppression result of the corner response value at pixel j.
7. The aerial target detection method according to claim 6, wherein the target candidate point selection in step S6 is specifically as follows:
considering both the response values of the result obtained in step S5 and their positions in the image, selecting the N points with the largest response values as target candidate points, where N represents the number of selected candidate points and its specific value is determined by the application.
8. The aerial target detection method according to claim 7, wherein the segmentation of each candidate point in step S7 is specifically as follows:
S71, according to the approximate size of the target under actual operating conditions, cropping a candidate target region from the image centered on the candidate point; to ensure the cropped region completely covers the target, setting three crop scale factors and cropping a region at each of the three scales around the candidate point as candidate target regions;
S72, obtaining a segmentation threshold for each region;
S73, binarizing the candidate target regions according to the segmentation thresholds obtained in step S72;
S74, performing connected-component analysis on the binarized images obtained in step S73 to obtain the position and size of the targets in each region;
S75, for each region, screening a suitable target from the target set obtained in step S74 using the targets' appearance characteristics;
S76, combining the target screening results of the three crop-scale target regions to obtain the final target position and size for the candidate point.
9. The aerial target detection method according to claim 8, wherein step S8 is specifically as follows:
judging whether the candidate targets obtained from each candidate point in step S7 belong to the same target according to the relationship between their positions and sizes, and if two targets belong to the same target, removing the one with the smaller size.
CN202211727480.2A 2022-12-30 2022-12-30 Aerial target detection method Pending CN116229084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211727480.2A CN116229084A (en) 2022-12-30 2022-12-30 Aerial target detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211727480.2A CN116229084A (en) 2022-12-30 2022-12-30 Aerial target detection method

Publications (1)

Publication Number Publication Date
CN116229084A (zh) 2023-06-06

Family

ID=86575936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211727480.2A Pending CN116229084A (en) Aerial target detection method

Country Status (1)

Country Link
CN (1) CN116229084A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078682A (en) * 2023-10-17 2023-11-17 山东省科霖检测有限公司 Large-scale grid type air quality grade accurate assessment method
CN117078682B (en) * 2023-10-17 2024-01-19 山东省科霖检测有限公司 Large-scale grid type air quality grade accurate assessment method

Similar Documents

Publication Publication Date Title
CN108876723B (en) Method for constructing color background of gray target image
CN107784669A (en) A kind of method that hot spot extraction and its barycenter determine
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN110852323B (en) Angular point-based aerial target detection method
CN109911481B (en) Cabin frame target visual identification and positioning method and system for metallurgical robot plugging
CN109523583B (en) Infrared and visible light image registration method for power equipment based on feedback mechanism
CN110909750B (en) Image difference detection method and device, storage medium and terminal
CN112364865B (en) Method for detecting small moving target in complex scene
CN109087330A (en) It is a kind of based on by slightly to the moving target detecting method of smart image segmentation
CN104951765B (en) Remote Sensing Target dividing method based on shape priors and visual contrast
CN111739031A (en) Crop canopy segmentation method based on depth information
CN113298810A (en) Trace detection method combining image enhancement and depth convolution neural network
CN116229084A (en) Aerial target detection method
CN112861654A (en) Famous tea picking point position information acquisition method based on machine vision
CN106599891A (en) Remote sensing image region-of-interest rapid extraction method based on scale phase spectrum saliency
CN113205494A (en) Infrared small target detection method and system based on adaptive scale image block weighting difference measurement
CN113657335A (en) Mineral phase identification method based on HSV color space
CN116228757B (en) Deep sea cage and netting detection method based on image processing algorithm
CN109978916B (en) Vibe moving target detection method based on gray level image feature matching
CN111047614A (en) Feature extraction-based method for extracting target corner of complex scene image
CN116342519A (en) Image processing method based on machine learning
CN112419265B (en) Camouflage evaluation method based on human eye vision mechanism
CN109919863B (en) Full-automatic colony counter, system and colony counting method thereof
CN108389219B (en) Weak and small target tracking loss re-detection method based on multi-peak judgment
CN113963161A (en) System and method for segmenting and identifying X-ray image based on ResNet model feature embedding UNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination