CN110532989B - Automatic detection method for offshore targets - Google Patents
Automatic detection method for offshore targets
- Publication number
- CN110532989B
- Application number
- CN201910833101.XA
- Authority
- CN
- China
- Prior art keywords
- target
- vector
- image
- model
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Abstract
An automatic detection method for an offshore target relates to automatic target detection. The invention provides an automatic marine target detection method based on the space-time analysis of the halftone image stream in the visible-light range of the onboard optoelectronic system of an unmanned aerial vehicle, without a preset hard-coded reference image for target detection. The detection method comprises the following steps: S1, acquiring an offshore target video sequence from the onboard optoelectronic system of the unmanned aerial vehicle; S2, constructing a key target model M_O of the ocean scene; S3, obtaining the frame target vector O_F of suspicious targets in the first frame image of the video sequence of S1; S4, updating the key target model M_O; S5, obtaining the frame target vector O_F of suspicious targets in the next frame image of the video sequence of S1; S6, updating the key target model M_O; S7, determining the targets whose weight in the model target vector O_M has reached W_max as detected objects, where W_max is the maximum allowed weight of a target.
Description
Technical Field
The invention relates to an automatic target detection method, in particular to an automatic offshore target detection method, and belongs to the technical field of automatic detection.
Background
Using machine vision techniques to achieve automatic target detection is one of the problems urgently awaiting a solution in the civilian and military fields. At present this problem has no explicit general solution and has been solved only partially, under certain specific conditions. To date, automatic target detection techniques are well developed mainly in radar systems and in thermal imaging systems for airborne target detection.
However, objects of interest sometimes cannot be extracted from the background in the infrared band, and active radar cannot be used in certain tasks because of radio-channel noise or the covertness requirements of detection. Furthermore, in many cases it is impossible to preset reference images of all objects of interest, which imposes additional limitations on an automatic object detection system.
Disclosure of Invention
In view of the above disadvantages, the present invention provides an automatic marine target detection method based on the space-time analysis of the halftone image stream in the visible-light range of the optoelectronic system on board an unmanned aerial vehicle, without the need to preset a hard-coded reference image for target detection.
The invention discloses an automatic detection method of an offshore target, which comprises the following steps:
S1, acquiring an offshore target video sequence from the onboard optoelectronic system of the unmanned aerial vehicle;
S2, constructing a key target model M_O of the ocean scene:

M_O = {O_M, W_M}

wherein O_M = (o_1, o_2, …, o_K) is the target vector of the model, and W_M = (w_1, w_2, …, w_K) is the weight vector whose component w_i corresponds to target o_i of the model;
S3, obtaining the frame target vector O_F of suspicious targets in the first frame image of the video sequence of S1;
S4, updating the key target model M_O:
the frame target vector O_F obtained in S3 is input into the model target vector O_M of the key target model M_O, and the weight corresponding to each input target in the weight vector W_M is set to 1;
S5, obtaining the frame target vector O_F of suspicious targets in the next frame image of the video sequence of S1;
S6, updating the key target model M O :
the frame target vector O_F obtained in S5 is input into the model target vector O_M of the key target model M_O, and the corresponding weights in W_M are increased or decreased; it is then judged whether all frames of the offshore target video sequence have been processed: if not, go to S5; if so, go to S7;
S7, the targets whose weight in the model target vector O_M has reached W_max are determined to be detected objects, where W_max is the maximum allowed weight of a target.
Preferably, in S5, the method for acquiring the frame target vector O_F of suspicious targets in a frame image comprises the following steps:
S51, preprocessing, halftone erosion and dilation are carried out on the frame image;
S52, carrying out image binarization;
S53, finding the position of the sea horizon;
S54, searching for and segmenting independent targets or targets on the horizon to obtain the frame target vector O_F;
S55, filtering and analyzing the acquired frame target vector to obtain the final frame target vector O_F, and determining the centroid and eccentricity of each target in O_F.
Preferably, in S52, the method for binarizing the image includes:
based on the integral image, the luminance average m(x, y) and the standard deviation σ(x, y) in the local neighborhood are calculated within a window W × W centered on the point (x, y):

m(x, y) = (I_int(x + W/2, y + W/2) + I_int(x - W/2, y - W/2) - I_int(x - W/2, y + W/2) - I_int(x + W/2, y - W/2)) / W²

I_int(x, y) denotes the pixel luminance of the integral image I_int at the point (x, y), equal to the sum of the luminances of all pixels of the original image along the rows and columns up to the point (x, y):

I_int(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} i(u, v)

wherein i(u, v) denotes the pixel luminance of the original image;

T_sq represents the sum of all pixels of the quadratic integral image in a window of size W × W centered on the point (x, y):

T_sq = I²_int(x + W/2, y + W/2) + I²_int(x - W/2, y - W/2) - I²_int(x - W/2, y + W/2) - I²_int(x + W/2, y - W/2)

I²_int(x, y) denotes the pixel luminance of the quadratic integral image at the point (x, y), equal to the sum of the squares of the luminances of all pixels of the original image along the rows and columns up to the point (x, y):

I²_int(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} i²(u, v)

so that the standard deviation is σ(x, y) = sqrt(T_sq / W² - m²(x, y));

the pixel contrast threshold t(x, y) in the local neighborhood is determined from the obtained m(x, y) and σ(x, y):

t(x, y) = m(x, y) · (1 + k · (σ(x, y) / R - 1))

wherein k represents a fine-tuning parameter taking a value in the range [0.2, 0.5], and R represents the maximum value of the standard deviation;

local contrast areas under different illumination scenes are then determined according to t(x, y).
Preferably, in S53, the method for finding the position of the sea horizon comprises:
scanning the binary image from top to bottom within a given range of rotation angles of the line relative to the horizontal, and searching only lines that contain blank spaces; for each line L_{y,α} described by the straight-line equation f(x, y), its length is defined as the sum of the bright pixels I_bin of the binary image lying on the line:

L_{y,α} = Σ_{(x,y) ∈ f} I_bin(x, y)

wherein the line L_{y,α} has displacement y in the vertical direction relative to the coordinate origin and rotation angle α;

when traversing the image sequentially in the vertical direction, a weight function is constructed for all lines from the line length L_{y,α} and the number of break points R_{y,α} counted while the length of the line is calculated, with coefficients k_1, k_2, k_3 reflecting the working environment of the optoelectronic system and satisfying k_1 + k_2 + k_3 = 1;

if the length of the line with the maximum weight exceeds a set threshold L_min, i.e., L_{y,α} ≥ L_min, the horizon is considered to be detected.
Preferably, in S54, the method for searching for and segmenting targets on the horizon to obtain the frame target vector O_F comprises:
scanning along the horizon, during which the average thickness H_avg of the horizon is calculated and an elevation map of the binary targets in the neighborhood of the horizon is constructed;
the construction method of the elevation map comprises the following steps:
for all x ∈ [0, M], where M is the width of the image, traverse from the point y_{x+} = y_0 + h_y to the point y_{x-} = y_0 - h_y, wherein y_0 denotes the horizon ordinate at the given x and h_y denotes the height of the search area; during the top-down scan, the first and the last bright pixels are determined, and the difference between their ordinates is the height of the binary target at the given x;

the elevation map is then scanned and analyzed using the average thickness of the horizon and the elevation map of the binary targets on the horizon:

if the height h_i at a given x is greater than the criterion value h_thr, it can be assumed that some target is present at this point of the horizon; if it is the first target detected during the scan, the criterion value is h_thr = H_avg + h_k, where h_k represents a minimum threshold constant, otherwise h_thr = H_avg + h_{i-1}, where h_{i-1} is the height of the last column of the previous target; once the starting point of a target is detected, subsequent scanning begins to calculate the average, minimum and maximum heights of the target; when the height of the target satisfies h_i ≤ h_thr, the end of the target is fixed and the criterion is recalculated as h_thr = H_avg + h_{i-1}; if the width of a target detected in this way is greater than the minimum possible value and the aspect ratio of the rectangle R describing the target falls within the range set for target detection, the target o_i is input into the frame target vector O_F.
Preferably, in S6, the method of updating the key target model M_O comprises:

S61, after every n frames, the weight w_i corresponding to each target o_i is decreased by 1; if the model target vector O_M is not empty and a weight becomes negative, the target is deleted from O_M, the corresponding weight is deleted from W_M, and the number K of targets in the model is decreased by 1;

S62, the frame target vector O_F is input into the vector O_M of the key target model M_O:

if the model target vector O_M is empty, the targets of the current frame target vector O_F are input into the vector O_M, and the corresponding weights in W_M are set to 1;

if the model target vector O_M is not empty, then for each target o_i in O_F the target o_j in O_M is found for which the Euclidean distance between the centroids is minimal: D_min(o_i, o_j) = min_j D(o_i, o_j); D_min(o_i, o_j) is compared with a set threshold ε_dmax and the model is updated:

if D_min(o_i, o_j) > ε_dmax, the target o_i of the frame target vector O_F is added to the vector O_M;

if D_min(o_i, o_j) ≤ ε_dmax, the similarity of the target o_i of the frame target vector O_F and the target o_j of the model target vector O_M is evaluated; if the similarity criteria are met, the targets o_i and o_j are merged: all parameters of target o_j are replaced by the parameters of target o_i, after which target o_i is deleted from the vector O_F;

if w_j < W_max, the weight w_j of target o_j is increased to w_j = w_j + 1, where W_max is the maximum allowed weight of a target;

if w_j = W_max, the halftone image defined by the rectangle R_j describing target o_j is stored as the reference image of the target and added to the model target vector O_M.
Preferably, the similarity criteria P_S, P_R and P_e compare, respectively, the areas, the aspect ratios and the eccentricities of the two targets, wherein S_i and S_j are the areas of targets o_i and o_j, respectively; r_i and r_j are the aspect ratios of the rectangles describing each target; and e_i and e_j are the target eccentricities.
Preferably, S6 further comprises: if the presence of a target cannot be confirmed in the current frame image, then for a target o_k whose weight w_k = W_max - 1, automatic tracking is carried out using the reference image of target o_k stored in the model target vector O_M.
The method has the advantage that, based on the space-time analysis of the halftone image stream in the visible-light range of the onboard optoelectronic system of the unmanned aerial vehicle, two stages of analysis, temporal and spatial, are involved in processing each image. In the spatial analysis stage, the current frame of the video sequence is processed to obtain the vector of suspicious targets. In the temporal analysis stage, the result of the spatial analysis is compared with the key target model of the current scene, and the model is then refined and updated, without a preset hard-coded reference image for target detection. Simulation results verify the performance and effectiveness of the proposed method.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a diagram of the results of multi-target and horizon automatic detection based on a test video sequence;
FIG. 3 is a single-target automatic detection result based on an image of an airborne optoelectronic system of the unmanned aerial vehicle;
FIG. 4 is a multi-target automatic detection result based on an image of an airborne optoelectronic system of the unmanned aerial vehicle.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
The method for automatically detecting a marine target in this embodiment constructs a key target model M_O of the scene based on data on the number, position and characteristics of the targets obtained by sequentially processing each frame of the video sequence. The method involves two stages, temporal and spatial analysis, when processing each image. In the spatial analysis stage, the current frame of the video sequence is processed to obtain the vector of suspicious targets. In the temporal analysis stage, the result of the spatial analysis is compared with the key target model of the current scene, and the model is further refined and updated.
As shown in fig. 1, the method for automatically detecting an offshore object according to the present embodiment includes:
S1, obtaining an offshore target video sequence from the onboard optoelectronic system of the unmanned aerial vehicle:

A typical ocean scene model is established. The water surface and the sky are assumed to be surfaces with uneven illumination, and their intersection line is assumed to be a straight line.
Objects of interest to the automatic detection system may be on the water surface or in the sky, and their position, size, shape and illumination remain approximately constant in two adjacent frames taken within a short period of time. At the same time, various interfering factors on the sea surface (glare and waves) can change their shape and illuminance rapidly.
S2, constructing a key target model M_O of the ocean scene:

M_O = {O_M, W_M}

wherein O_M = (o_1, o_2, …, o_K) is the target vector of the model, and W_M = (w_1, w_2, …, w_K) is the weight vector whose component w_i corresponds to target o_i of the model;
S3, obtaining the frame target vector O_F of suspicious targets in the first frame image of the video sequence of S1;
S4, updating the key target model M_O:
the frame target vector O_F obtained in S3 is input into the model target vector O_M of the key target model M_O, and the weight corresponding to each input target in the weight vector W_M is set to 1;
S5, obtaining the frame target vector O_F of suspicious targets in the next frame image of the video sequence of S1;
S6, updating the key target model M_O:
the frame target vector O_F obtained in S5 is input into the model target vector O_M of the key target model M_O, and the corresponding weights in W_M are increased or decreased; it is then judged whether all frames of the offshore target video sequence have been processed: if not, go to S5; if so, go to S7;
S7, the targets whose weight in the model target vector O_M has reached W_max are determined to be detected objects, where W_max is the maximum allowed weight of a target. In this embodiment, each component of the vectors O_M and W_M is empty when the first frame is processed. In the spatial analysis stage, the suspicious-target vector of the frame, O_F (hereinafter referred to as the frame target vector), is formed. In the temporal analysis stage, the model M_O is formed: as a first approximation, the targets of the vector O_F are input into the vector O_M, and the corresponding weights in W_M are set to 1. When subsequent frames of the video sequence are processed, the model M_O is updated by each new input vector O_F. During the model update, the weights of the targets may be increased or decreased. When the weight of a target reaches the set maximum value W_max, the target is deemed to be reliably detected.
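The temporal-analysis loop described above can be sketched as follows. All identifiers (Target, KeyTargetModel) and the numeric thresholds are illustrative assumptions, not values given in the patent:

```python
# Sketch of the key-target-model update (S4-S7): per frame, match each
# suspicious target to the nearest model target by centroid distance,
# merge or add, and adjust weights.  Thresholds are placeholders.
import math

W_MAX = 5         # maximum allowed target weight (assumed value)
EPS_D_MAX = 20.0  # centroid-distance threshold in pixels (assumed value)

class Target:
    def __init__(self, cx, cy):
        self.cx, self.cy = cx, cy  # centroid coordinates

def dist(a, b):
    return math.hypot(a.cx - b.cx, a.cy - b.cy)

class KeyTargetModel:
    def __init__(self):
        self.targets = []  # model target vector O_M
        self.weights = []  # weight vector W_M

    def update(self, frame_targets):
        """S62: merge the frame target vector O_F into the model."""
        for t in frame_targets:
            if not self.targets:
                self.targets.append(t)
                self.weights.append(1)
                continue
            j = min(range(len(self.targets)),
                    key=lambda k: dist(t, self.targets[k]))
            if dist(t, self.targets[j]) > EPS_D_MAX:
                self.targets.append(t)   # new suspicious target
                self.weights.append(1)
            else:
                self.targets[j] = t      # merge: replace parameters
                if self.weights[j] < W_MAX:
                    self.weights[j] += 1

    def decay(self):
        """S61, called every n frames: decrement weights, drop negatives."""
        keep = [(t, w - 1) for t, w in zip(self.targets, self.weights)
                if w - 1 >= 0]
        self.targets = [t for t, _ in keep]
        self.weights = [w for _, w in keep]

    def detected(self):
        """S7: targets whose weight has reached W_MAX."""
        return [t for t, w in zip(self.targets, self.weights) if w >= W_MAX]
```

Feeding the same target for W_MAX consecutive frames promotes it to a reliable detection, while a target seen only once never leaves weight 1 and is eventually decayed away.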
1. Spatial analysis of the image to obtain the frame target vector O_F of suspicious targets in the frame image. The preferred embodiment includes:

1.1, preprocessing, halftone erosion and dilation of the frame image;

1.2, image binarization;

1.3, searching for the position of the sea horizon;

1.4, searching for and segmenting independent targets or targets on the horizon to obtain the frame target vector O_F;

1.5, filtering and analyzing the acquired frame target vector to obtain the final frame target vector O_F, and determining the centroid and eccentricity of each target in O_F.
1.1 Preprocessing, halftone erosion and dilation:
in the present embodiment, a halftone image is regarded as a discrete luminance function in a two-dimensional space:
I = i(x, y), x ∈ [0, M], y ∈ [0, N] (2)

wherein i(x, y) ∈ [K_min, K_max] represents the image brightness at the point (x, y); K_min and K_max represent the minimum and maximum values of the image brightness; and M and N represent the width and height of the image, respectively.
A simple flat square structuring element b is introduced, defined on a two-dimensional area B (a sliding window) on the image.
The width of the square window B will be referred to as the aperture of the morphological operation. The erosion of the image I by the flat square structuring element B at each point (x, y) is the minimum value of the image brightness within the window B centered at the point (x, y):

[I - B](x, y) = min_{(bx, by) ∈ B} {I(x + bx, y + by)} (4)

Similarly, this embodiment introduces the concept of image dilation:

[I + B](x, y) = max_{(bx, by) ∈ B} {I(x - bx, y - by)} (5)
in the same or different apertures, a combination of erosion and dilation operations may reduce noise interference in the image and highlight potential targets.
1.2, image binarization:
the image is binarized to separate scene object information from the image background according to some criteria. In a binary image I bin Its element can only take one of two possible values. In this embodiment, the background pixel is referred to as a dark pixel, and the binary target pixel is referred to as a bright pixel.
If the performance of the computing system is high enough, then a variety of binary images may be constructed to achieve efficient detection of the target. For example, the contour binary image is used to search the horizon, or a binary image is formed by a method based on local contrast analysis to realize the search and segmentation of the target. If the onboard optoelectronic system computing power of the unmanned aerial vehicle does not allow the construction of two binary images in real time, only the contouring method can be used to detect the object.
Contour segmentation of the preprocessed image is achieved by vertical and horizontal convolution of the image with one or two difference matrices. The convolution yields the gradient components G_x(x, y) and G_y(x, y) at each point of the image in the two directions.
the matrix that performs the convolution may have different forms, for example:
the total gradient at each point is:
G(x,y)=|G x (x,y)|+|G y (x,y)| (7)
the binary contour image is:
where T denotes a threshold value determined based on the average gradient value.
T=G avg *T c (9)
Wherein, T c Representing a constant value for a given photovoltaic system calculated empirically.
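A sketch of this contour binarization path, assuming simple central-difference gradient matrices (the patent only says the convolution matrices "may have different forms"):

```python
# Gradient-based contour binarization: per-pixel gradients in two
# directions, total gradient G = |Gx| + |Gy|, threshold T = G_avg * T_c.
import numpy as np

def contour_binarize(img, t_c=2.0):
    f = img.astype(np.float64)
    # central differences for interior pixels (assumed matrix form)
    gx = np.abs(f[1:-1, 2:] - f[1:-1, :-2])
    gy = np.abs(f[2:, 1:-1] - f[:-2, 1:-1])
    g = gx + gy                        # total gradient, eq. (7)
    t = g.mean() * t_c                 # threshold tied to mean gradient
    out = np.zeros_like(f, dtype=np.uint8)
    out[1:-1, 1:-1] = (g >= t).astype(np.uint8)  # borders stay dark
    return out
```

On a step edge, only the columns adjacent to the brightness jump exceed the threshold, so the output keeps contours while suppressing flat regions.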
One of the following two methods can be chosen to construct the binary image: constructing the binary image from gradient calculations and searching for the horizon, with a binarization threshold set low enough to segment the horizon reliably; or constructing the binary image from local contrast analysis and using it to search for bounded regions of uniformly contrasting brightness, which provides more valuable information than the target contour method. In this embodiment, the second method is preferred for constructing the binary image.
Consider the halftone image I(x, y), where i(x, y) ∈ [0, 255] is the luminance of the pixel at the point (x, y). The goal of local threshold classification is to determine a threshold t(x, y) for each pixel, i.e.

I_bin(x, y) = 1 if I(x, y) > t(x, y), and 0 otherwise (10)

In this embodiment, it is proposed to determine the threshold t(x, y) based on the luminance average m(x, y) and the standard deviation σ(x, y) calculated within a window centered on the point (x, y):

t(x, y) = m(x, y) · (1 + k · (σ(x, y) / R - 1)) (11)

wherein k is a fine-tuning parameter taken within the range [0.2, 0.5], and R is the maximum value of the standard deviation (R = 128 for halftone images).
Using the luminance average and the standard deviation in the local neighborhood, the threshold can be finely adjusted to the pixel contrast in that neighborhood, and local contrast areas can then be determined under different illumination scenes.
However, computing the local characteristics in the neighborhood of every pixel is extremely expensive. Without any improvement, for an image of resolution M × N and a square window of size W × W, the method has a computational complexity of approximately O(W²·M·N). To speed up the calculation of the local features, this embodiment proposes a method based on integral image calculation.
For the integral image I_int, the pixel luminance at the point (x, y) is equal to the sum of the luminances of all pixels of the original image along the rows and columns up to the point (x, y):

I_int(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} i(u, v) (12)

Once the integral image is obtained, the local luminance average in the neighborhood of an arbitrary point can be calculated with only a few arithmetic operations:

m(x, y) = (I_int(x + W/2, y + W/2) + I_int(x - W/2, y - W/2) - I_int(x - W/2, y + W/2) - I_int(x + W/2, y - W/2)) / W² (13)

Similarly, for the standard deviation, after rearrangement one obtains:

σ²(x, y) = T_sq / W² - m²(x, y) (14)

To calculate the sum of the squared luminances in the window, a second integral image is constructed in which the pixel luminance at the point (x, y) is equal to the sum of the squares of the luminances of all pixels of the original image along the rows and columns up to the point (x, y):

I²_int(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} i²(u, v) (15)

Let T_sq denote the sum of all pixels of this quadratic integral image in a window of size W × W centered on the point (x, y):

T_sq = I²_int(x + W/2, y + W/2) + I²_int(x - W/2, y - W/2) - I²_int(x - W/2, y + W/2) - I²_int(x + W/2, y - W/2) (16)

from which the standard deviation σ(x, y) is obtained.
therefore, calculating the threshold t (x, y) according to equation (11) can be translated into the calculation of the normalized sum of the two integral images.
When a frame is processed, the integral images I_int and I²_int are calculated only once, so the use of integral images significantly reduces the computational complexity of constructing a binary image based on locally adaptive thresholds. The binarization method provided in this embodiment gives a better result in actual target detection than the gradient-based binary contour method.
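A compact NumPy sketch of the locally adaptive binarization using the two integral images; the Sauvola-style threshold form t = m·(1 + k(σ/R − 1)) is an assumption consistent with the stated parameters k ∈ [0.2, 0.5] and R = 128:

```python
# Locally adaptive binarization via integral images: local mean and
# standard deviation come from two prefix-sum tables, so the cost per
# pixel is constant regardless of the window size W.
import numpy as np

def local_threshold_binarize(img, w=15, k=0.3, R=128.0):
    """img: 2-D uint8 array; w: odd window size. Returns 0/1 uint8 map."""
    h, wd = img.shape
    f = img.astype(np.float64)
    # integral images of brightness and squared brightness, padded with
    # a zero row/column so window sums need no special-casing at edges
    I1 = np.pad(f.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    I2 = np.pad((f * f).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    r = w // 2
    y, x = np.mgrid[0:h, 0:wd]
    y0, y1 = np.clip(y - r, 0, h), np.clip(y + r + 1, 0, h)
    x0, x1 = np.clip(x - r, 0, wd), np.clip(x + r + 1, 0, wd)
    area = (y1 - y0) * (x1 - x0)
    s1 = I1[y1, x1] - I1[y0, x1] - I1[y1, x0] + I1[y0, x0]
    s2 = I2[y1, x1] - I2[y0, x1] - I2[y1, x0] + I2[y0, x0]
    m = s1 / area                                  # local mean
    sigma = np.sqrt(np.maximum(s2 / area - m * m, 0.0))
    t = m * (1.0 + k * (sigma / R - 1.0))          # assumed form of eq. (11)
    return (f > t).astype(np.uint8)
```

Near image borders the window is clipped and the sums are normalized by the actual clipped area rather than W², a common practical refinement.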
1.3 finding the position of the horizon:
With the parameters of the horizon, an object on the horizon can be separated from the background, and the type of a detected object can even be inferred from its position relative to the horizon.
Currently there are good algorithms for finding lines on an image, for example those based on the Hough transform. However, for the target detection problem in complex sea states, the number of lines segmented by such an algorithm may be very large, and the longest of these lines is not necessarily the desired horizon, because waves and swell create fairly large uniform areas of high and low brightness that produce long bright lines on the binary image.
For the above reasons, this embodiment proposes a new line-search algorithm that scans the binary image from top to bottom within a given range of rotation angles of the line relative to the horizontal (α ∈ [-α_max, +α_max]), searching only those lines in which a blank space (dark pixels on the binary image) exists in some region.

For each line L_{y,α} described by the straight-line equation f(x, y) (with displacement y in the vertical direction relative to the coordinate origin and rotation angle α), its length is defined as the sum of the bright pixels of the binary image lying on the line:

L_{y,α} = Σ_{(x,y) ∈ f} I_bin(x, y)
When traversing the image sequentially in the vertical direction, a weight function is constructed for all lines from the line length L_{y,α} and the number of break points R_{y,α} counted while the length of the line is calculated, with coefficients k_1, k_2, k_3 that depend on the environment in which the optoelectronic system operates (whether it is equipped with a gyrostabilizer, the sea-wave intensity, visibility conditions, etc.) and satisfy k_1 + k_2 + k_3 = 1.
Thus, the line with the greatest length and the fewest break points will have the greatest weight and be closest to the horizon. The straight line with the greatest weight is taken as the horizon. If the length of the line with the greatest weight exceeds a set threshold, i.e., L_{y,α} ≥ L_min, the horizon is considered detected.
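A simplified sketch of this horizon search: each candidate line is scored by its lit length penalized by the number of break points. The single penalty coefficient k_break stands in for the patent's k_1, k_2, k_3 weighting, whose exact form is not given:

```python
# Horizon search: for every vertical offset y0 and small rotation angle,
# score the line by (lit length) - k_break * (lit->dark transitions),
# and accept the best line if it is long enough.
import numpy as np

def find_horizon(binary, angles=(-2, -1, 0, 1, 2), l_min=50, k_break=2.0):
    h, w = binary.shape
    xs = np.arange(w)
    best = (None, -1.0)  # (line parameters, weight)
    for y0 in range(h):
        for a_deg in angles:
            ys = np.round(y0 + xs * np.tan(np.radians(a_deg))).astype(int)
            ok = (ys >= 0) & (ys < h)
            pix = binary[ys[ok], xs[ok]].astype(int)
            length = int(pix.sum())                  # bright pixels on line
            breaks = int((np.diff(pix) < 0).sum())   # break points
            weight = length - k_break * breaks
            if weight > best[1]:
                best = ((y0, a_deg, length), weight)
    line = best[0]
    if line and line[2] >= l_min:
        return line[0], line[1]  # vertical offset and angle of the horizon
    return None
```

An exhaustive scan like this is quadratic in image height times the angle count; a real onboard implementation would restrict y0 to a band predicted from the previous frame.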
1.4 searching and segmenting of objects
In this step, the concept of a binary target is introduced. A binary target o_i(S, R) is a group of bright pixels in the binary image I_bin whose total number (area) is S, described by the rectangle R and a target feature vector (centroid, eccentricity, etc.).
All binary targets detected in the current frame are input into the frame target vector O_F. At the initial moment of analyzing the frame, the frame target vector O_F is empty.
In this embodiment, the following two types of target search problems on a binary image are mainly studied: if the horizon was detected in the image at the previous step, the targets on the horizon are analyzed first; otherwise, independent targets are searched for and segmented.
1.4.1 target search and segmentation on horizon:
In general, the contour of a potential object merges with the horizon and appears as one long binary object in the binary image. To determine whether an object is present on the horizon, a scan is performed along the horizon y = kx + b, where the parameters k and b were determined in the previous step. During the scan, the average thickness H_avg of the horizon is calculated and an elevation map of the binary targets in a certain neighborhood of the horizon is constructed.
The elevation map is constructed as follows. For all x ∈ [0, M] (M is the width of the image), traverse from the point y_{x+} = y_0 + h_y to the point y_{x-} = y_0 - h_y, where y_0 is the horizon ordinate at the given x and h_y is the height of the search area. During the top-down scan, the first and the last bright pixels are determined, and the difference between their ordinates is the height of the binary target at the given x. Thus, if an object is present in the neighborhood, the constructed elevation map has a well-defined form, e.g., h = [3, 3, 4, 6, 8, 13, 14, 12, 8, 6, 5, 3, 3], for which the average thickness of the horizon is H_avg = 3.
After the average thickness of the horizon and the elevation map of the binary targets on the horizon are obtained, the elevation map can be scanned and analyzed in detail. If the height at a given x exceeds the criterion value, h_i > h_thr, it can be assumed that some target is present at this point of the horizon. If it is the first target detected during the scan, the criterion value is h_thr = H_avg + h_k, where h_k is a minimum threshold constant; otherwise, h_thr = H_avg + h_{i-1}, where h_{i-1} is the height of the last column (the right one when scanning from left to right) of the previous target. Once the starting point of a target is detected, subsequent scanning begins to calculate the average, minimum and maximum heights of the target. As soon as the height of the target satisfies h_i ≤ h_thr, the end of the target is fixed and the criterion is recalculated as h_thr = H_avg + h_{i-1}. If the width of a target detected in this way is greater than the minimum possible value and the aspect ratio of the rectangle R describing the target falls within the range set for target detection (i.e., the target cannot be too elongated vertically or horizontally), the target o_i is input into the frame target vector O_F.
Furthermore, the target may be limited by other conditions, for example, the degree of unevenness in the target height must be greater than a certain threshold to reduce the effects of waves, glare and coastal infrastructure on the horizon.
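The elevation-map scan of 1.4.1 can be sketched as a one-pass walk over the height profile; the function name and the minimum-width parameter are illustrative:

```python
# Scan the elevation map (binary-target height at every x near the
# horizon) and cut out runs that rise above the adaptive criterion
# h_thr, which is recalculated after every completed target.
def targets_on_horizon(heights, h_avg, h_k=2, min_width=2):
    """Return (start_x, end_x) spans of suspected targets on the horizon."""
    spans = []
    h_thr = h_avg + h_k          # criterion for the first target
    start = None
    prev_h = 0
    for x, h in enumerate(heights):
        if start is None:
            if h > h_thr:
                start = x        # starting point of a target detected
        else:
            if h <= h_thr:
                if x - start >= min_width:
                    spans.append((start, x - 1))
                h_thr = h_avg + prev_h   # recalculated criterion
                start = None
        prev_h = h
    if start is not None and len(heights) - start >= min_width:
        spans.append((start, len(heights) - 1))  # target reaches image edge
    return spans
```

On the example profile h = [3, 3, 4, 6, 8, 13, 14, 12, 8, 6, 5, 3, 3] with H_avg = 3, the single bump between x = 3 and x = 9 is reported as one target span.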
1.4.2 search and segmentation of independent targets
Besides targets on the horizon, and for the case where the horizon position cannot be determined, the more general case of target search and segmentation must also be considered.
For labeling and segmentation of connected objects on a binary image, a recursive filling algorithm, a one-pass masking algorithm, etc. may be used. The computation speed of labeling algorithms may vary greatly, but they work essentially the same way. The image is traversed by rows and columns. If an unlabeled bright pixel is detected at some point (x, y), the pixel is labeled, and all its neighboring pixels are traversed and labeled according to the 4-connectivity or 8-connectivity criterion. During labeling, the area of the binary target and its describing rectangle are determined, and then the threshold checks are carried out:

R_X ≥ X_min, R_Y ≥ Y_min, S_min ≤ S ≤ S_max, r_min ≤ R_X / R_Y ≤ r_max

wherein R_X and R_Y are the width and height of the rectangle describing the target; X_min and Y_min are the minimum width and height of the target, respectively; S_min and S_max are the minimum and maximum values of the binary target area, respectively; and r_min and r_max are the minimum and maximum values of the aspect ratio of the rectangle describing the target, respectively.
The thresholds are adjusted according to the parameters of the optoelectronic system and the expected size of the target, and filter out most small noise, waves and the like. If a binary target o_i(S, R) satisfies the selected criteria, it is input into the frame target vector O_F.
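A sketch of this independent-target segmentation: 4-connected flood-fill labeling followed by the size and aspect-ratio checks; all threshold values are placeholders:

```python
# Label connected bright-pixel groups with a queue-based flood fill,
# then keep only components passing the size/aspect-ratio criterion.
from collections import deque

def segment_targets(binary, x_min=2, y_min=2, s_min=4, s_max=10_000,
                    r_min=0.2, r_max=5.0):
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    targets = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                q, px = deque([(y, x)]), []
                seen[y][x] = True
                while q:                       # 4-connected flood fill
                    cy, cx = q.popleft()
                    px.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                ys = [p[0] for p in px]
                xs = [p[1] for p in px]
                rx, ry = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
                s = len(px)                    # area of the binary target
                if (rx >= x_min and ry >= y_min and s_min <= s <= s_max
                        and r_min <= rx / ry <= r_max):
                    targets.append({"rect": (min(xs), min(ys), rx, ry),
                                    "area": s})
    return targets
```

Single-pixel noise is rejected by the minimum width/height checks without any extra filtering pass.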
1.5 Filtering and analysis of target vectors
In this step, the frame target vector V_frame obtained previously is analyzed further. The contour of an object in a binary image can often decompose into contours of several sub-objects, which makes the frame target vector V_frame redundant.
To eliminate this redundancy, the frame target vector V_frame is traversed and all targets stored in it are compared pairwise. For each pair of targets o_i and o_j, the following conditions are checked:
a) the rectangle describing target o_i should at least partially contain the rectangle describing target o_j, or vice versa;
b) the Euclidean distance D between the centers of the two rectangles is smaller than a threshold, D < ε. The value of ε may be determined, for example, from the diagonal length of the largest rectangle.
If at least one condition is satisfied, the targets are merged: their areas are summed, their describing rectangles are combined, the result is written to target o_i, and target o_j is simultaneously deleted from the vector V_frame.
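The pairwise merge of redundant targets can be sketched as follows; the tuple layout (x, y, w, h, area) and the function name are illustrative assumptions.

```python
import math

def merge_redundant(targets, eps):
    """Merge frame targets satisfying condition (a) (rectangle containment)
    or condition (b) (centroid distance below eps). Each target is
    (x, y, w, h, area); merged areas are summed and rectangles unioned."""
    def center(t):
        x, y, w, h, _ = t
        return (x + w / 2.0, y + h / 2.0)

    def contains(a, b):  # rectangle a fully contains rectangle b
        return (a[0] <= b[0] and a[1] <= b[1] and
                a[0] + a[2] >= b[0] + b[2] and a[1] + a[3] >= b[1] + b[3])

    out = list(targets)
    i = 0
    while i < len(out):
        j = i + 1
        while j < len(out):
            a, b = out[i], out[j]
            d = math.dist(center(a), center(b))
            if contains(a, b) or contains(b, a) or d < eps:
                # union of the two rectangles, summed areas, written to target i
                x0 = min(a[0], b[0]); y0 = min(a[1], b[1])
                x1 = max(a[0] + a[2], b[0] + b[2])
                y1 = max(a[1] + a[3], b[1] + b[3])
                out[i] = (x0, y0, x1 - x0, y1 - y0, a[4] + b[4])
                del out[j]
                j = i + 1      # re-check the merged target against the rest
            else:
                j += 1
        i += 1
    return out
```

A rectangle contained in a larger one is absorbed into it (areas summed), while a distant target is left untouched.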
After the merging of targets is completed, a new target vector described by rectangles R is obtained. The central moments μ_pq are computed up to second order, and from them the centroid coordinates, orientation, and eccentricity of each target are determined.
The raw moments of a target in the image I_bin(x, y):
m_pq = Σ_{(x,y)∈R} x^p · y^q · I_bin(x, y)
The centroid of the target:
x̄ = m_10 / m_00,  ȳ = m_01 / m_00
The central moments of a target in the image I_bin(x, y):
μ_pq = Σ_{(x,y)∈R} (x − x̄)^p · (y − ȳ)^q · I_bin(x, y)
The eccentricity can be determined from the eigenvalues λ_1 ≥ λ_2 of the covariance matrix of I_bin(x, y) over the region (x, y) ∈ R:
e = sqrt(1 − λ_2 / λ_1)
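The moment-based shape features can be computed directly from the bright pixels of a binary region. The sketch below assumes the common eigenvalue-based definition of eccentricity, e = sqrt(1 − λ2/λ1); the function name is illustrative.

```python
import numpy as np

def shape_features(img_bin):
    """Centroid, eccentricity and orientation of a binary region from its
    raw moments m_pq and central moments mu_pq (up to second order)."""
    ys, xs = np.nonzero(img_bin)
    m00 = len(xs)                                  # m_00 = area
    xc, yc = xs.mean(), ys.mean()                  # centroid: m10/m00, m01/m00
    mu20 = ((xs - xc) ** 2).sum()
    mu02 = ((ys - yc) ** 2).sum()
    mu11 = ((xs - xc) * (ys - yc)).sum()
    cov = np.array([[mu20, mu11], [mu11, mu02]]) / m00
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # lam[0] >= lam[1]
    ecc = np.sqrt(1.0 - lam[1] / lam[0]) if lam[0] > 0 else 0.0
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation angle
    return (xc, yc), ecc, theta
```

A one-pixel-thick horizontal bar has eccentricity 1 and orientation 0, as expected.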
the algorithm executes to this end and the image space analysis phase ends, the image being analyzed temporally as follows.
2. Temporal analysis of the image: updating the key target model M_O
The image is analyzed temporally to construct the model M_O at the initial stage and to refine it further. Updating the key target model M_O comprises the following steps:
2.1. A "subtraction" procedure that decays target weights over time;
2.2. Updating the model M_O with the targets of the current frame;
2.3. If the presence of a target cannot be reliably confirmed in the current frame, auto-tracking the targets in V_model that satisfy a specific condition.
2.1 The "subtraction" procedure
Every n frames, the weight of each target in the model is decreased by 1. If the weight of a target o_i becomes negative (i.e., w_i < 0), the target o_i is deleted from the vector V_model and the corresponding weight w_i from the weight vector W_model; the sizes of V_model and W_model decrease by 1. Thus, over time, the weights of all targets decay, and if a target's weight becomes small enough, a decision is made to drop the target. The choice of the "subtraction" frequency n depends on the frame rate f of the photoelectric system and a constant k_f proportional to that frame rate.
In the present embodiment, the onboard photoelectric system frame rate is f = 15, and k_f is set to half the frame rate, k_f = 7.5, giving n = 1/3. At this value of n, the weights are "subtracted" once every third frame, which allows the algorithm to adapt to situations where the interval between input frames is large.
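The "subtraction" procedure can be sketched as a periodic decay step; the period of 3 frames matches the embodiment's n = 1/3, and the function name and tuple return are illustrative assumptions.

```python
def subtract_step(weights, targets, frame_idx, n_period=3):
    """One 'subtraction' step: on every n-th frame, decrement all target
    weights by 1 and drop any target whose weight would become negative.
    Returns the (possibly shortened) weight and target lists."""
    if frame_idx % n_period != 0:
        return weights, targets          # not a subtraction frame
    keep = [(w - 1, t) for w, t in zip(weights, targets) if w - 1 >= 0]
    return [w for w, _ in keep], [t for _, t in keep]
```

A target with weight 0 is dropped on the next subtraction frame, while on non-subtraction frames the model is untouched.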
2.2 Updating the model M_O
If the model target vector V_model is empty, all targets detected in the current frame are input from the frame vector V_frame into V_model, and the corresponding weights in W_model are set to 1.
If the vector V_model already contains models of some targets, an attempt is made to add each target o_i of V_frame to V_model. For each o_i, the target o_j in V_model whose centroid has the minimum Euclidean distance to that of o_i is found: D_min(o_i, o_j). If this distance exceeds a maximum value, D_min > ε_dmax, the target o_i is added to V_model and a corresponding weight w_i = 1 is created in W_model. If D_min(o_i, o_j) ≤ ε_dmax, the similarity of targets o_i and o_j is evaluated.
If the compared targets satisfy all similarity criteria, targets o_i and o_j are merged. Target o_j is updated (replaced) by the parameters of target o_i, after which o_i is deleted from the vector V_frame. If w_j < W_max, the weight w_j corresponding to target o_j is increased to w_j = w_j + 1, where W_max is the maximum allowed weight of a target, a compromise between target detection speed and false-alarm rate; W_max should be proportional to or equal to the frame rate of the photoelectric system.
If the weight of a target reaches the maximum allowed value, the part of the halftone image bounded by the rectangle R_j describing target o_j is stored as the reference image of the target and added to its model in the vector V_model. A target o_j whose weight has reached the maximum, w_j = W_max, is considered reliably detected.
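Step 2.2 can be sketched as a nearest-centroid matching loop; this is an illustrative sketch in which the target representation (dicts with a `centroid` key), the similarity predicate argument, and all names are assumptions.

```python
import math

def update_model(model, weights, frame_targets, eps_dmax, w_max, similar):
    """Match each frame target to the nearest model target by centroid
    distance; add it as a new model target (weight 1) if too far, otherwise
    merge when similar and increment the weight up to w_max."""
    for ft in frame_targets:
        if not model:
            model.append(ft)
            weights.append(1)
            continue
        # nearest model target by Euclidean centroid distance
        j, d = min(
            ((k, math.dist(ft["centroid"], mt["centroid"]))
             for k, mt in enumerate(model)),
            key=lambda kd: kd[1])
        if d > eps_dmax:
            model.append(ft)               # new model target, weight 1
            weights.append(1)
        elif similar(ft, model[j]):
            model[j] = ft                  # o_j updated by o_i's parameters
            if weights[j] < w_max:
                weights[j] += 1
    return model, weights
```

Two far-apart detections seed the model with weight 1 each; a repeat detection near the first one raises its weight to 2 and refreshes its parameters. (Unlike the patent's step, this sketch matches later frame targets against targets added earlier in the same frame.)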
The similarity criteria of the present embodiment compare the two targets by area, aspect ratio and eccentricity against thresholds P_S, P_R and P_e, where S_i and S_j are the areas of targets o_i and o_j; r_i and r_j are the aspect ratios of the rectangles describing each target; and e_i and e_j are the eccentricities of the targets. For a given photoelectric system, the thresholds P_S, P_R, P_e are chosen empirically based on the expected operating conditions of the system.
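A similarity check over area, aspect ratio and eccentricity can be sketched as below. The patent's exact formulas are not reproduced here; a relative-difference form and the threshold values are assumptions for illustration only.

```python
def similar(t1, t2, p_s=0.3, p_r=0.3, p_e=0.3):
    """Return True if two targets agree in area S, aspect ratio r and
    eccentricity e to within the relative tolerances P_S, P_R, P_e
    (an assumed relative-difference formulation)."""
    def rel(a, b):
        return abs(a - b) / max(a, b, 1e-9)
    return (rel(t1["S"], t2["S"]) <= p_s and
            rel(t1["r"], t2["r"]) <= p_r and
            rel(t1["e"], t2["e"]) <= p_e)
```

Targets differing by a few percent in every feature pass; tripling the area fails the P_S check.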
2.3 Auto-tracking
For every reliably detected target in V_model (i.e., a model target of the scene whose weight has reached the maximum allowed value at least once) whose weight was not updated in step 2.2, in the case w_k = W_max − 1, an auto-tracking procedure is performed using the reference halftone image of the target model stored in the vector V_model.
Any auto-tracking algorithm may be used here. Since the integral image I_int is constructed during the binarization step of the present embodiment, it is convenient to reuse it at this step to speed up the cross-correlation computation. If the correlator finds the target, i.e. the maximum correlation coefficient in the search area is greater than a threshold, c_max ≥ C_min, the weight w_k of target o_k is increased and becomes the maximum:
w_k = (W_max − 1) + 1 = W_max.
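A correlation-based tracker can be sketched as a direct normalized cross-correlation search; the embodiment accelerates this with integral images, whereas the version below is a plain, unaccelerated illustration with assumed names and default threshold.

```python
import numpy as np

def ncc_track(frame, template, search_rect, c_min=0.8):
    """Find a stored reference image inside a search region by normalized
    cross-correlation; return the best (x, y) position in frame coordinates
    if the correlation peak c_max >= c_min, else None."""
    x0, y0, w, h = search_rect
    region = frame[y0:y0 + h, x0:x0 + w].astype(float)
    t = template.astype(float)
    t = t - t.mean()
    tn = np.sqrt((t ** 2).sum())
    th, tw = t.shape
    best, best_xy = -1.0, None
    for y in range(region.shape[0] - th + 1):
        for x in range(region.shape[1] - tw + 1):
            win = region[y:y + th, x:x + tw]
            win = win - win.mean()
            denom = np.sqrt((win ** 2).sum()) * tn
            c = (win * t).sum() / denom if denom > 0 else 0.0
            if c > best:
                best, best_xy = c, (x0 + x, y0 + y)
    return best_xy if best >= c_min else None
```

A patch cut from the frame itself is found at its true position with a correlation peak of 1.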
3. Analysis and decision-making
This is the final step. From the model target vector V_model, the targets o_det whose weights equal the maximum value, w_det = W_max, are selected. These targets are stored in a result vector V_result, which is the output of the automatic target detection method of the present embodiment.
4. Simulation experiment
Most steps of the proposed method can be implemented in multiple threads. The method was therefore implemented with the massively parallel CUDA technology, which allows real-time processing of 1920 × 1080 test images at more than 25 frames/s.
Configuration of the simulation computer:
Processor: Intel Core i5-4440, 3.1 GHz;
Graphics card: GeForce GTX 750 Ti (640 CUDA cores), 2 GB.
the working result of a multi-threaded implementation of the method using a test video sequence is given in fig. 2. The 4 targets are automatically detected and the horizon position is determined on the image.
The automatic offshore target detection system realized by the proposed method is an important component of the UAV's onboard photoelectric system; its image display performance reaches 100 frames/s at a resolution of 768 × 576.
The results of the automatic target detection method of the present invention are shown in Figs. 3 and 4.
Conclusion: the simulation results verify the performance and effectiveness of the proposed method.
Claims (7)
1. An automatic offshore object detection method, comprising:
s1, acquiring an offshore target video sequence of an airborne photoelectric system of the unmanned aerial vehicle;
s2, constructing a key target model M of the ocean scene O :
wherein M_O = (V_model, W_model): V_model denotes the target vector of the model, and W_model denotes the weight vector holding the weight of each target in V_model;
s3, obtaining a frame target vector of a suspicious target in the first frame image of the video sequence in the S1
S4, updating the key target model M O :
the frame target vector V_frame obtained in S3 is input into the model vector V_model of the key target model M_O, and the corresponding weights in W_model are set to 1;
s5, obtaining a frame target vector of a suspicious target in the next frame image of the video sequence in the S1
S6, updating the key target model M O :
the frame target vector V_frame obtained in S5 is input into the model vector V_model of the key target model M_O, and the corresponding weights in W_model are increased or decreased; it is then judged whether all frames of the offshore target video sequence have been processed: if not, go to S5; if so, go to S7;
s7, finding in the model target vector V_model the targets whose weight equals W_max and determining them as detected targets, W_max being the maximum allowed weight of a target;
wherein updating the key target model M_O in S6 comprises the following steps:
s61, every n frames, the weight corresponding to each target is decreased by 1; if the model target vector V_model is not empty and a target's weight becomes negative, the target is deleted from V_model and the corresponding weight from W_model, and the sizes of V_model and W_model are decreased by 1;
s62, inputting the frame target vector V_frame into the vector V_model of the key target model M_O comprises the following steps:
if the model target vector V_model is empty, the frame target vector V_frame of the current frame is input into V_model, and the corresponding weights in W_model are set to 1;
if the model target vector V_model is not empty, then for each target o_i in V_frame the target o_j in V_model whose centroid has the minimum Euclidean distance to that of o_i is found, D_min(o_i, o_j), and D_min(o_i, o_j) is compared with a set threshold ε_dmax to update V_model:
if D_min(o_i, o_j) > ε_dmax, the target o_i of the frame target vector V_frame is added to V_model;
if D is min (o i ,o j )≤ε dmax Evaluating the frame target vectorO in (1) i And the target vector of the modelMiddle o j If the similarity meets the criterion, merging the target o i And o j Object o j By the target o i After parameter replacement from the vectorIn delete target o i ;
if w_j < W_max, the weight w_j of target o_j is increased to w_j = w_j + 1, W_max being the maximum allowed weight of a target;
if w_j = W_max, the halftone image region defined by the rectangle R_j describing target o_j is stored as the reference image of the target and added to the model target vector V_model.
2. The method according to claim 1, wherein in S5 the frame target vector V_frame of suspicious targets in a frame image is acquired by:
s51, preprocessing, half-tone erosion and expansion are carried out on the frame image;
s52, carrying out image binarization;
s53, finding the sea horizon position;
s54, searching for and segmenting independent targets or targets on the horizon to obtain the frame target vector V_frame.
3. The offshore object automatic detection method according to claim 2, wherein in S52, the image binarization method comprises:
based on the integral image, the luminance average m (x, y) and the standard deviation σ (x, y) in the local neighborhood are calculated within a window W × W centered on the point (x, y):
m(x, y) = (I_int(x + W/2, y + W/2) + I_int(x − W/2, y − W/2) − I_int(x − W/2, y + W/2) − I_int(x + W/2, y − W/2)) / W²
I_int(x, y) denotes the value of the integral image I_int at point (x, y), equal to the sum of the luminances of all pixels of the original image along the rows and columns up to point (x, y):
I_int(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} i(u, v)
where i(u, v) denotes the pixel luminance of the original image;
T_sq denotes the sum of all pixels of the quadratic integral image within the window of size W × W centered on point (x, y), from which the standard deviation is obtained as σ(x, y) = sqrt(T_sq / W² − m(x, y)²);
the quadratic integral image I_sq(x, y) is equal to the sum of the squared luminances of all pixels along the rows and columns of the original image up to point (x, y):
I_sq(x, y) = Σ_{u ≤ x} Σ_{v ≤ y} i²(u, v)
where i²(u, v) denotes the squared pixel luminance of the original image;
determining the pixel contrast threshold t(x, y) in the local neighborhood from the obtained m(x, y) and σ(x, y):
t(x, y) = m(x, y) · [1 + k · (σ(x, y)/R − 1)]
wherein k denotes a fine-tuning parameter taking values in the range [0.2, 0.5], and R denotes the maximum value of the standard deviation;
and determining local-contrast regions under different illumination scenes according to t(x, y).
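The local binarization of claim 3 can be sketched as follows. The quantities defined in the claim (window mean and standard deviation from integral images, k ∈ [0.2, 0.5], R = maximum standard deviation) match Sauvola's thresholding rule t = m·(1 + k·(σ/R − 1)), which is therefore assumed here; the function name and defaults are illustrative, and W is assumed odd.

```python
import numpy as np

def local_binarize(img, w=15, k=0.3, R=128.0):
    """Binarize img with a per-pixel threshold computed from the mean m and
    standard deviation sigma over a WxW window, both obtained from an
    integral image and a quadratic integral image."""
    img = img.astype(np.float64)
    pad = w // 2
    # edge-replicate so every pixel has a full WxW window
    p = np.pad(img, ((1 + pad, pad), (1 + pad, pad)), mode="edge")
    I = p.cumsum(0).cumsum(1)              # integral image
    I2 = (p ** 2).cumsum(0).cumsum(1)      # quadratic integral image
    h, wd = img.shape
    y, x = np.mgrid[0:h, 0:wd]
    y0, x0, y1, x1 = y, x, y + w, x + w    # window corners in padded coords
    area = float(w * w)
    S = I[y1, x1] - I[y0, x1] - I[y1, x0] + I[y0, x0]
    S2 = I2[y1, x1] - I2[y0, x1] - I2[y1, x0] + I2[y0, x0]
    m = S / area
    sigma = np.sqrt(np.maximum(S2 / area - m ** 2, 0.0))
    t = m * (1.0 + k * (sigma / R - 1.0))  # assumed Sauvola-style rule
    return (img > t).astype(np.uint8)
```

A bright square on a dark background binarizes to exactly that square, while uniformly dark regions stay at 0.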
4. The offshore object automatic detection method according to claim 2, wherein in S53, the method for finding the sea horizon position is as follows:
scanning the binary image from top to bottom over a given range of rotation angles of the line relative to the horizontal, searching only straight lines L_{y,α}; each line L_{y,α}, described by a straight-line equation f(x, y), has a length defined as the sum of the bright pixels of the binary image I_bin lying on the line,
wherein the line L_{y,α} has a vertical displacement y relative to the coordinate origin and a rotation angle α;
while traversing the image sequentially in the vertical direction, a weight function is constructed for all lines from the line length L_{y,α} and the number of break points R_{y,α} counted while computing the line length,
wherein the coefficients k_1, k_2, k_3 of the weight function reflect the operating environment of the photoelectric system and satisfy k_1 + k_2 + k_3 = 1;
if the length of the line with the maximum weight exceeds a set threshold L_min, i.e. L_{y,α} ≥ L_min, the horizon is considered detected.
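The horizon search of claim 4 can be sketched as a scan over candidate lines. The patent's weight function is not reproduced here; a simple normalized score k1·L/M − k2·R/M over line length L and break count R is assumed, and all names are illustrative.

```python
import numpy as np

def find_horizon(img_bin, angles_deg, l_min, k=(0.7, 0.3)):
    """For each candidate line (vertical offset y, angle alpha), count the
    bright pixels L and bright-to-dark break points R along the line, score
    the line, and accept the best one only if its length L >= l_min."""
    h, w = img_bin.shape
    xs = np.arange(w)
    best = (-1.0, None)
    for y in range(h):
        for a in angles_deg:
            ys = np.round(y + xs * np.tan(np.radians(a))).astype(int)
            ok = (ys >= 0) & (ys < h)               # keep in-image samples
            on = img_bin[ys[ok], xs[ok]].astype(bool)
            L = int(on.sum())
            # break points: transitions from bright to dark along the line
            R = int(np.count_nonzero(np.diff(on.astype(int)) == -1))
            score = k[0] * L / w - k[1] * R / w
            if score > best[0]:
                best = (score, (y, a, L))
    y, a, L = best[1]
    return (y, a) if L >= l_min else None
```

A solid bright row at y = 4 is returned as the horizon at angle 0.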
5. The offshore object automatic detection method according to claim 2, wherein in S54, targets on the horizon are searched for and segmented to obtain the frame target vector V_frame as follows:
scanning along the horizon while calculating the average thickness H_avg of the horizon, and constructing an elevation map of binary targets in the neighborhood of the horizon;
the construction method of the elevation map comprises the following steps:
for all x ∈ [0, M], where M is the width of the image, traversing from point y_{x+} = y_0 + h_y to point y_{x−} = y_0 − h_y, where y_0 denotes the horizon ordinate at the given x and h_y denotes the height of the search region; while scanning from top to bottom, the first and last bright pixels are determined, and the difference of their ordinates is the height of the binary target at the given x;
scanning and analyzing the elevation map according to the average thickness of the horizon and the elevation map of the binary target on the horizon:
if the height h_i at a given x is greater than the criterion value h_thr, it can be assumed that a target is present at this point on the horizon; if the target is the first target detected during the scan, the criterion value is h_thr = H_avg + h_k, where h_k denotes a minimum threshold constant; otherwise, h_thr = H_avg + h_{i−1}, where h_{i−1} denotes the height of the last column of the previous target; once a starting point of a target is detected, the subsequent scan computes the average, minimum and maximum heights of the target; once the height h_i of the target drops to h_i ≤ h_thr, the end of the target is determined and the criterion value is recalculated as h_thr = H_avg + h_{i−1}; if the width of the target detected in this way is greater than the minimum possible value and the aspect ratio of the rectangle R describing the target falls within the range set for detection, the target o_i is input into the frame target vector V_frame.
6. The offshore object automatic detection method according to claim 1, wherein the similarity criteria P_S, P_R, P_e compare, respectively: the areas S_i and S_j of targets o_i and o_j; the aspect ratios r_i and r_j of the rectangles describing each target; and the eccentricities e_i and e_j of the targets.
7. The offshore object automatic detection method according to claim 6, wherein S6 further comprises: if the presence of a target cannot be confirmed in the current frame image, then for a target o_k whose weight w_k = W_max − 1, automatically tracking it using the reference image of target o_k stored in the model target vector V_model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910833101.XA CN110532989B (en) | 2019-09-04 | 2019-09-04 | Automatic detection method for offshore targets |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110532989A CN110532989A (en) | 2019-12-03 |
CN110532989B true CN110532989B (en) | 2022-10-14 |
Family
ID=68666838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910833101.XA Active CN110532989B (en) | 2019-09-04 | 2019-09-04 | Automatic detection method for offshore targets |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110532989B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
CN102494675A (en) * | 2011-11-30 | 2012-06-13 | 哈尔滨工业大学 | High-speed visual capturing method of moving target features |
CN105069429A (en) * | 2015-07-29 | 2015-11-18 | 中国科学技术大学先进技术研究院 | People flow analysis statistics method based on big data platform and people flow analysis statistics system based on big data platform |
US9390506B1 (en) * | 2015-05-07 | 2016-07-12 | Aricent Holdings Luxembourg S.A.R.L. | Selective object filtering and tracking |
WO2016131300A1 (en) * | 2015-07-22 | 2016-08-25 | 中兴通讯股份有限公司 | Adaptive cross-camera cross-target tracking method and system |
CN107315095A (en) * | 2017-06-19 | 2017-11-03 | 哈尔滨工业大学 | Many vehicle automatic speed-measuring methods with illumination adaptability based on Video processing |
CN109102523A (en) * | 2018-07-13 | 2018-12-28 | 南京理工大学 | A kind of moving object detection and tracking |
US10282852B1 (en) * | 2018-07-16 | 2019-05-07 | Accel Robotics Corporation | Autonomous store tracking system |
CN109934131A (en) * | 2019-02-28 | 2019-06-25 | 南京航空航天大学 | A kind of small target detecting method based on unmanned plane |
CN109977895A (en) * | 2019-04-02 | 2019-07-05 | 重庆理工大学 | A kind of wild animal video object detection method based on multi-characteristic fusion |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9092841B2 (en) * | 2004-06-09 | 2015-07-28 | Cognex Technology And Investment Llc | Method and apparatus for visual detection and inspection of objects |
US7447337B2 (en) * | 2004-10-25 | 2008-11-04 | Hewlett-Packard Development Company, L.P. | Video content understanding through real time video motion analysis |
US7801330B2 (en) * | 2005-06-24 | 2010-09-21 | Objectvideo, Inc. | Target detection and tracking from video streams |
Non-Patent Citations (2)
Title |
---|
Efficient method for detecting and tracking moving objects in video; Nilesh J. Uke, et al.; IEEE; 2016-12-31; pp. 343-348 *
Adaptive neural network controller design for low-orbit drag-free satellites (低轨无拖曳卫星的自适应神经网络控制器设计); Li Ji, et al.; Computing Technology and Automation (《计算技术与自动化》); 2014-06-30; pp. 1-6 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7373012B2 (en) | Detecting moving objects in videos with corner-based background model | |
US8094936B2 (en) | Method and apparatus to segment motion area in real-time to detect motion in surveillance camera system | |
Milgram | Region extraction using convergent evidence | |
US7430303B2 (en) | Target detection method and system | |
JP2006209755A (en) | Method for tracing moving object inside frame sequence acquired from scene | |
Kyo et al. | A robust vehicle detecting and tracking system for wet weather conditions using the IMAP-VISION image processing board | |
CN108537816A (en) | A kind of obvious object dividing method connecting priori with background based on super-pixel | |
CN111027497A (en) | Weak and small target rapid detection method based on high-resolution optical remote sensing image | |
Tiwari et al. | A survey on shadow detection and removal in images and video sequences | |
CN113205494B (en) | Infrared small target detection method and system based on adaptive scale image block weighting difference measurement | |
Mo et al. | Sea-sky line detection in the infrared image based on the vertical grayscale distribution feature | |
CN107977608B (en) | Method for extracting road area of highway video image | |
CN112613565B (en) | Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating | |
Hommos et al. | Hd Qatari ANPR system | |
CN110532989B (en) | Automatic detection method for offshore targets | |
CN109785318B (en) | Remote sensing image change detection method based on facial line primitive association constraint | |
Zhang et al. | Infrared small dim target detection based on region proposal | |
CN115512310A (en) | Vehicle type recognition method and system based on face features under video monitoring | |
CN113963178A (en) | Method, device, equipment and medium for detecting infrared dim and small target under ground-air background | |
Abdulla et al. | Triple-feature-based particle filter algorithm used in vehicle tracking applications | |
Aqel et al. | Traffic video surveillance: Background modeling and shadow elimination | |
CN112967305B (en) | Image cloud background detection method under complex sky scene | |
Yin et al. | Flue gas layer feature segmentation based on multi-channel pixel adaptive | |
Pojage et al. | Review on automatic fast moving object detection in video of surveillance system | |
CN113159157B (en) | Improved low-frequency UWB SAR leaf cluster hidden target fusion change detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||