CN113065454B - High-altitude parabolic target identification and comparison method and device - Google Patents

Info

Publication number
CN113065454B
CN113065454B (application CN202110338987.8A)
Authority
CN
China
Prior art keywords
parabolic
target
image
area
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110338987.8A
Other languages
Chinese (zh)
Other versions
CN113065454A (en)
Inventor
Xie Yu (谢宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Smart Life Technology Co Ltd
Original Assignee
Qingdao Hisense Smart Life Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Smart Life Technology Co Ltd filed Critical Qingdao Hisense Smart Life Technology Co Ltd
Priority to CN202110338987.8A priority Critical patent/CN113065454B/en
Publication of CN113065454A publication Critical patent/CN113065454A/en
Application granted granted Critical
Publication of CN113065454B publication Critical patent/CN113065454B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Abstract

The invention discloses a method and a device for identifying and comparing high-altitude parabolic targets. The method comprises: obtaining multi-frame images of a video to be identified and identifying two continuous frame images among them to obtain target points; removing noise points in the two continuous frame images by using a single-classification noise reduction model; removing non-parabolic objects in the two continuous frame images by using a single-classification non-parabolic model; and performing area comparison and color comparison on the targets in the two continuous frame images from which the noise points and non-parabolic objects have been removed, to determine the parabolic targets in the two continuous frame images. Because the single-classification noise reduction model and the single-classification non-parabolic model remove noise points and non-parabolic objects from the images, interference in the images can be reduced and the accuracy of parabolic target identification improved.

Description

High-altitude parabolic target identification and comparison method and device
Technical Field
The invention relates to the technical field of high-altitude parabola detection, and in particular to a method and a device for identifying and comparing high-altitude parabolic targets.
Background
High-altitude parabolas are generally sporadic events, and in video detection a great number of the detected objects are noise points (such as camera noise and noise caused by light and shadow changes) and non-parabolic objects (such as flying birds, flying insects, fallen leaves and the like). Thus, in high-altitude parabola detection scenarios, the identification and validation of parabolas is the critical problem. If it cannot be solved, a large number of false detections occur: noise points and non-parabolic objects are identified as parabolas, causing false alarms.
General image processing methods include dilation, erosion, noise reduction, and the like, and these methods are mentioned in numerous patents on high-altitude parabolic detection. However, when the parabolic target falls from a high floor, it is small in the picture (as shown in fig. 1); in this case, performing erosion or noise reduction causes the parabola to disappear from the picture.
In addition, even with the image processing methods described above, large noise points (for example those caused by light and shadow changes, as shown in fig. 2) cannot be removed. The probability that such noise points occur continuously is very small, but it is still larger than the probability that a true parabola occurs; if they are not processed, more than ten or even dozens of false alarms may occur every day.
Non-parabolic interference such as birds and insects is likewise difficult to eliminate with the general image processing methods mentioned above.
Disclosure of Invention
The embodiment of the invention provides a method and a device for identifying and comparing high-altitude parabolic targets, which solve the problem that noise and non-parabolic interference cannot be removed in image processing in the prior art.
In a first aspect, an embodiment of the present invention provides a method for identifying and comparing high-altitude parabolic targets, including:
acquiring multi-frame images of a video to be identified, and identifying two continuous frames of images in the multi-frame images to obtain a target point;
removing noise points in the two continuous frames of images by using a single classification noise reduction model;
removing the non-parabolic object in the two continuous frames of images by using a single-classification non-parabolic model;
and carrying out area comparison and color comparison on the targets in the two continuous frames of images without the noise points and the non-parabolic objects, and determining the parabolic targets in the two continuous frames of images.
In the technical scheme, the single-classification noise reduction model and the single-classification non-parabolic model are used for removing noise points and non-parabolic objects in the image, so that the interference given in the image can be reduced, and the accuracy of parabolic target identification is improved.
Optionally, the identifying two consecutive images of the multiple frames of images to obtain the target point includes:
carrying out graying processing on the multi-frame image;
and performing difference calculation on two continuous frames of images in the grayed multi-frame images to determine a target point.
Optionally, the single classification noise reduction model is determined according to the following steps:
acquiring a training sample marked with a noise point;
extracting features of each frame of image in the training sample marked with the noise points;
and inputting the extracted features into a preset single classification model for training until a preset training target is met, and obtaining the single classification noise reduction model.
Optionally, the single-classification non-parabolic model is determined according to the following steps:
acquiring a training sample marked with a non-parabolic object;
extracting features from each frame of image in the training sample marked with the non-parabolic object;
inputting the extracted features into a preset single classification model for training, and obtaining the single-classification non-parabolic model until a preset training target is met.
Optionally, the extracting features for each frame of image includes:
determining the area of the noise points or the non-parabolic areas in each frame of image according to the number of pixel points covered by the noise points or the non-parabolic connected domains;
determining the ratio of the area to the perimeter of each noise point or non-parabolic object in each frame of image;
determining the hue difference between each noise point or non-parabolic object and the background in each frame of image according to the color space;
determining the area of each noise point or non-parabolic object, the ratio of the area of each noise point or non-parabolic object to the perimeter and the hue difference of each noise point or non-parabolic object and the background in each frame image as the characteristics of each frame image.
Optionally, the area comparing the noise-removed point with the target in two consecutive frames of images of the non-parabolic object includes:
for the two continuous frames of images, if the area of the target in the previous frame of image is smaller than or equal to the area threshold, determining that the target in the previous frame of image and the target in the next frame of image are continuous parabolic targets when the ratio of the area of the target in the next frame of image to the area of the target in the previous frame of image is determined to be within a first area range;
if the area of the target in the previous frame image is larger than the area threshold value, when the ratio of the area of the target in the next frame image to the area of the target in the previous frame image is determined to be in a second area range, determining that the target in the previous frame image and the target in the next frame image are continuous parabolic targets;
the area threshold, the first area range, and the second area range are determined from parabolic experimental data.
Optionally, the color comparing the target in the two consecutive frames of images without the noise point and the non-parabolic object includes:
extracting the outline of the target by the frame difference of the gray level images aiming at the two continuous frames of images;
determining the contour of the target in a color map from the color map corresponding to the gray map according to the contour of the target in the gray map;
and calculating the picture similarity of the outlines of the targets in the two continuous frames of images, and if the picture similarity meets a similarity threshold, determining that the targets in the two continuous frames of images are continuous parabolic targets.
In a second aspect, an embodiment of the present invention provides an apparatus for identifying and comparing high-altitude parabolic targets, including:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring multi-frame images of a video to be identified;
the processing unit is used for identifying two continuous frames of images in the multi-frame images to obtain a target point; removing noise points in the two continuous frames of images by using a single classification noise reduction model; removing non-parabolic objects in the two continuous frames of images by using a single-classification non-parabolic model; and carrying out area comparison and color comparison on the targets in the two continuous frames of images without the noise points and the non-parabolic objects, and determining the parabolic targets in the two continuous frames of images.
Optionally, the processing unit is specifically configured to:
carrying out graying processing on the multi-frame image;
and performing difference calculation on two continuous frames of images in the grayed multi-frame images to determine a target point.
Optionally, the processing unit is specifically configured to:
determining the single classification noise reduction model according to the following steps:
acquiring a training sample marked with a noise point;
extracting features of each frame of image in the training sample marked with the noise points;
and inputting the extracted features into a preset single classification model for training until a preset training target is met, and obtaining the single classification noise reduction model.
Optionally, the processing unit is specifically configured to:
determining the single-classification non-parabolic model according to the following steps:
acquiring a training sample marked with a non-parabolic object;
extracting features from each frame of image in the training sample marked with the non-parabolic object;
and inputting the extracted features into a preset single classification model for training until a preset training target is met, and obtaining the single classification non-parabolic model.
Optionally, the processing unit is specifically configured to:
determining the area of the noise points or the non-parabolic areas in each frame of image according to the number of pixel points covered by the noise points or the non-parabolic connected domain;
determining the ratio of the area to the perimeter of each noise point or non-parabolic object in each frame of image;
determining the hue difference between each noise point or non-parabolic object and the background in each frame of image according to the color space;
determining the area of each noise point or non-parabolic object, the ratio of the area of each noise point or non-parabolic object to the perimeter and the hue difference of each noise point or non-parabolic object and the background in each frame image as the characteristics of each frame image.
Optionally, the processing unit is specifically configured to:
for the two continuous frames of images, if the area of the target in the previous frame of image is smaller than or equal to the area threshold, determining that the target in the previous frame of image and the target in the next frame of image are continuous parabolic targets when the ratio of the area of the target in the next frame of image to the area of the target in the previous frame of image is determined to be within a first area range;
if the area of the target in the previous frame image is larger than the area threshold, determining that the target in the previous frame image and the target in the next frame image are continuous parabolic targets when the ratio of the area of the target in the next frame image to the area of the target in the previous frame image is determined to be in a second area range;
the area threshold, the first area range, and the second area range are determined from parabolic experimental data.
Optionally, the processing unit is specifically configured to:
extracting the outline of the target by the frame difference of the gray level images aiming at the two continuous frames of images;
determining the outline of the target in the color map from the color map corresponding to the gray map according to the outline of the target in the gray map;
and calculating the picture similarity of the outlines of the targets in the two continuous frames of images, and if the picture similarity meets a similarity threshold, determining that the targets in the two continuous frames of images are continuous parabolic targets.
In a third aspect, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the high-altitude parabolic target identification and comparison method according to the obtained program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable non-volatile storage medium, which includes computer-readable instructions, and when the computer-readable instructions are read and executed by a computer, the computer is caused to execute the above-mentioned method for identifying and comparing high-altitude parabolic targets.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments will be briefly introduced below, and it is apparent that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings may be obtained based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of a parabolic target according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a noise point according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a system architecture according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating a method for identifying and comparing high altitude parabolic targets according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating a method for identifying and comparing high altitude parabolic targets according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of a single classification noise reduction model training according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of a single classification non-parabolic model training process according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart of a continuous target area comparison according to an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a process for continuous color distribution comparison of an object according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an apparatus for identifying and comparing high-altitude parabolic targets according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 3 is a system architecture according to an embodiment of the present invention. As shown in fig. 3, the system architecture may be a server 100, and the server 100 may include a processor 110, a communication interface 120, and a memory 130.
The communication interface 120 is used for communicating with other terminal devices, and transceiving information transmitted by the other terminal devices to implement communication.
The processor 110 is a control center of the server 100, connects various parts of the entire server 100 using various interfaces and lines, performs various functions of the server 100 and processes data by running or executing software programs and/or modules stored in the memory 130 and calling data stored in the memory 130. Alternatively, processor 110 may include one or more processing units.
The memory 130 may be used to store software programs and modules, and the processor 110 executes various functional applications and data processing by operating the software programs and modules stored in the memory 130. The memory 130 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to a business process, and the like. Further, the memory 130 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
It should be noted that the structure shown in fig. 3 is only an example, and the embodiment of the present invention is not limited thereto.
Based on the above description, fig. 4 shows in detail a flow of a method for high-altitude parabolic target recognition and comparison according to an embodiment of the present invention, where the flow may be executed by an apparatus for high-altitude parabolic target recognition and comparison.
As shown in fig. 4, the process specifically includes:
step 401, acquiring a multi-frame image of a video to be identified, and identifying two continuous frames of images in the multi-frame image to obtain a target point.
In the embodiment of the invention, the monitoring video can be obtained, and each frame image of the video is obtained. When the target point is identified, the multi-frame image is subjected to graying processing, and then the difference value of two continuous frames of images in the grayed multi-frame image is calculated to determine the target point.
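The graying and frame-difference step can be illustrated with a minimal pure-Python sketch. This is not the patent's implementation: the luminosity weights and the difference threshold `THRESH` are assumed illustrative values.

```python
THRESH = 25  # assumed binarization threshold for the frame difference

def to_gray(frame):
    """Convert an RGB frame (nested lists of (r, g, b) triples) to grayscale
    using the common luminosity weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in frame]

def frame_diff(gray_prev, gray_next, thresh=THRESH):
    """Return a binary mask marking pixels whose grayscale value changed by
    more than `thresh` between two consecutive frames; these pixels form the
    candidate target points."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(rp, rn)]
            for rp, rn in zip(gray_prev, gray_next)]
```

In a real system the same operations would normally be done with a vectorized library, but the logic — grayscale conversion followed by a thresholded absolute difference — is the same.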
And 402, removing noise points in the two continuous frames of images by using a single classification noise reduction model.
In embodiments of the present invention, noise is common in high-altitude parabolic video. It is often not noticeable to the naked eye, but many noise points are easily found once frame differences are computed, especially when lighting and shadows change. A parabola, by contrast, is an accidental event: it is hard to collect a large amount of data from real parabolic events, and data collected by deliberately throwing objects can hardly cover most kinds of parabolas. For this situation, a single-classification method can be used to learn the characteristics of the noise: even without knowing what a parabola looks like, the system at least knows what noise looks like, which allows the noise to be removed.
Single-classification methods that may be used include isolation forests, one-class support vector machines (one-class SVM), and the like. Monitoring videos for high-altitude parabola prevention are collected under various conditions, and noise-point data are extracted by the frame-difference method. Model training is then performed using the area, shape, color, and similar attributes of the noise points as features.
There are various ways to measure the area of a noise point, for example using its circumscribed rectangle. To better distinguish noise from a true parabola, the number of pixels covered by the noise point's connected domain is used instead. The shape feature is characterized by the ratio of area to perimeter: the shape of a noise point is generally irregular, and sometimes hollow (such as noise caused by camera shaking), so these two features discriminate noise points well.
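The two shape features just described can be sketched in pure Python. The 4-connectivity choice and the perimeter definition (count of component pixel sides bordering the background or the image edge) are assumptions, not details fixed by the patent.

```python
from collections import deque

def component_pixels(mask, start):
    """Flood-fill the 4-connected component containing `start` (assumed to
    be a foreground pixel) in a binary mask given as nested lists."""
    h, w = len(mask), len(mask[0])
    seen, todo = {start}, deque([start])
    while todo:
        y, x = todo.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                seen.add((ny, nx))
                todo.append((ny, nx))
    return seen

def shape_features(mask, start):
    """Return (area, area/perimeter) for the component containing `start`.
    Area is the pixel count of the connected domain; perimeter counts the
    pixel sides that face background or the image border."""
    comp = component_pixels(mask, start)
    h, w = len(mask), len(mask[0])
    perimeter = sum(
        1
        for (y, x) in comp
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
        if not (0 <= y + dy < h and 0 <= x + dx < w and (y + dy, x + dx) in comp)
    )
    return len(comp), len(comp) / perimeter
```

A compact blob has a relatively high area-to-perimeter ratio, while the irregular or hollow shapes typical of noise score lower, which is why the ratio is a useful feature.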
Noise caused by changes in shading generally fluctuates in brightness without a change in color. That is, in the HSV color space (hue H, saturation S, value V), the hue of a noise point is close to the hue of the background. In principle, noise can therefore be eliminated by comparing the hue of a detected target with the hue at the same position under normal conditions, so the hue difference between the noise point and the background can also serve as a feature in single-classification machine-learning training.
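The hue-difference feature can be computed with the standard library's `colorsys` module. The circular-distance handling and the example colors below are illustrative; a pure brightness change (a shadow) scales all three RGB channels and leaves the hue unchanged, so the difference is near zero.

```python
import colorsys

def hue(rgb):
    """Hue in [0, 1) of an 8-bit RGB triple, via HSV conversion."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]

def hue_difference(rgb_a, rgb_b):
    """Circular distance between two hues, in [0, 0.5]; small values mean
    the colors differ mainly in brightness, as shadow noise does."""
    d = abs(hue(rgb_a) - hue(rgb_b))
    return min(d, 1.0 - d)
```

For example, a reddish patch and its shadowed version such as `(100, 60, 60)` and `(50, 30, 30)` have identical hue, whereas pure red and pure green differ by a third of the hue circle.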
The features above are mainly trained with conventional machine learning. A deep learning method (a convolutional neural network) could also be used for single-class learning of the noise, but deep learning generally requires more computing resources and is more costly; reducing cost requires a lightweight model. This is consistent with the single-classification approach described above.
Specifically, when a single classification noise reduction model is trained, a training sample marked with noise points needs to be obtained; then extracting features of each frame of image in the training sample marked with the noise points; and finally, inputting the extracted features into a preset single classification model for training until a preset training target is met, and obtaining a single classification noise reduction model.
And 403, removing the non-parabolic object in the two continuous frames of images by using a single-classification non-parabolic model.
Based on the technical concept of the single-classification noise reduction model, the single-classification non-parabolic model can be trained in the same way: obtain a training sample marked with non-parabolic objects; extract features from each frame of image in the training sample; and input the extracted features into a preset single-classification model for training until a preset training target is met, obtaining the single-classification non-parabolic model.
The process of extracting the features may specifically be: determining the area of the noise points or the non-parabolic areas in each frame of image according to the number of pixel points covered by the noise points or the non-parabolic connected domains; determining the ratio of the area to the perimeter of each noise point or non-parabolic object in each frame of image; determining the hue difference between each noise point or non-parabolic object and the background in each frame of image according to the color space; and determining the area of each noise point or non-parabolic area, the ratio of the area of each noise point or non-parabolic area to the perimeter and the hue difference of each noise point or non-parabolic area to the background in each frame image as the characteristics of each frame image.
Under the condition that enough training samples of non-parabolic objects (flying birds, flying insects and the like) are collected, the noise can be removed in the same way, namely the characteristics of various non-parabolic objects are learned by using a single classification method.
In addition, the movement mode of the non-parabolic object is generally different from that of the parabolic falling object, and in the case of failure of identification by using a single classification model, the non-parabolic object can be distinguished according to the characteristics of the movement of the free falling object. See in particular the prior art high altitude parabolic starting point determination scheme.
And step 404, performing area comparison and color comparison on the targets in the two continuous frames of images without the noise points and the non-parabolic objects, and determining parabolic targets in the two continuous frames of images.
The area comparison mainly aims at two continuous frames of images, and if the area of the target in the previous frame of image is smaller than or equal to the area threshold, when the ratio of the area of the target in the next frame of image to the area of the target in the previous frame of image is determined to be within a first area range, the target in the previous frame of image and the target in the next frame of image are determined to be continuous parabolic targets; and if the area of the target in the previous frame image is larger than the area threshold, determining that the target in the previous frame image and the target in the next frame image are continuous parabolic targets when the ratio of the area of the target in the next frame image to the area of the target in the previous frame image is determined to be in a second area range.
Wherein the area threshold, the first area range and the second area range are determined from parabolic experimental data.
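The two-branch area test described above can be sketched as follows. The concrete values of the area threshold `a_thre`, the ratio parameter `r`, the expected ratio `k` and the tolerance `eps` are placeholders; the text states they must be determined from parabolic experiments.

```python
def same_target_by_area(a_prev, a_next, a_thre=50, r=0.6, k=1.05, eps=0.3):
    """Decide whether two consecutive detections are the same falling object
    by comparing their areas. Parameter values are illustrative only."""
    ratio = a_next / a_prev
    if a_prev <= a_thre:
        # Small or distant target: detection error dominates, so only
        # require the ratio to fall in the symmetric first range [r, 1/r].
        return r <= ratio <= 1.0 / r
    # Large target: the ratio should sit near the geometry-derived constant
    # K, i.e. inside the second range [K - eps*K, K + eps*K].
    return k - eps * k <= ratio <= k + eps * k
```

With these placeholder values, a small target growing from 40 to 50 pixels passes the first branch, while a jump from 40 to 100 pixels is rejected.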
When color comparison is carried out, extracting the outline of a target through the frame difference of the gray level images aiming at two continuous frames of images; determining the outline of the target in the color map from the color map corresponding to the gray map according to the outline of the target in the gray map; and calculating the image similarity of the outlines of the targets in the two continuous frames of images, and if the image similarity meets a similarity threshold, determining that the targets in the two continuous frames of images are continuous parabolic targets. Wherein, the similarity threshold value can be set according to experience.
In the specific implementation process, the targets detected at the two moments before and after need to be confirmed to be the same target, so that target comparison is needed. The comparison method comprises target area and color distribution.
First consider the change in parabolic area.
Normally, during the fall of the parabola, the object gets closer and closer to the camera, so its area in the image grows larger and larger. Let the target area at the previous time be a_t and at the next time be a_{t+1}; then a_{t+1} > a_t. However, when the parabola is small or far from the camera, the detection error is large and the area cannot be strictly constrained. It is reasonable only to require the area ratio between the two consecutive detections (typically two successive pictures) to lie within a certain range. For example, with a parameter r (r < 1), the reasonable area-ratio range is [r, 1/r] (the first area range): if the target area ratio of the current frame to the next frame falls in this range, the two detections are regarded as the same target.
When the target is large enough (a_t > a_thre, where a_thre is the area threshold), the error in the target area has little influence, and generally one can require a_{t+1}/a_t > 1 and a_{t+1}/a_t ≈ K. Here K is a ratio calculated from the image geometry (the mapping from pixel distance to true distance) without considering error.
The actual length L_x represented by one pixel in the horizontal direction depends on the pixel's vertical coordinate y and can be expressed as:
[Equation (1) is rendered as an image in the original and is not recoverable here.]
The vertical pixel coordinate y, mapped to an actual distance y′ on the ground plane, can be expressed as:
y′ = γ·L_x(y) ……………………………………(2)
the three parameters in equations (1) and (2) can be obtained by multi-point calibration of the pixel length and the actual length in the picture.
For convenience of explanation, assume a rectangular object of actual height H and width W is translated along the floor; on the camera picture its height is h_pix pixels and its width is w_pix pixels, and its center coordinate on the screen is (x, y). Then one can obtain:
[Equation (3) is rendered as an image in the original and is not recoverable here.]
H = γ·L_x(y − 0.5h_pix) − γ·L_x(y + 0.5h_pix) …………………(4)
From (3), one can obtain:
[Equation (5) is rendered as an image in the original and is not recoverable here.]
Solving the quadratic equation obtained from (4) yields:
[Equation (6) is rendered as an image in the original and is not recoverable here.]
Then the area ratio between times t+1 and t is:
[Equation (7) is rendered as an image in the original and is not recoverable here.]
taking into account the existence of errors, a t+1 /a t Should be in the range centered on K, i.e., [ K- ε K, K + ε K](second area range). ε is a positive number less than 1.
These three parameters (r, a_thre, ε) need to be determined by parabolic experiments.
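The equation images for (1), (3), (5), (6) and (7) do not survive in this text rendering. Purely as an illustrative reconstruction — assuming a simple inverse-perspective form for L_x(y), which the patent text does not confirm — expressions consistent with the surviving equations (2) and (4) would be:

```latex
% Assumed model (illustrative, not recovered from the original images):
L_x(y) = \frac{\alpha}{y + \beta} \qquad (1')

% The horizontal extent W = w_{\mathrm{pix}}\, L_x(y) would give
w_{\mathrm{pix}} = \frac{W\,(y + \beta)}{\alpha} \qquad (5')

% Substituting (1') into (4),
% H = \gamma L_x(y - 0.5 h_{\mathrm{pix}}) - \gamma L_x(y + 0.5 h_{\mathrm{pix}}),
% yields a quadratic in h_{\mathrm{pix}}:
\frac{H}{4}\, h_{\mathrm{pix}}^2 + \gamma\alpha\, h_{\mathrm{pix}} - H\,(y + \beta)^2 = 0 \qquad (6')

% so the predicted area ratio between image rows y_{t+1} and y_t is
K = \frac{h_{\mathrm{pix}}(y_{t+1})\, w_{\mathrm{pix}}(y_{t+1})}
         {h_{\mathrm{pix}}(y_{t})\, w_{\mathrm{pix}}(y_{t})} \qquad (7')
```

Under this assumed model, the three calibration parameters α, β, γ of equations (1) and (2) are obtained by multi-point calibration, as the text states.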
Next, the color-distribution difference between the targets in the two consecutive frames also needs to be compared. The target contour can be extracted from the frame difference of the grayscale images, and the target at the same position is then taken from the original color image. What follows is essentially a picture-similarity comparison, which can be implemented with various algorithms, such as three-channel color-histogram comparison or grayscale-histogram comparison; a deep-learning method can also be used, in which case the model needs to be lightweight in view of computing-resource consumption.
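A minimal sketch of the three-channel color-histogram comparison mentioned above, using histogram intersection. The bin count and the intersection metric are our choices; the patent only names histogram comparison in general.

```python
import numpy as np

def _channel_hist(patch: np.ndarray, ch: int, bins: int) -> np.ndarray:
    """Normalized histogram of one color channel of a target patch."""
    hist, _ = np.histogram(patch[..., ch], bins=bins, range=(0, 256))
    hist = hist.astype(np.float64)
    total = hist.sum()
    return hist / total if total > 0 else hist

def color_hist_similarity(patch_a: np.ndarray, patch_b: np.ndarray,
                          bins: int = 32) -> float:
    """Three-channel histogram-intersection similarity in [0, 1].

    1.0 means identical color distributions of the two target patches.
    """
    scores = [np.minimum(_channel_hist(patch_a, c, bins),
                         _channel_hist(patch_b, c, bins)).sum()
              for c in range(3)]
    return float(np.mean(scores))
```

A deep-learning embedding distance could replace this score, at higher computational cost, which is why the text asks for a lightweight model.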
To better explain the embodiment of the present invention, the above process of high-altitude parabolic target recognition is described below in a specific implementation scenario, as follows:
as shown in fig. 5, the specific steps are as follows:
step 501, performing graying processing on the picture of each video frame.
Step 502, performing a difference between the two grayed frames to obtain the target points.
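Steps 501 and 502 might be sketched as follows; the luma weights and the difference threshold are illustrative assumptions, not values given by the patent.

```python
import numpy as np

def to_gray(frame: np.ndarray) -> np.ndarray:
    """Step 501: luma-weighted graying of an RGB video frame."""
    weights = np.array([0.299, 0.587, 0.114])
    return (frame.astype(np.float64) @ weights).astype(np.uint8)

def frame_diff_mask(gray_prev: np.ndarray, gray_next: np.ndarray,
                    thresh: int = 25) -> np.ndarray:
    """Step 502: binary mask of candidate target points from the
    difference of two grayed frames. `thresh` is an assumed tuning value."""
    diff = np.abs(gray_next.astype(np.int16) - gray_prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

The connected domains of the resulting mask are the target-point candidates passed to the noise and non-parabolic filters below.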
Step 503, removing noise points by using the single-classification noise reduction model.
As shown in fig. 6, the training process of the single-classification noise reduction model is specifically as follows:
Step 601, selecting videos that contain neither parabolic objects nor non-parabolic moving objects (birds, winged insects, and the like), and acquiring enough noise samples.
Step 602, calculating the area of the noise point according to the number of the pixel points covered by the noise point connected domain.
Step 603, calculating the ratio of the area of each noise point to the perimeter.
Step 604, calculating the hue difference between the noise point and the background in the HSV color space.
Step 605, using the extracted features, performing single classification (isolated forest, one-class-SVM, etc.) model training on the noise points.
Step 606, testing on the test set to ensure that sufficient accuracy and recall are achieved, obtaining the trained single-classification noise reduction model.
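The feature extraction of steps 602 to 604 could look like the following sketch. The boundary-pixel perimeter estimate and the helper signature are our assumptions, not the patent's exact procedure.

```python
import numpy as np

def blob_features(mask: np.ndarray, hue: np.ndarray,
                  bg_hue: float) -> np.ndarray:
    """Features for the single-classification models (steps 602-604).

    mask   -- binary image of one connected blob (noise-point candidate)
    hue    -- the H plane of the frame in HSV space
    bg_hue -- mean hue of the background around the blob (assumed given)
    """
    m = mask.astype(bool)
    area = float(m.sum())  # step 602: pixel count of the connected domain
    # Step 603: approximate the perimeter as the number of blob pixels
    # that are not 4-neighbour interior pixels (a simplification).
    p = np.pad(m, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = float((m & ~interior).sum()) or 1.0
    # Step 604: hue difference between the blob and the background.
    hue_diff = abs(float(hue[m].mean()) - bg_hue)
    return np.array([area, area / perimeter, hue_diff])
```

Stacking one such feature row per labelled noise blob gives the training matrix for the single-classification model of step 605 (e.g. an isolation forest or one-class SVM).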
Step 504, removing the non-parabolic object using the single classification non-parabolic model.
The training process of the single-classification non-parabolic model, shown in fig. 7, is as follows:
step 701, selecting a video with non-parabolic objects (birds, winged insects, and the like) to obtain enough non-parabolic samples.
Step 702, calculating the area of the non-parabolic object according to the number of pixel points covered by its connected domain.
Step 703, calculating the ratio of each non-parabolic area to the perimeter.
Step 704, in the HSV color space, a hue difference between the non-parabolic object and the background is calculated.
Step 705, using the extracted features, single classification (isolated forest, one-class-SVM, etc.) model training is performed on the non-parabolic objects.
Step 706, tests are performed on the test set to ensure that sufficient accuracy and recall are achieved.
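Steps 701 to 706 mirror the noise-model training. As an illustrative sketch with stand-in random features, fitting one of the single-classification models the text names (scikit-learn's IsolationForest) and using it to filter candidate detections might look like:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Stand-in training data: rows are (area, area/perimeter, hue_diff)
# feature vectors of labelled non-parabolic samples (assumed pre-scaled).
rng = np.random.RandomState(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 3))

model = IsolationForest(n_estimators=100, random_state=0).fit(X_train)

# A candidate whose features resemble the trained class (predict == 1)
# is removed; outliers (predict == -1) remain parabolic candidates.
candidates = np.array([[0.0, 0.0, 0.0],      # resembles the trained class
                       [50.0, 50.0, 50.0]])  # clearly does not
labels = model.predict(candidates)
parabolic_candidates = candidates[labels == -1]
```

A one-class SVM (`sklearn.svm.OneClassSVM`) could be swapped in with the same fit/predict interface; the choice between the two is left open by the text.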
Step 505, performing area comparison at adjacent time points on the targets from which noise points and non-parabolic objects have been removed; a target meeting the area-comparison condition is a parabolic target.
A specific process of continuous target area comparison may be as shown in fig. 8, and specifically includes:
step 801, if the area of the target in the previous frame image is smaller than or equal to the area threshold, when it is determined that the ratio of the area of the target in the next frame image to the area of the target in the previous frame image is within the first area range, determining that the target in the previous frame image and the target in the next frame image are continuous parabolic targets.
The area of the target at the previous time is a_t and at the subsequent time is a_{t+1}. If a_t ≤ a_thre (the area threshold), judge whether a_{t+1}/a_t is in the range [r, 1/r] (the first area range), where 0 < r < 1. If so, the target is considered a continuous parabolic target.
Step 802, if the area of the target in the previous frame image is larger than the area threshold, when it is determined that the ratio of the area of the target in the next frame image to the area of the target in the previous frame image is within a second area range, it is determined that the target in the previous frame image and the target in the next frame image are continuous parabolic targets.
If the area of the target a_t is greater than a_thre, equation (7) is used to obtain the standard ratio K of a_{t+1}/a_t, and, considering the error coefficient ε, it is judged whether a_{t+1}/a_t is within the range [K − εK, K + εK] (the second area range). If so, the target is considered a continuous parabolic target.
Step 803, by collecting enough parabolic experiment data, the three parameters (r, a_thre, ε) can be obtained.
Step 506, performing color comparison at adjacent time points on the targets from which noise points and non-parabolic objects have been removed; a target meeting the color-comparison condition is a parabolic target.
The specific process of continuous target color contrast may be as shown in fig. 9, specifically as follows:
Step 901, extracting the target contour through the frame difference of the grayscale images.
Step 902, taking out the target at the same position in the original color image.
Step 903, comparing image similarity, for example by three-channel color-histogram comparison or grayscale-histogram comparison; if a deep-learning method is used, the model should be made lightweight in view of computing-resource consumption.
Step 507, a target that passes the filtering of steps 503 to 506 is considered a parabolic target.
In the embodiment of the invention, multi-frame images of a video to be identified are obtained; two continuous frames among them are identified to obtain target points; a single-classification noise reduction model is used to remove noise points from the two frames; a single-classification non-parabolic model is used to remove non-parabolic objects from them; and area comparison and color comparison are performed on the remaining targets to determine the parabolic targets in the two continuous frames. Because the single-classification noise reduction model and the single-classification non-parabolic model remove noise points and non-parabolic objects from the images, interference in the images is reduced and the accuracy of parabolic target recognition is improved.
Based on the same technical concept, fig. 10 exemplarily shows the structure of an apparatus for high-altitude parabolic target recognition and comparison provided by an embodiment of the present invention, and the apparatus can perform a flow of high-altitude parabolic target recognition and comparison.
As shown in fig. 10, the apparatus specifically includes:
an acquisition unit 1001 configured to acquire a multi-frame image of a video to be identified;
the processing unit 1002 is configured to identify two consecutive frames of images in the multiple frames of images to obtain a target point; removing noise points in the two continuous frames of images by using a single classification noise reduction model; removing the non-parabolic object in the two continuous frames of images by using a single-classification non-parabolic model; and carrying out area comparison and color comparison on the targets in the two continuous frames of images without the noise points and the non-parabolic objects, and determining the parabolic targets in the two continuous frames of images.
Optionally, the processing unit 1002 is specifically configured to:
carrying out graying processing on the multi-frame image;
and performing difference calculation on two continuous frames of images in the grayed multi-frame images to determine a target point.
Optionally, the processing unit 1002 is specifically configured to:
determining the single classification noise reduction model according to the following steps:
acquiring a training sample marked with a noise point;
extracting features from each frame of image in the training samples marked with the noise points;
and inputting the extracted features into a preset single classification model for training until a preset training target is met, and obtaining the single classification noise reduction model.
Optionally, the processing unit 1002 is specifically configured to:
determining the single classification non-parabolic model according to the following steps:
acquiring a training sample marked with a non-parabolic object;
extracting features from each frame of image in the training sample marked with the non-parabolic object;
and inputting the extracted features into a preset single classification model for training until a preset training target is met, and obtaining the single classification non-parabolic model.
Optionally, the processing unit 1002 is specifically configured to:
determining the area of the noise points or the non-parabolic areas in each frame of image according to the number of pixel points covered by the noise points or the non-parabolic connected domains;
determining the ratio of the area to the perimeter of each noise point or non-parabolic object in each frame of image;
determining the hue difference between each noise point or non-parabolic object and the background in each frame of image according to the color space;
determining the area of each noise point or non-parabolic object, the ratio of the area of each noise point or non-parabolic object to the perimeter and the hue difference of each noise point or non-parabolic object and the background in each frame image as the characteristics of each frame image.
Optionally, the processing unit 1002 is specifically configured to:
for the two continuous frames of images, if the area of the target in the previous frame of image is smaller than or equal to the area threshold, determining that the target in the previous frame of image and the target in the next frame of image are continuous parabolic targets when the ratio of the area of the target in the next frame of image to the area of the target in the previous frame of image is determined to be within a first area range;
if the area of the target in the previous frame image is larger than the area threshold value, when the ratio of the area of the target in the next frame image to the area of the target in the previous frame image is determined to be in a second area range, determining that the target in the previous frame image and the target in the next frame image are continuous parabolic targets;
the area threshold, the first area range, and the second area range are determined from parabolic experimental data.
Optionally, the processing unit 1002 is specifically configured to:
extracting the outline of the target by the frame difference of the gray level images aiming at the two continuous frames of images;
determining the outline of the target in the color map from the color map corresponding to the gray map according to the outline of the target in the gray map;
and calculating the image similarity of the outlines of the targets in the two continuous frames of images, and if the image similarity meets a similarity threshold, determining that the targets in the two continuous frames of images are continuous parabolic targets.
Based on the same technical concept, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the high-altitude parabolic target identification and comparison method according to the obtained program.
Based on the same technical concept, embodiments of the present invention further provide a computer-readable non-volatile storage medium, which includes computer-readable instructions, and when the computer reads and executes the computer-readable instructions, the computer is caused to execute the above-mentioned method for identifying and comparing high-altitude parabolic targets.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (7)

1. A method for identifying and comparing high-altitude parabolic targets is characterized by comprising the following steps:
acquiring multi-frame images of a video to be identified, and identifying two continuous frames of images in the multi-frame images to obtain a target point;
removing noise points in the two continuous frames of images by using a single classification noise reduction model; the single-classification noise reduction model is obtained by extracting features from each frame of image in a training sample marked with noise points and inputting the extracted features into a preset single-classification model for training until a preset training target is met;
removing the non-parabolic object in the two continuous frames of images by using a single-classification non-parabolic model; the single-classification non-parabolic model is obtained by extracting features from each frame of image in a training sample marked with a non-parabolic object, and inputting the extracted features into a preset single classification model for training until a preset training target is met;
the feature extraction for each frame of image comprises the following steps: determining the area of the noise points or the non-parabolic areas in each frame of image according to the number of pixel points covered by the noise points or the non-parabolic connected domains; determining the ratio of the area to the perimeter of each noise point or non-parabolic object in each frame of image; determining the hue difference between each noise point or non-parabolic object and the background in each frame of image according to the color space; determining the area of each noise point or non-parabolic object, the ratio of the area of each noise point or non-parabolic object to the perimeter and the hue difference of each noise point or non-parabolic object and the background in each frame of image as the characteristics of each frame of image;
and carrying out area comparison and color comparison on the targets in the two continuous frames of images without the noise points and the non-parabolic objects, and determining the parabolic targets in the two continuous frames of images.
2. The method of claim 1, wherein said identifying a target point for two consecutive images of said plurality of images comprises:
carrying out graying processing on the multi-frame image;
and performing difference calculation on two continuous frames of images in the grayed multi-frame images to determine a target point.
3. The method of any one of claims 1 to 2, wherein performing area comparison on the targets in the two consecutive frames of images from which the noise points and non-parabolic objects have been removed comprises:
for the two continuous frames of images, if the area of the target in the previous frame of image is smaller than or equal to the area threshold, determining that the target in the previous frame of image and the target in the next frame of image are continuous parabolic targets when the ratio of the area of the target in the next frame of image to the area of the target in the previous frame of image is determined to be within a first area range;
if the area of the target in the previous frame image is larger than the area threshold, determining that the target in the previous frame image and the target in the next frame image are continuous parabolic targets when the ratio of the area of the target in the next frame image to the area of the target in the previous frame image is determined to be in a second area range;
the area threshold, the first area range, and the second area range are determined from parabolic experimental data.
4. The method of any one of claims 1 to 2, wherein performing color comparison on the targets in the two consecutive frames of images from which the noise points and non-parabolic objects have been removed comprises:
extracting the outline of the target by the frame difference of the gray level images aiming at the two continuous frames of images;
determining the contour of the target in a color map from the color map corresponding to the gray map according to the contour of the target in the gray map;
and calculating the image similarity of the outlines of the targets in the two continuous frames of images, and if the image similarity meets a similarity threshold, determining that the targets in the two continuous frames of images are continuous parabolic targets.
5. An apparatus for identifying and comparing high altitude parabolic target, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring multi-frame images of a video to be identified;
the processing unit is used for identifying two continuous frames of images in the multi-frame images to obtain a target point; removing noise points in the two continuous frames of images by using a single classification noise reduction model; the single-classification noise reduction model is obtained by extracting features from each frame of image in a training sample marked with noise points and inputting the extracted features into a preset single classification model for training until a preset training target is met; removing the non-parabolic object in the two continuous frames of images by using a single-classification non-parabolic model; the single-classification non-parabolic model is obtained by extracting features from each frame of image in a training sample marked with a non-parabolic object, and inputting the extracted features into a preset single classification model for training until a preset training target is met; the feature extraction for each frame of image comprises the following steps: determining the area of the noise points or the non-parabolic areas in each frame of image according to the number of pixel points covered by the noise points or the non-parabolic connected domains; determining the ratio of the area to the perimeter of each noise point or non-parabolic object in each frame of image; determining the hue difference between each noise point or non-parabolic object and the background in each frame of image according to the color space; determining the area of each noise point or non-parabolic object, the ratio of the area of each noise point or non-parabolic object to the perimeter and the hue difference of each noise point or non-parabolic object and the background in each frame of image as the characteristics of each frame of image;
and carrying out area comparison and color comparison on the targets in the two continuous frames of images without the noise points and the non-parabolic objects, and determining the parabolic targets in the two continuous frames of images.
6. A computing device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any one of claims 1 to 4 in accordance with the obtained program.
7. A computer-readable non-transitory storage medium including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method of any one of claims 1 to 4.
CN202110338987.8A 2021-03-30 2021-03-30 High-altitude parabolic target identification and comparison method and device Active CN113065454B (en)

Publications (2)

Publication Number Publication Date
CN113065454A CN113065454A (en) 2021-07-02
CN113065454B true CN113065454B (en) 2023-01-17




