CN118097526A - Flying object identification method and system based on image processing - Google Patents

Flying object identification method and system based on image processing

Info

Publication number
CN118097526A
CN118097526A (application CN202410505939.7A)
Authority
CN
China
Prior art keywords
flying object
target
image
target flying
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410505939.7A
Other languages
Chinese (zh)
Inventor
李家悦
郑明炀
苏小杭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Zhuohang Special Equipment Co ltd
Original Assignee
Fujian Zhuohang Special Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Zhuohang Special Equipment Co ltd filed Critical Fujian Zhuohang Special Equipment Co ltd
Priority to CN202410505939.7A
Publication of CN118097526A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to the technical field of computer vision and discloses a method and a system for identifying a flying object based on image processing.

Description

Flying object identification method and system based on image processing
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a system for identifying flying objects based on image processing.
Background
Identifying and indicating flying objects is an important function in security monitoring, mixed (augmented) reality and other applications. Fast, intelligent recognition of a flying object and indication of its position are the basis for any subsequent handling. In the related art, when the background color of the current shot is close to the color of the flying object, recognition accuracy is poor and user requirements cannot be met.
Disclosure of Invention
The main purpose of the invention is to provide a method and a system for identifying flying objects based on image processing, aiming to solve the technical problem in the prior art that, when the color of the shooting background is close to that of the flying object, recognition accuracy is poor and user requirements cannot be met.
To achieve the above object, in a first aspect, an embodiment of the present application provides a method for identifying a flying object based on image processing, the method including:
Acquiring a sequence frame image shot by an image pickup device, wherein the sequence frame image is an image sequence combination formed by images continuously shot by the image pickup device within a preset duration according to a time sequence;
performing target segmentation on the sequence frame images based on the gray values of the images to obtain a plurality of target flying object images;
inputting the plurality of target flying object images into a pre-trained convolutional neural network for feature recognition to obtain a prediction type and a corresponding prediction probability of the target flying object, wherein the prediction type is one of an unmanned aerial vehicle and a flying bird;
Determining morphological change information of the target flying object according to the plurality of target flying object images under the condition that the prediction probability corresponding to the target flying object prediction type is smaller than or equal to a first probability threshold value;
inputting the morphological change information of the target flying object into a pre-trained morphological change estimation model to obtain morphological change probability;
And finally confirming the predicted type of the target flying object according to the morphological change probability to obtain the target identification type of the target flying object.
In one possible implementation manner, the morphology change information includes ratio information of morphology change size to change time, and the determining morphology change information of the target flying object according to the plurality of target flying object images includes:
Determining a maximum value of the shape change of the target flying object and a corresponding time interval according to a plurality of target flying object images, wherein the shape of the target flying object comprises the length or the width of the target flying object;
and determining the ratio information of the form change size and the change time according to the maximum value of the form change of the target flying object and the corresponding time interval.
In one possible implementation manner, the morphological change information includes morphological change rate information, and the determining the morphological change information of the target flying object according to the plurality of target flying object images includes:
determining a maximum value and a minimum value of a target flying object form according to a plurality of target flying object images, wherein the target flying object form comprises the length or the width of a target flying object;
And determining the shape and size change rate information according to the shape maximum value and the shape minimum value of the target flying object.
In a possible implementation manner, the morphology change information includes ratio information of morphology change size to change time and morphology change rate information, and the step of inputting the morphology change information of the target flying object into a pre-trained morphology change estimation model to obtain morphology change probability includes:
inputting the ratio information of the shape change size to the change time and the shape change rate information into a pre-trained shape change estimation model to obtain shape change probability, wherein the shape change estimation model meets the following expression:
Wherein Bsxf is the morphological change probability, Spxs is the ratio of morphological change size to change time, Bpxs is the morphological change rate, F1 and F2 are the corresponding calculation weights, A is the attitude angle of the target flying object, A0 is a reference attitude angle, and Br is a probability correction constant that is positively correlated with the wind speed at the time of shooting by the imaging device.
In a possible implementation manner, the performing object segmentation on the sequence frame image based on the gray value of the image to obtain a plurality of object flying object images includes:
Acquiring a distribution curve graph of the number of pixels of each frame of original gray level image and gray level value in the sequence frame images;
Determining a gray value threshold according to the distribution curve graph, wherein the gray value threshold is a gray value corresponding to a point with the maximum slope of a curve section before a peak of the distribution curve graph;
Carrying out gray scale assignment on pixel points of each frame of original gray scale image according to the gray scale value threshold value to obtain a gray scale reconstruction image;
And carrying out target segmentation on the gray scale reconstruction image of each frame of original gray scale image to obtain a target flying object image.
In a possible implementation manner, the performing gray scale assignment on the pixel points of each frame of the original gray scale image according to the gray scale value threshold value to obtain a gray scale reconstruction image includes:
Carrying out numerical comparison on the gray value of each pixel point of the original gray image and the gray value threshold;
Performing first gray value assignment on pixel points with gray values larger than or equal to the gray value threshold in an original gray image, and performing second gray value assignment on pixel points with gray values smaller than the gray value threshold, wherein the difference value between the first gray value and the second gray value is larger than the maximum difference value between the gray values of the pixel points in the original gray image;
And carrying out first gray value and second gray value assignment on each frame of original gray image to obtain a corresponding gray reconstruction image.
In one possible implementation manner, the performing object segmentation on the gray scale reconstruction image of each frame of the original gray scale image to obtain a target flying object image includes:
carrying out boundary feature recognition on the gray scale reconstruction image of each frame of original gray scale image to obtain a boundary line of a target flying object;
and carrying out target segmentation according to the boundary line of the target flying object to obtain the target flying object image.
In one possible implementation, the method further includes:
and under the condition that the prediction probability corresponding to the target flying object prediction type is larger than a first probability threshold, taking the target flying object prediction type obtained by convolutional neural network feature recognition as a target recognition type.
In one possible implementation manner, the final confirming the predicted type of the target flying object according to the morphological change probability to obtain the target identification type of the target flying object includes:
Under the condition that the predicted type of the target flying object is an unmanned aerial vehicle, correcting the predicted type of the target flying object and taking the flying bird as the target identification type of the target flying object if the morphological change probability is larger than or equal to a second probability threshold value; if the morphological change probability is smaller than a second probability threshold, confirming the predicted type of the target flying object, and taking the unmanned aerial vehicle as the target recognition type of the target flying object;
Under the condition that the predicted type of the target flying object is a flying bird, if the shape change probability is larger than or equal to a second probability threshold value, the predicted type of the target flying object is confirmed, and the flying bird is used as the target identification type of the target flying object; and if the morphological change probability is smaller than a second probability threshold, correcting the predicted type of the target flying object, and taking the unmanned aerial vehicle as the target recognition type of the target flying object.
In a second aspect, an embodiment of the present application further provides a system for identifying an object of flight, including a memory and a processor, where the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method according to the first aspect.
Compared with the prior art, the image-processing-based flying object identification method provided by the embodiments of the application first acquires a sequence of frame images captured by an imaging device and performs target segmentation on the sequence based on image gray values to obtain a plurality of target flying object images. The target flying object images are then input into a pre-trained convolutional neural network for feature recognition to obtain the prediction type of the target flying object and its corresponding prediction probability. When the prediction probability corresponding to the predicted type is less than or equal to a first probability threshold, morphological change information of the target flying object is determined from the plurality of target flying object images; the morphological change information is input into a pre-trained morphological change estimation model to obtain a morphological change probability; and the predicted type is finally confirmed according to the morphological change probability to obtain the target identification type of the target flying object. By first performing binary gray segmentation on the original gray image to obtain the target flying object image, the target is enhanced and the background suppressed while the original outline of the flying object is preserved, which improves the accuracy of target flying object image acquisition. Preliminary recognition is then performed through convolutional neural network feature recognition, and finally morphological change prediction based on the morphological change information is used to correct or confirm the recognition result and obtain the target identification type, greatly improving the accuracy of flying object recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for identifying flying objects based on image processing according to some embodiments of the present application;
FIG. 2 is a flow chart of a method for identifying flying objects based on image processing according to other embodiments of the present application;
FIG. 3 is a graph showing the distribution of the number of pixels and gray values of an original gray image according to some embodiments of the present application;
Fig. 4 is a schematic hardware architecture of a flying object recognition system according to some embodiments of the application.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front, rear, etc.) in the embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between components in a particular posture (as shown in the drawings); if that posture changes, the directional indicator changes accordingly.
Furthermore, the descriptions "first", "second", etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, "and/or" throughout this document covers three schemes: taking A and/or B as an example, it includes scheme A alone, scheme B alone, and a scheme in which both A and B are satisfied. The technical solutions of the embodiments may also be combined with each other, provided that the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed by the present invention.
Identifying and indicating flying objects is an important function in security monitoring, mixed (augmented) reality and other applications. Fast, intelligent recognition of a flying object and indication of its position are the basis for any subsequent handling. In the related art, when the background color of the current shot is close to the color of the flying object, recognition accuracy is poor and user requirements cannot be met.
For example, if a bird or an unmanned aerial vehicle is black or dark-colored against a dark background, its outline features cannot be accurately identified and extracted, so flying object recognition accuracy is poor.
To address these problems, the application provides an image-processing-based flying object identification method. Its general approach is as follows: first, binary gray segmentation is performed on the original gray image to obtain the target flying object image, which enhances the target and suppresses the background while preserving the original outline of the flying object and thereby improves the accuracy of target image acquisition; preliminary recognition of the flying object is then performed through convolutional neural network feature recognition; finally, morphological change prediction based on morphological change information is used to correct or confirm the recognition result and obtain the target identification type, greatly improving recognition accuracy in dark environments.
As shown in Figs. 1-2, the specific steps of the image-processing-based flying object identification method are described below. It should be understood that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order. Referring to Fig. 1, the method comprises the following steps:
S100, acquiring sequence frame images shot by an image pickup device, wherein the sequence frame images are image sequence combinations formed by images continuously shot by the image pickup device within a preset duration according to time sequence;
Specifically, the imaging device may be a wide-angle camera with a large shooting range, for example one capable of capturing images over a 120-degree field of view; to acquire more images of the target object in different states, the imaging device may also be configured so that its shooting angle follows the movement of the target. The sequence frame images are an image sequence combination formed, in time order, from images continuously captured by the imaging device within a preset duration, for example a plurality of images captured continuously within 10 s and combined chronologically. In this way, target flying object images in different states can be extracted from the sequence and the morphological changes of the target analyzed, and feature recognition over these different states improves the accuracy of flying object identification.
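A minimal sketch of this acquisition step, assuming an OpenCV-accessible camera; the function name and the 10 s / 30 fps defaults are illustrative, not taken from the patent:

```python
import cv2

def capture_sequence(camera_index=0, duration_s=10.0, to_gray=True):
    """Capture a time-ordered sequence of frames over a preset duration."""
    cap = cv2.VideoCapture(camera_index)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if the camera reports 0
    frames = []
    for _ in range(int(duration_s * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        if to_gray:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(frame)
    cap.release()
    return frames                                # image sequence combination in time order
```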
S200, performing target segmentation on the sequence frame images based on the gray values of the images to obtain a plurality of target flying object images;
It can be understood that flying object images in a dark background area are difficult to distinguish, but their gray values are still lower than those of most of the background and they retain certain edge characteristics; therefore, the embodiments of the application perform target segmentation on the sequence frame images according to image gray values to obtain a plurality of target flying object images.
In one embodiment, the step S200 of performing object segmentation on the sequence frame images based on the gray values of the images to obtain a plurality of object images includes:
s210, acquiring a distribution curve graph of the number of pixels and gray values of each frame of original gray image in the sequence frame images;
S220, determining a gray value threshold according to the distribution curve, wherein the gray value threshold is a gray value corresponding to a point with the maximum slope of a curve section before a peak of the distribution curve;
s230, carrying out gray scale assignment on pixel points of each frame of original gray scale image according to the gray scale value threshold value to obtain a gray scale reconstruction image;
S240, carrying out target segmentation on the gray scale reconstruction image of each frame of original gray scale image to obtain a target flying object image.
Specifically, firstly, a distribution curve graph of the number of pixels and the gray value of each frame of original gray image is obtained, then a gray value threshold is determined according to the distribution curve graph, gray assignment is carried out on the pixels of each frame of original gray image according to the gray value threshold to obtain a gray reconstructed image, and finally, target segmentation is carried out on the gray reconstructed image of each frame of original gray image to obtain a target flying object image.
It should be noted that selecting a suitable gray value threshold is key to image segmentation. As shown in Fig. 3, in the captured original image the pixel counts at both ends of the gray-level histogram fall off markedly, and the pixels whose gray values lie in the range 0-10 form a peak corresponding to the dominant gray level of the dark background. The flying object occupies few pixels and its gray level is lower than that of most of the background, so it lies to the left of the peak; its gray level is close to that of the peak, but the pixel counts differ greatly, so the peak presents a steep leading slope, and the steepest point of that slope corresponds to the optimal gray value threshold for separating the flying object from the background. The application therefore takes the gray value at the point of maximum slope on the curve segment before the peak of the distribution graph (the position of point A in Fig. 3) as the gray value threshold, i.e. the gray value where the curve rises most steeply before reaching its peak, which separates the target image from the background image and greatly improves the accuracy of target flying object image acquisition. A sketch of this threshold selection is given below.
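A sketch of the histogram-based threshold selection, assuming NumPy and an 8-bit grayscale frame; the guard for a peak at gray level 0 is an added assumption:

```python
import numpy as np

def gray_threshold_from_histogram(gray_img):
    """Return the gray value at the point of maximum slope on the curve
    segment before the histogram peak (point A in Fig. 3)."""
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 256))
    peak = int(np.argmax(hist))           # dominant (background) gray level
    if peak == 0:                         # degenerate histogram: no leading slope
        return 0
    slopes = np.diff(hist[:peak + 1])     # rise of the curve before the peak
    return int(np.argmax(slopes))         # steepest point -> gray value threshold T
```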
In one embodiment, the step S230 of performing gray scale assignment on pixels of each frame of the original gray scale image according to the gray scale value threshold value to obtain a gray scale reconstructed image includes:
s2310, carrying out numerical comparison on the gray value of each pixel point of the original gray image and the gray value threshold;
S2320, performing first gray value assignment on pixel points with gray values larger than or equal to the gray value threshold in the original gray image, and performing second gray value assignment on pixel points with gray values smaller than the gray value threshold, wherein the difference value between the first gray value and the second gray value is larger than the maximum difference value between the gray values of the pixel points in the original gray image;
And S2330, assigning a first gray value and a second gray value to each frame of original gray image to obtain a corresponding gray reconstructed image.
Specifically, after the gray value threshold T is obtained, the gray value Tij of each pixel can be compared with T: pixels with values greater than or equal to T are assigned a large gray value (e.g. 255) and pixels with values below T a small gray value (e.g. 0), yielding a binary image, i.e. the gray reconstruction image.
Equivalently, the assignment may be inverted: pixels above the threshold T receive the small gray value (e.g. 0) and pixels below it the large gray value (e.g. 255), which likewise yields a gray reconstruction image. Reassigning the gray values of every frame in this way produces the gray reconstruction images, and because the reassignment deliberately widens the gap between the gray values of the target and the background, the two are separated effectively and the accuracy of target segmentation improves.
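A minimal sketch of the gray reassignment, assuming NumPy; whether the target ends up at 255 or 0 depends on which of the two assignments above is chosen:

```python
import numpy as np

def gray_reconstruct(gray_img, threshold, invert=False):
    """Binarize around T: by default pixels >= T get 255 and pixels < T get 0,
    maximizing the gray-value gap between target and background."""
    high, low = (0, 255) if invert else (255, 0)
    return np.where(gray_img >= threshold, np.uint8(high), np.uint8(low))
```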
In one embodiment, the step S240 of performing object segmentation on the gray-scale reconstructed image of each frame of the original gray-scale image to obtain an object image includes:
s2410, performing boundary feature recognition on the gray scale reconstruction image of each frame of original gray scale image to obtain a boundary line of a target flying object;
s2420, performing target segmentation according to the boundary line of the target flying object to obtain the target flying object image.
Specifically, after the gray reconstruction image of each frame is obtained, boundary feature recognition is first performed to obtain the boundary line of the target flying object, and then target segmentation is performed along that boundary line to obtain the target flying object image. Boundary line recognition may use an existing boundary recognition algorithm and is not detailed here; a contour-based sketch is given below.
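A contour-based sketch of this step using OpenCV's findContours; the min_area filter and the assumption that the target was assigned the high gray value are mine, not the patent's:

```python
import cv2

def segment_targets(reconstructed_img, min_area=20):
    """Recognize boundary lines in the gray reconstruction image and crop each
    bounded region as a candidate target flying object image."""
    contours, _ = cv2.findContours(reconstructed_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        if cv2.contourArea(c) < min_area:       # discard isolated noise pixels
            continue
        x, y, w, h = cv2.boundingRect(c)
        crops.append(reconstructed_img[y:y + h, x:x + w])
    return crops
```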
S300, inputting the plurality of target flying object images into a pre-trained convolutional neural network for feature recognition to obtain a prediction type and a corresponding prediction probability of the target flying object, wherein the prediction type is one of an unmanned aerial vehicle and a flying bird;
Specifically, during target recognition the convolutional neural network (CNN) continuously adjusts its network parameters through back-propagation, gradient descent and other optimization methods so that its output approaches the true label. With a large amount of training data, the CNN learns the mapping from an image to a target object type, enabling recognition and prediction of the target object and outputting a prediction type with its corresponding prediction probability.
By way of example, a convolutional neural network (CNN) for distinguishing unmanned aerial vehicles from birds can be obtained by training on a large amount of unmanned aerial vehicle and bird image data.
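A small PyTorch sketch of such a binary drone/bird classifier; the architecture, the 64×64 input size and the class names are illustrative placeholders, not the network described in the patent:

```python
import torch
import torch.nn as nn

class FlyerCNN(nn.Module):
    """Small two-class classifier (unmanned aerial vehicle vs. bird) for gray crops."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

    def forward(self, x):                        # x: (N, 1, 64, 64) gray crops
        return self.head(self.features(x))

def predict_type(model, img_batch):
    """Return (prediction type, prediction probability) per target flying object image."""
    labels = ['unmanned aerial vehicle', 'bird']
    with torch.no_grad():
        probs = torch.softmax(model(img_batch), dim=1)
        conf, idx = probs.max(dim=1)
    return [(labels[i], float(c)) for i, c in zip(idx, conf)]
```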
S400, determining morphological change information of the target flying object according to the plurality of target flying object images under the condition that the prediction probability corresponding to the target flying object prediction type is smaller than or equal to a first probability threshold value;
It can be understood that when the prediction probability obtained by the convolutional neural network (CNN) is greater than a probability threshold (e.g. the first probability threshold), the recognition is sufficiently reliable and the predicted type obtained from convolutional neural network feature recognition can be used directly as the target identification type; when the prediction probability is less than or equal to that threshold, the recognition is less reliable and must be corrected or verified by other means to improve accuracy.
Specifically, when the prediction probability corresponding to the predicted type of the target flying object is smaller than or equal to the first probability threshold, the form change information of the target flying object is determined according to the plurality of target flying object images, so that further identification judgment can be performed through the form change information of the target flying object.
In one embodiment, the morphology change information includes a ratio of morphology change size to change time, and the step S400 includes determining morphology change information of the target flying object according to the plurality of target flying object images, including:
S410, determining the maximum value of the shape change of the target flying object and the corresponding time interval according to a plurality of target flying object images, wherein the shape of the target flying object comprises the length or the width of the target flying object;
S420, determining the ratio information of the shape change size and the change time according to the maximum value of the shape change of the target flying object and the corresponding time interval.
Specifically, the maximum morphological change of the target flying object and the corresponding time interval are obtained from the plurality of target flying object images; the ratio of this maximum change to the corresponding time interval is the ratio of morphological change size to change time.
By way of example, the maximum change in the length of the target flying object and the corresponding time interval are obtained from the multiple images, i.e. the rate of length change is obtained. Since an unmanned aerial vehicle rarely changes shape in flight and does so slowly, while a bird changes shape frequently and quickly, a large morphological (length) change rate suggests a bird and a small one suggests an unmanned aerial vehicle.
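A sketch of this ratio computation over per-frame length (or width) measurements and their timestamps; interpreting the "maximum morphological change" as the spread between the largest and smallest measurement is my assumption:

```python
def change_size_to_time_ratio(sizes, timestamps):
    """Largest observed change in apparent length/width divided by the time
    interval over which that change occurred (e.g. cm per second)."""
    i_min = min(range(len(sizes)), key=lambda k: sizes[k])
    i_max = max(range(len(sizes)), key=lambda k: sizes[k])
    dt = abs(timestamps[i_max] - timestamps[i_min])
    return (sizes[i_max] - sizes[i_min]) / dt if dt > 0 else 0.0
```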
In another embodiment, the morphology change information includes morphology size change rate information, and the step S400 of determining morphology change information of the target flying object according to the plurality of target flying object images includes:
s430, determining a maximum value and a minimum value of a target flying object shape according to a plurality of target flying object images, wherein the target flying object shape comprises the length or the width of a target flying object;
S440, determining the shape and size change rate information according to the maximum shape and minimum shape of the target flying object.
Specifically, the maximum and minimum values of the target flying object's morphology are obtained from the plurality of target flying object images; the ratio of their difference to the minimum value is the morphology size change rate.
By way of example, the maximum and minimum widths of the target flying object are obtained from the multiple target flying object images, and the morphology size change rate (width change rate) is calculated from them.
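A corresponding sketch for the morphology size change rate described above:

```python
def morphology_change_rate(sizes):
    """(max - min) / min over the per-frame length or width measurements."""
    s_max, s_min = max(sizes), min(sizes)
    return (s_max - s_min) / s_min if s_min > 0 else 0.0
```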
S500, inputting the morphological change information of the target flying object into a pre-trained morphological change estimation model to obtain morphological change probability;
It will be appreciated that, since the morphological change information of the target flying object is acquired through image recognition, its accuracy is affected by several factors, for example the accuracy with which the target flying object image is extracted. A morphological change measured by image recognition is therefore not necessarily caused by an actual change in the flying object's shape; it may be caused by recognition error. To reduce the influence of such errors, the morphological change information is input into a pre-trained morphological change estimation model to obtain a morphological change probability: a high probability indicates that the flying object very likely did change shape, while a low probability indicates that it very likely did not.
In one embodiment, the morphology change information includes morphology change size and change time ratio information and morphology change rate information, and the step S500 includes inputting the morphology change information of the target flying object into a pre-trained morphology change estimation model to obtain morphology change probability, including:
S510, inputting the ratio information of morphological change size to change time and the morphological change rate information into the pre-trained morphological change estimation model to obtain the morphological change probability, wherein the morphological change estimation model satisfies the following expression:
wherein Bsxf is the morphological change probability, Spxs is the ratio of morphological change size to change time, Bpxs is the morphological change rate, F1 and F2 are the corresponding calculation weights, A is the attitude angle of the target flying object, A0 is a reference attitude angle, and Br is a probability correction constant (greater than 0) that is positively correlated with the wind speed at the time of shooting by the imaging device.
It should be noted that Spxs is expressed in cm/s, the attitude angle of the target flying object may be an elevation or depression angle, and the expression of the morphological change estimation model is a mathematical model fitted from multiple sets of raw data (values only, with units disregarded).
It can be understood that the larger the ratio of morphological change size to change time, and the larger the morphological change rate, the higher the probability that the target flying object has changed shape; conversely, the greater the elevation or depression angle of the target flying object, the harder it is to adjust its posture, so the probability of a morphological change is lower.
It should further be noted that wind speed has some influence on the calculation of the morphological change probability: a strong wind can partially unfold the shape of the target flying object, inflating the morphological change measured by image recognition and therefore the probability computed by the estimation model. The raw model output consequently needs to be corrected, and the stronger the wind, the larger the correction amount (probability correction constant Br), so that the morphological change probability of the target flying object can be obtained accurately. An illustrative sketch of such a model is given below.
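Since the expression is only described by its variables here, the sketch below is an illustrative stand-in consistent with the stated qualitative relations (probability rises with Spxs and Bpxs, falls as the attitude angle A grows relative to A0, and is corrected downward by a wind-dependent Br); the functional form, the default weights and the k_wind factor are assumptions, not the patent's formula:

```python
def morphology_change_probability(spxs, bpxs, a, a0, wind_speed,
                                  f1=0.5, f2=0.5, k_wind=0.01):
    """Illustrative form only: Bsxf from Spxs, Bpxs, attitude angle and wind speed."""
    br = k_wind * wind_speed                     # probability correction constant, > 0
    raw = (f1 * spxs + f2 * bpxs) * (a0 / max(a, 1e-6)) - br
    return min(max(raw, 0.0), 1.0)               # clamp to a valid probability
```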
And S600, finally confirming the predicted type of the target flying object according to the morphological change probability to obtain the target identification type of the target flying object.
It can be understood that, since an unmanned aerial vehicle is unlikely to change shape during flight while a bird is likely to, a high morphological change probability means the target flying object is more likely a bird, and a low probability means it is more likely an unmanned aerial vehicle.
In one embodiment, the step S600 of finally confirming the predicted type of the target flying object according to the morphological change probability to obtain the target identification type of the target flying object includes:
S610, correcting the predicted type of the target flying object and taking the flying bird as the target identification type of the target flying object if the shape change probability is greater than or equal to a second probability threshold value under the condition that the predicted type of the target flying object is an unmanned plane; if the morphological change probability is smaller than a second probability threshold, confirming the predicted type of the target flying object, and taking the unmanned aerial vehicle as the target recognition type of the target flying object;
Specifically, when the convolutional neural network (CNN) preliminarily identifies the predicted type of the target flying object as an unmanned aerial vehicle: if the morphological change probability calculated by the morphological change estimation model is greater than or equal to a preset value (the second probability threshold), the target is more likely a bird, so the predicted type is corrected and the bird is taken as the target identification type; if the morphological change probability is smaller than the preset value, the target is more likely an unmanned aerial vehicle, and the unmanned aerial vehicle is finally confirmed as the target identification type. Confirming or correcting the target identification type according to the morphological change probability on top of the preliminary CNN recognition greatly improves flying object recognition accuracy.
S620, if the predicted type of the target flying object is a flying bird, if the shape change probability is greater than or equal to a second probability threshold value, confirming the predicted type of the target flying object, and taking the flying bird as the target identification type of the target flying object; and if the morphological change probability is smaller than a second probability threshold, correcting the predicted type of the target flying object, and taking the unmanned aerial vehicle as the target recognition type of the target flying object.
Specifically, when the convolutional neural network (CNN) preliminarily identifies the predicted type of the target flying object as a bird: if the morphological change probability calculated by the morphological change estimation model is greater than or equal to the preset value (the second probability threshold), the target is more likely a bird, and the bird is finally confirmed as the target identification type; if the morphological change probability is smaller than the preset value, the target is more likely an unmanned aerial vehicle, so the predicted type is corrected and the unmanned aerial vehicle is taken as the target identification type. In both cases, the target identification type is confirmed or corrected according to the morphological change probability on top of the preliminary CNN recognition, greatly improving flying object recognition accuracy. The decision logic is sketched below.
Based on the above, the image-processing-based flying object identification method first performs binary segmentation on the original gray image to obtain the target flying object image, which enhances the target and suppresses the background while preserving the original outline of the flying object and thereby improves the accuracy of target image acquisition; preliminary recognition is then performed through convolutional neural network feature recognition; finally, morphological change prediction based on the morphological change information is used to correct or confirm the recognition result and obtain the target identification type, greatly improving the accuracy of flying object recognition.
An embodiment of the application further provides a flying object identification system. Referring to Fig. 4, a schematic diagram of the hardware structure of a flying object identification system according to some embodiments of the application, the system includes a memory 110 and a processor 120; the memory 110 is configured to store program code, and the processor 120 is configured to invoke the program code to perform the method described above.
Wherein the processor 120 is configured to provide computing and control capabilities to control the flyer recognition system to perform corresponding tasks, for example, to control the flyer recognition system to perform the image processing-based flyer recognition method of any of the method embodiments described above, the method comprising: acquiring a sequence frame image shot by an image pickup device, wherein the sequence frame image is an image sequence combination formed by images continuously shot by the image pickup device within a preset duration according to a time sequence; performing target segmentation on the sequence frame images based on the gray values of the images to obtain a plurality of target flying object images; inputting the plurality of target flying object images into a pre-trained convolutional neural network for feature recognition to obtain a prediction type and a corresponding prediction probability of the target flying object, wherein the prediction type is one of an unmanned plane and a flying bird; determining morphological change information of the target flying object according to the plurality of target flying object images under the condition that the prediction probability corresponding to the target flying object prediction type is smaller than or equal to a first probability threshold value; inputting the morphological change information of the target flying object into a pre-trained morphological change estimation model to obtain morphological change probability; and finally confirming the predicted type of the target flying object according to the morphological change probability to obtain the target identification type of the target flying object.
Processor 120 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a hardware chip, or any combination thereof; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory 110 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the image processing-based method for identifying a flying object according to an embodiment of the present application. The processor 120 may implement the image processing-based method of identifying flying objects in any of the method embodiments described above by running non-transitory software programs, instructions, and modules stored in the memory 110.
In particular, memory 110 may include volatile memory (VM), such as random access memory (RAM); memory 110 may also include non-volatile memory (NVM), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), a solid-state drive (SSD) or other non-transitory solid-state storage device; memory 110 may also include a combination of the above types of memory.
In summary, the flying object identification system of the present application adopts the technical solution of any of the above embodiments of the image-processing-based flying object identification method, and therefore has at least the beneficial effects brought by those technical solutions, which are not repeated here.
Embodiments of the present application also provide a computer-readable storage medium, such as a memory, including program code executable by a processor to perform the image-processing-based flying object identification method in the above embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present application also provide a computer program product comprising one or more program codes stored in a computer-readable storage medium. The processor of the flyer identification system reads the program code from the computer readable storage medium, and the processor executes the program code to perform the image processing-based flyer identification method steps provided in the above-described embodiments.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by program code related hardware, where the program may be stored in a computer readable storage medium, where the storage medium may be a read only memory, a magnetic disk or optical disk, etc.
It should be noted that the above-described apparatus embodiments are merely illustrative, and the units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program may include processes implementing the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random-access Memory (Random Access Memory, RAM), or the like.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structural changes made by the description of the present invention and the accompanying drawings or direct/indirect application in other related technical fields are included in the scope of the invention.

Claims (10)

1. A method for identifying a flying object based on image processing, the method comprising:
Acquiring a sequence frame image shot by an image pickup device, wherein the sequence frame image is an image sequence combination formed by images continuously shot by the image pickup device within a preset duration according to a time sequence;
performing target segmentation on the sequence frame images based on the gray values of the images to obtain a plurality of target flying object images;
inputting the plurality of target flying object images into a pre-trained convolutional neural network for feature recognition to obtain a prediction type and a corresponding prediction probability of the target flying object, wherein the prediction type is one of an unmanned aerial vehicle and a flying bird;
Determining morphological change information of the target flying object according to the plurality of target flying object images under the condition that the prediction probability corresponding to the target flying object prediction type is smaller than or equal to a first probability threshold value;
inputting the morphological change information of the target flying object into a pre-trained morphological change estimation model to obtain morphological change probability;
And finally confirming the predicted type of the target flying object according to the morphological change probability to obtain the target identification type of the target flying object.
2. The method for identifying a flying object based on image processing according to claim 1, wherein the morphological change information includes a ratio information of a morphological change size to a change time, and the determining the morphological change information of the target flying object from the plurality of target flying object images includes:
Determining a maximum value of the shape change of the target flying object and a corresponding time interval according to a plurality of target flying object images, wherein the shape of the target flying object comprises the length or the width of the target flying object;
and determining the ratio information of the form change size and the change time according to the maximum value of the form change of the target flying object and the corresponding time interval.
3. The method for identifying a flying object based on image processing according to claim 1, wherein the morphological change information includes morphological change rate information, and wherein the determining the morphological change information of the target flying object from the plurality of target flying object images includes:
determining a maximum value and a minimum value of a target flying object form according to a plurality of target flying object images, wherein the target flying object form comprises the length or the width of a target flying object;
And determining the shape and size change rate information according to the shape maximum value and the shape minimum value of the target flying object.
4. The method for identifying a flying object based on image processing according to claim 1, wherein the shape change information includes shape change size and change time ratio information and shape change rate information, and the step of inputting the shape change information of the target flying object into a pre-trained shape change estimation model to obtain the shape change probability comprises the steps of:
inputting the ratio information of the shape change size to the change time and the shape change rate information into a pre-trained shape change estimation model to obtain shape change probability, wherein the shape change estimation model meets the following expression:
Wherein Bsxf is the morphological change probability, Spxs is the ratio of morphological change size to change time, Bpxs is the morphological change rate, F1 and F2 are the corresponding calculation weights, A is the attitude angle of the target flying object, A0 is a reference attitude angle, and Br is a probability correction constant that is positively correlated with the wind speed at the time of shooting by the imaging device.
5. The method for identifying a flying object based on image processing according to claim 1, wherein the performing object segmentation on the sequence frame image based on the gray value of the image to obtain a plurality of object flying object images comprises:
Acquiring a distribution curve graph of the number of pixels of each frame of original gray level image and gray level value in the sequence frame images;
Determining a gray value threshold according to the distribution curve graph, wherein the gray value threshold is a gray value corresponding to a point with the maximum slope of a curve section before a peak of the distribution curve graph;
Carrying out gray scale assignment on pixel points of each frame of original gray scale image according to the gray scale value threshold value to obtain a gray scale reconstruction image;
And carrying out target segmentation on the gray scale reconstruction image of each frame of original gray scale image to obtain a target flying object image.
6. The method for identifying a flying object based on image processing according to claim 5, wherein performing gray scale assignment on pixels of each frame of original gray scale image according to the gray scale value threshold value to obtain a gray scale reconstruction image comprises:
Carrying out numerical comparison on the gray value of each pixel point of the original gray image and the gray value threshold;
Performing first gray value assignment on pixel points with gray values larger than or equal to the gray value threshold in an original gray image, and performing second gray value assignment on pixel points with gray values smaller than the gray value threshold, wherein the difference value between the first gray value and the second gray value is larger than the maximum difference value between the gray values of the pixel points in the original gray image;
And carrying out first gray value and second gray value assignment on each frame of original gray image to obtain a corresponding gray reconstruction image.
7. The method for identifying a flying object based on image processing according to claim 5, wherein said performing object segmentation on the gray scale reconstructed image of each frame of the original gray scale image to obtain the object image comprises:
carrying out boundary feature recognition on the gray scale reconstruction image of each frame of original gray scale image to obtain a boundary line of a target flying object;
and carrying out target segmentation according to the boundary line of the target flying object to obtain the target flying object image.
8. The image processing-based flyer identification method of claim 1, further comprising:
and under the condition that the prediction probability corresponding to the target flying object prediction type is larger than a first probability threshold, taking the target flying object prediction type obtained by convolutional neural network feature recognition as a target recognition type.
9. The method for identifying a flying object based on image processing according to claim 1, wherein the performing final confirmation on the predicted type of the target flying object according to the morphological change probability to obtain the target recognition type of the target flying object comprises:
in a case that the predicted type of the target flying object is an unmanned aerial vehicle, if the morphological change probability is greater than or equal to a second probability threshold, correcting the predicted type of the target flying object and taking a flying bird as the target recognition type of the target flying object; and if the morphological change probability is less than the second probability threshold, confirming the predicted type of the target flying object and taking the unmanned aerial vehicle as the target recognition type of the target flying object;
in a case that the predicted type of the target flying object is a flying bird, if the morphological change probability is greater than or equal to the second probability threshold, confirming the predicted type of the target flying object and taking the flying bird as the target recognition type of the target flying object; and if the morphological change probability is less than the second probability threshold, correcting the predicted type of the target flying object and taking an unmanned aerial vehicle as the target recognition type of the target flying object.
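The two-branch confirmation of claim 9 reduces to the decision function sketched below (illustrative Python; the string labels are placeholders for whatever type encoding the system uses).

```python
def confirm_target_type(predicted_type: str,
                        morphological_change_probability: float,
                        second_probability_threshold: float) -> str:
    """Confirm or correct the predicted type using the morphological change
    probability, following the two cases of claim 9."""
    changes_shape = morphological_change_probability >= second_probability_threshold
    if predicted_type == "unmanned aerial vehicle":
        # A target whose shape keeps changing is more plausibly a bird.
        return "flying bird" if changes_shape else "unmanned aerial vehicle"
    if predicted_type == "flying bird":
        # A rigid, non-deforming target is more plausibly an unmanned aerial vehicle.
        return "flying bird" if changes_shape else "unmanned aerial vehicle"
    return predicted_type
```

In both branches the outcome depends only on whether the target deforms, so the morphological cue effectively overrides the convolutional neural network prediction whenever the two disagree.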
10. A flying object identification system, comprising a memory and a processor, wherein the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method of any one of claims 1 to 9.
CN202410505939.7A 2024-04-25 2024-04-25 Flying object identification method and system based on image processing Pending CN118097526A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410505939.7A CN118097526A (en) 2024-04-25 2024-04-25 Flying object identification method and system based on image processing

Publications (1)

Publication Number Publication Date
CN118097526A true CN118097526A (en) 2024-05-28

Family

ID=91163293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410505939.7A Pending CN118097526A (en) 2024-04-25 2024-04-25 Flying object identification method and system based on image processing

Country Status (1)

Country Link
CN (1) CN118097526A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
WO2021114892A1 (en) * 2020-05-29 2021-06-17 平安科技(深圳)有限公司 Environmental semantic understanding-based body movement recognition method, apparatus, device, and storage medium
US20210390728A1 (en) * 2021-01-21 2021-12-16 Beijing Baidu Netcom Science And Technology Co., Ltd. Object area measurement method, electronic device and storage medium
CN116630832A (en) * 2023-07-21 2023-08-22 江西现代职业技术学院 Unmanned aerial vehicle target recognition method, unmanned aerial vehicle target recognition system, computer and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘宜成 (Liu Yicheng): "基于轨迹和形态识别的无人机检测方法" [UAV detection method based on trajectory and shape recognition], 《计算机工程》 (Computer Engineering), 31 December 2020 (2020-12-31) *

Similar Documents

Publication Publication Date Title
CN111667520B (en) Registration method and device for infrared image and visible light image and readable storage medium
CN108388879B (en) Target detection method, device and storage medium
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
CN111696132B (en) Target tracking method, device, computer readable storage medium and robot
CN111080526B (en) Method, device, equipment and medium for measuring and calculating farmland area of aerial image
KR20180105876A (en) Method for tracking image in real time considering both color and shape at the same time and apparatus therefor
CN110400338B (en) Depth map processing method and device and electronic equipment
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN113239875B (en) Method, system and device for acquiring face characteristics and computer readable storage medium
CN112529827A (en) Training method and device for remote sensing image fusion model
CN112001883B (en) Optimization method and device for vehicle target image and computer equipment
CN116486288A (en) Aerial target counting and detecting method based on lightweight density estimation network
CN114727024A (en) Automatic exposure parameter adjusting method and device, storage medium and shooting equipment
CN111507340B (en) Target point cloud data extraction method based on three-dimensional point cloud data
CN112733672A (en) Monocular camera-based three-dimensional target detection method and device and computer equipment
US20220088455A1 (en) Golf ball set-top detection method, system and storage medium
US10467486B2 (en) Method for evaluating credibility of obstacle detection
CN118097526A (en) Flying object identification method and system based on image processing
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN113033256B (en) Training method and device for fingertip detection model
CN110288633B (en) Target tracking method and device, readable storage medium and electronic equipment
CN117455936B (en) Point cloud data processing method and device and electronic equipment
CN115965944B (en) Target information detection method, device, driving device and medium
CN110647890B (en) High-performance image feature extraction and matching method, system and storage medium
CN118311955A (en) Unmanned aerial vehicle control method, terminal, unmanned aerial vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination