CN112668454A - Bird micro-target identification method based on multi-sensor fusion - Google Patents
Bird micro-target identification method based on multi-sensor fusion
- Publication number
- CN112668454A (application CN202011554996.2A)
- Authority
- CN
- China
- Prior art keywords
- moment
- image
- sensor fusion
- target
- plane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a bird micro-target identification method based on multi-sensor fusion, belonging to the technical field of airport safety. First, a sample atlas is collected and preprocessed, Hu moment and Zernike moment features are extracted, a BP neural network is trained, and the trained network is stored. A target picture to be recognized is then given the same preprocessing, its Hu moment and Zernike moment features are extracted and input into the trained neural network, and recognition is performed with a nearest-neighbor classifier. The invention improves the calculation speed, the recognition efficiency of the traditional nearest-neighbor classification method, and the recognition accuracy for micro targets.
Description
Technical Field
The invention relates to a bird micro-target identification method based on multi-sensor fusion, and belongs to the technical field of airport safety.
Background
Relative to the speed of an aircraft, the flight speed of a bird is negligible, so the bird can be regarded as stationary while the aircraft closes on it at its own speed; the bird therefore strikes the aircraft with the impact of a bullet, which can have serious consequences.
Bird strikes are among the greatest threats to aviation safety: a bird striking an aircraft can shatter the windshield or be ingested by an engine, easily damaging the aircraft and causing fatalities.
Once an aircraft has climbed to cruising altitude, collisions with birds in flight are rare. During takeoff and climb, however, the aircraft passes through the altitudes at which birds fly, and this is precisely the phase in which its safety margin is lowest. Birds near airports therefore need to be driven away promptly, which in turn requires studying the morphology, activity patterns, and risk factors of the various species. Visual identification is time-consuming and inefficient, hence the need for a bird micro-target identification method based on multi-sensor fusion.
Disclosure of Invention
In order to solve the technical problem, the invention provides a bird micro-target identification method based on multi-sensor fusion, which comprises the following steps:
step one: performing two-dimensional planarization on the 1:1 bird simulation model, and establishing six plane image libraries: upper, lower, left, right, front and rear;
step two: preprocessing images in the six plane image libraries, firstly carrying out gray level processing, then carrying out binarization, and then carrying out normalization processing on the obtained images;
step three: simultaneously extracting Hu moment features and Zernike moment features from the image obtained in the second step, and establishing an image feature moment information database;
step four: respectively inputting the calculated image characteristic moment information of the Hu moment and the Zernike moment into two groups of BP neural networks, training the BP neural networks, and storing the trained BP neural networks;
step five: preprocessing bird target images to be recognized acquired by a plurality of sensors, extracting Hu moment features and Zernike moment features, respectively inputting the two kinds of feature moment information into the trained BP neural network in the third step, and recognizing by using a neighbor classifier.
Furthermore, the graying in step two uses a weighted average method: R, G and B are given different weights according to importance or other indexes, and the weighted average of the R, G and B values is taken as the gray value, i.e. R = G = B = C·R + A·G + B·B,
where A, C and B are the weights of the G, R and B channels respectively, and A > C > B.
Further, when A = 0.59, C = 0.30 and B = 0.11, i.e. R = G = B = 0.30·R + 0.59·G + 0.11·B, the most natural gray image is obtained.
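A minimal sketch of this weighted-average graying (illustrative only; the pixel values are made up) is:

```python
import numpy as np

# Weights from the description: A for G, C for R, B for B, with A > C > B
A, C, B = 0.59, 0.30, 0.11

def to_gray(rgb):
    """Weighted-average graying: every channel is replaced by C*R + A*G + B*B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return C * r + A * g + B * b

pixel = np.array([[[100.0, 200.0, 50.0]]])   # one RGB pixel
gray = to_gray(pixel)                        # 0.30*100 + 0.59*200 + 0.11*50 = 153.5
```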
Further, binarization divides the image f(x, y) into a recognition-target part and a non-target part: a threshold K is set, and the image data are divided into the pixels greater than K and the pixels less than K; the threshold K is determined with the maximum variance (Otsu) method.
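A sketch of maximum-variance threshold selection, in the style of Otsu's method (an exhaustive-search illustration, not the patent's code; the test image is made up):

```python
import numpy as np

def max_variance_threshold(img, levels=256):
    """Pick the threshold K that maximizes the between-class variance."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_k, best_var = 0, -1.0
    for k in range(1, levels):
        w0, w1 = p[:k].sum(), p[k:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(k) * p[:k]).sum() / w0
        mu1 = (np.arange(k, levels) * p[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_k = var, k
    return best_k

img = np.array([[10, 12, 11], [200, 210, 205]], dtype=np.uint8)
K = max_variance_threshold(img)
binary = (img >= K).astype(np.uint8)           # target vs. non-target parts
```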
Further, the normalization first applies a translation transformation to the image so that its first-order geometric moments m10 and m01 both become 0. The transformation maps f(x, y) to f1(x, y) = f(x + x̄, y + ȳ), where (x̄, ȳ) are the centroid coordinates of the image, x̄ = m10/m00 and ȳ = m01/m00, and m00, m10 and m01 are the zero-order and first-order geometric moments of the original image. The image f1(x, y) is then converted to f2(x, y) by a scaling transformation: with the total number of target pixels T prescribed, the scaling formula is f2(x, y) = f1(x/a, y/a), where the scale factor is a = (T/(m00 + m10))^(1/2), which reduces to (T/m00)^(1/2) since m10 = 0 after translation.
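As an illustration of the quantities this normalization uses, the sketch below computes the geometric moments, the centroid, and the scale factor for a small binary image (the image contents and the value of T are assumptions made for the example; the actual resampling of f1 into f2 is left aside):

```python
import numpy as np

def geometric_moment(img, p, q):
    """Raw geometric moment m_pq = sum over pixels of x^p * y^q * f(x, y)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float((x ** p * y ** q * img).sum())

# Binary image with a 4-pixel target placed off-centre
img = np.zeros((8, 8))
img[1:3, 5:7] = 1.0

m00 = geometric_moment(img, 0, 0)            # target area (4 pixels)
xc = geometric_moment(img, 1, 0) / m00       # centroid x = m10 / m00
yc = geometric_moment(img, 0, 1) / m00       # centroid y = m01 / m00

# Translation samples f at (x + xc, y + yc), moving the centroid to the origin;
# scaling by a = sqrt(T / m00) brings the target area to the prescribed total T.
T = 16.0
a = np.sqrt(T / m00)
```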
Further, the target images to be recognized acquired by the multiple sensors in step five are six plane images to be recognized, roughly selected from the multiple sensors.
Further, in step five the nearest-neighbor classifier uses 3 neighbors; 7 invariant moments are used for the Hu moments, and 12 higher-order moments for the Zernike moments.
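The 7 Hu invariant moments mentioned here are standard functions of the normalized central moments. A self-contained NumPy sketch (illustrative, not the patent's code) that also demonstrates their translation invariance:

```python
import numpy as np

def hu_moments(img):
    """Compute the 7 Hu invariant moments of a 2-D intensity image."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]

    def m(p, q):                 # raw geometric moment
        return (x ** p * y ** q * img).sum()

    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00

    def eta(p, q):               # normalized central moment
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = e20 + e02
    h2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    h3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    h4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
    h5 = ((e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    h6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
          + 4 * e11 * (e30 + e12) * (e21 + e03))
    h7 = ((3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
          - (e30 - 3 * e12) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

# The same rectangle at two positions yields identical Hu moments
a = np.zeros((16, 16)); a[2:5, 3:9] = 1
b = np.zeros((16, 16)); b[9:12, 6:12] = 1
```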
The beneficial effects of the invention are as follows: the method feeds the Hu moments and Zernike moments of the target to be identified into BP neural networks and performs recognition with a nearest-neighbor classifier, improving the calculation speed, the recognition efficiency of the traditional nearest-neighbor classification method, and the recognition accuracy for tiny targets.
Drawings
FIG. 1 is a logical block diagram of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views illustrating only the basic structure of the present invention in a schematic manner, and thus show only the constitution related to the present invention.
The method for identifying tiny bird targets based on multi-sensor fusion comprises the following steps: step one: performing two-dimensional planarization on the 1:1 bird simulation model, and establishing six plane image libraries: upper, lower, left, right, front and rear;
step two: preprocessing images in the six plane image libraries, firstly carrying out gray level processing, then carrying out binarization, and then carrying out normalization processing on the obtained images;
step three: simultaneously extracting Hu moment features and Zernike moment features from the image obtained in the second step, and establishing an image feature moment information database;
step four: respectively inputting the calculated image characteristic moment information of the Hu moment and the Zernike moment into two groups of BP neural networks, training the BP neural networks, and storing the trained BP neural networks;
step five: preprocessing bird target images to be recognized acquired by a plurality of sensors, extracting Hu moment features and Zernike moment features, respectively inputting the two kinds of feature moment information into the trained BP neural network in the third step, and recognizing by using a neighbor classifier.
The graying in step two uses a weighted average method: R, G and B are given different weights according to importance or other indexes, and the weighted average of the R, G and B values is taken as the gray value, i.e. R = G = B = C·R + A·G + B·B,
where A, C and B are the weights of the G, R and B channels respectively, and A > C > B. When A = 0.59, C = 0.30 and B = 0.11, the most natural gray image is obtained.
Binarization divides the image f(x, y) into a recognition-target part and a non-target part: a threshold K is set, and the image data are divided into the pixels greater than K and the pixels less than K; the threshold K is determined with the maximum variance (Otsu) method.
The normalization first applies a translation transformation to the image so that its first-order geometric moments m10 and m01 both become 0. The transformation maps f(x, y) to f1(x, y) = f(x + x̄, y + ȳ), where (x̄, ȳ) are the centroid coordinates of the image, x̄ = m10/m00 and ȳ = m01/m00, and m00, m10 and m01 are the zero-order and first-order geometric moments of the original image. The image f1(x, y) is then converted to f2(x, y) by a scaling transformation: with the total number of target pixels T prescribed, the scaling formula is f2(x, y) = f1(x/a, y/a), where the scale factor is a = (T/(m00 + m10))^(1/2), which reduces to (T/m00)^(1/2) since m10 = 0 after translation.
In step five, the target images to be identified acquired by the multiple sensors are six plane images to be identified, roughly selected from the multiple sensors. The nearest-neighbor classifier in step five uses 3 neighbors; 7 invariant moments are used for the Hu moments, and 12 higher-order moments for the Zernike moments.
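A minimal sketch of the 3-nearest-neighbor vote used in step five (the feature vectors and class labels are invented for illustration; Euclidean distance is an assumption, as the patent does not name a metric):

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query, k=3):
    """Classify by majority vote among the k nearest training feature vectors."""
    dists = np.linalg.norm(train_X - query, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                   # indices of the k closest
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy feature-moment vectors for two illustrative classes
train_X = np.array([[0.10, 0.10], [0.20, 0.20],
                    [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])
train_y = ["sparrow", "sparrow", "crow", "crow", "crow"]

label = knn_classify(train_X, train_y, np.array([0.82, 0.88]), k=3)
```

In the patent's pipeline the vectors compared would be the outputs obtained from the two trained BP networks rather than raw features.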
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.
Claims (7)
1. A bird micro-target identification method based on multi-sensor fusion, characterized by comprising:
Step one, obtaining a sample gallery: performing two-dimensional planarization on the bird 1:1 simulation model, and establishing six plane image libraries of an upper plane, a lower plane, a left plane, a right plane, a front plane and a rear plane;
step two, picture preprocessing: preprocessing images in the six plane image libraries, firstly carrying out gray level processing, then carrying out binarization processing, and then carrying out normalization processing on the obtained images;
step three, establishing a database: simultaneously extracting Hu moment features and Zernike moment features from the image obtained in the second step, and establishing an image feature moment information database;
step four, training a neural network: respectively inputting the calculated image characteristic moment information of the Hu moment and the Zernike moment into two groups of BP neural networks, training the BP neural networks, and storing the trained BP neural networks;
step five, identifying the target to be detected: preprocessing the bird target images to be recognized acquired by the multiple sensors, extracting Hu moment features and Zernike moment features, respectively inputting the two kinds of feature moment information into the trained BP neural networks from step four, and recognizing with a nearest-neighbor classifier.
2. The method for identifying the tiny targets of birds based on multi-sensor fusion as claimed in claim 1, wherein: the graying in step two uses a weighted average method, R, G and B are given different weights, and the weighted average of the R, G and B values is taken as the gray value, i.e. R = G = B = C·R + A·G + B·B, where A, C and B are the weights of the G, R and B channels respectively, and A > C > B.
3. The method for identifying the tiny targets of birds based on multi-sensor fusion as claimed in claim 2, wherein when A = 0.56, C = 0.31 and B = 0.13, a gray image is obtained.
4. The method for identifying the tiny targets of birds based on multi-sensor fusion as claimed in claim 1, wherein: the binarization step comprises the steps of dividing an image f (x, y) into an identification target part and a non-identification target part, setting a threshold value K, and dividing image data into two parts which are more than K and less than K; the threshold K is determined using the maximum variance method.
5. The method for identifying the tiny targets of birds based on multi-sensor fusion as claimed in claim 1, wherein: the normalization first applies a translation transformation to the image so that its first-order geometric moments m10 and m01 both become 0; the transformation maps f(x, y) to f1(x, y) = f(x + x̄, y + ȳ), where (x̄, ȳ) are the centroid coordinates of the image, x̄ = m10/m00 and ȳ = m01/m00, and m00, m10 and m01 are the zero-order and first-order geometric moments of the original image; the image f1(x, y) is then converted to f2(x, y) by a scaling transformation, the total number of target pixels T being prescribed and the scaling formula being f2(x, y) = f1(x/a, y/a), where the scale factor is a = (T/(m00 + m10))^(1/2).
6. The method for identifying the tiny targets of birds based on multi-sensor fusion as claimed in claim 1, wherein: in step five, the target images to be identified acquired by the multiple sensors are six plane images to be identified, roughly selected from the multiple sensors.
7. The method for identifying the tiny targets of birds based on multi-sensor fusion as claimed in claim 1, wherein: the nearest-neighbor classifier in step five uses 3 neighbors; 7 invariant moments are used for the Hu moments, and 12 higher-order moments for the Zernike moments.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011554996.2A CN112668454A (en) | 2020-12-25 | 2020-12-25 | Bird micro-target identification method based on multi-sensor fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112668454A true CN112668454A (en) | 2021-04-16 |
Family
ID=75408569
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011554996.2A Pending CN112668454A (en) | 2020-12-25 | 2020-12-25 | Bird micro-target identification method based on multi-sensor fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112668454A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103984936A (en) * | 2014-05-29 | 2014-08-13 | 中国航空无线电电子研究所 | Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition |
CN104102920A (en) * | 2014-07-15 | 2014-10-15 | 中国科学院合肥物质科学研究院 | Pest image classification method and pest image classification system based on morphological multi-feature fusion |
US20140321718A1 (en) * | 2013-04-24 | 2014-10-30 | Accenture Global Services Limited | Biometric recognition |
CN107330405A (en) * | 2017-06-30 | 2017-11-07 | 上海海事大学 | Remote sensing images Aircraft Target Recognition based on convolutional neural networks |
Non-Patent Citations (3)
Title |
---|
Jian Liqiong: "Character recognition based on Hu moments and Zernike moments", Science & Technology Information, no. 17, 15 June 2009, p. 454 |
Zhao Dandan: "Research on the application of multi-sensor data fusion in target recognition", China Master's Theses Full-text Database, Information Science and Technology, no. 4, 15 April 2008, p. 1 |
Zou Xiuguo; Ding Weimin; Liu Deying; Zhao Sanqin: "Classification of rice planthoppers based on four invariant moments and a BP neural network", Transactions of the Chinese Society of Agricultural Engineering, vol. 29, no. 18, 15 September 2013, p. 171 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||