CN110414436B - Airport weather video auxiliary observation system - Google Patents

Airport weather video auxiliary observation system

Info

Publication number
CN110414436B
Authority
CN
China
Prior art keywords
weather
video
image
airport
steps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910694122.8A
Other languages
Chinese (zh)
Other versions
CN110414436A (en)
Inventor
刘黎
胡艳红
景颖
张道永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Air Traffic Management Technology Co ltd
Original Assignee
Zhengzhou Air Traffic Management Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Air Traffic Management Technology Co ltd filed Critical Zhengzhou Air Traffic Management Technology Co ltd
Priority to CN201910694122.8A priority Critical patent/CN110414436B/en
Publication of CN110414436A publication Critical patent/CN110414436A/en
Application granted granted Critical
Publication of CN110414436B publication Critical patent/CN110414436B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01WMETEOROLOGY
    • G01W1/00Meteorology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses an airport weather video auxiliary observation system, relating to the field of video-assisted observation, which comprises the following steps: A. cameras capture video of the weather, and a segment of weather video image signal is obtained at intervals and transmitted to an analysis platform; B. the analysis platform applies median filtering to the weather image signal of step A to eliminate noise points; C. a key frame is extracted from the filtered video stream of step B at intervals, and HOG features are extracted from the key frame; D. the HOG features obtained in step C are classified with an AdaBoost classifier to obtain a weather feature list; E. the occurrences of each weather feature in the classification results are tallied by vote, the weather category with the most occurrences is taken as the observation result, and the result is transmitted to the display terminal. The invention observes the weather through analysis of a large volume of video data and, from the observed data, can serve as an aid to predicting the weather over a small area.

Description

Airport weather video auxiliary observation system
Technical Field
The invention relates to the field of video auxiliary observation, in particular to an airport weather video auxiliary observation system.
Background
At present, civil aviation weather within an airport's range is observed with automatic observation equipment, weather radar, satellite cloud images, and the like. The basis on which observers compile and transmit the observation messages used in international exchange is the observer's own visual observation from the observation platform: only after further comparison against the results formed from the element data of the automatic observation equipment is the current weather condition encoded into a fixed-format message for transmission.
This working method requires staff to walk out of the observation platform every hour or half hour to observe conditions across the whole airport, wasting manpower and material resources. Moreover, the observers' judgements are highly subjective, and the only reference equipment is the automatic observation equipment and its data, so the reference material is limited.
Disclosure of Invention
The aim of the invention is to provide an airport weather video auxiliary observation system that solves the problems noted in the background above.
The technical scheme adopted by the invention is as follows:
an airport weather video auxiliary observation system comprises a plurality of cameras, an analysis platform and a display terminal, wherein the cameras are uniformly arranged around an airport.
An airport weather video auxiliary observation method comprises the following steps, carried out in sequence:
A. The cameras capture video of the weather; a segment of weather video image signal is obtained at intervals and transmitted to the analysis platform;
B. The analysis platform applies median filtering to the weather image signal of step A to eliminate noise points;
C. A key frame is extracted from the filtered video stream of step B at intervals, and HOG features are extracted from the key frame;
D. The HOG features obtained in step C are classified with an AdaBoost classifier to obtain a weather feature list;
E. The occurrences of each weather feature in the classification results are tallied by vote; the weather category with the most occurrences is taken as the observation result and transmitted to the display terminal.
In the prior art, only after an observer's visual observation from the observation platform is further compared against the results formed from the element data of the automatic observation equipment is the current weather condition encoded into a fixed-format message for transmission. This working method requires staff to walk out of the observation platform every hour or half hour to observe the whole airport, wasting manpower and material resources. Moreover, observation by staff is highly subjective, and the only reference equipment is the automatic observation equipment and its data, so the reference material is limited.
Further, the weather video image signal of step A is a YUV video stream.
Further, the median filtering of step B comprises the following steps:
B1, scanning the pixel points of the image one by one;
B2, sorting the pixel values of all elements in an x-by-y neighborhood from smallest to largest;
B3, assigning the resulting middle value to the current pixel point.
In step B2, both x and y are odd numbers.
Further, the HOG feature extraction of step C comprises the following steps:
C1: graying the target image window, treating the image as a three-dimensional function of x, y, and gray value z;
C2: normalizing the color space of the input image with a Gamma correction method;
C3: calculating the gradient of each pixel of the image;
C4: dividing the image into cell units of n×n pixels;
C5: compiling a gradient histogram for each cell unit to form that cell unit's feature descriptor;
C6: grouping every few cell units into a block (collection area) and concatenating the feature descriptors of all cell units in a block to obtain the block's HOG feature descriptor;
C7: concatenating the HOG feature descriptors of all blocks in the target image window to obtain the HOG feature descriptor of the window.
Further, the generation of the AdaBoost classifier in step D comprises the following steps:
D1, inputting weather video samples;
D2, performing AdaBoost training;
D3, obtaining the AdaBoost classifier parameters.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The airport weather video auxiliary observation system of the invention uses HOG feature extraction and the AdaBoost algorithm to analyze and observe airport weather. With a large volume of video data under analysis, an operator can further predict the development trend of the weather from the weather changes seen in the video and in the observation equipment, providing auxiliary weather prediction over a small area.
2. In the airport weather video auxiliary observation system of the invention, the YUV video stream collected by the cameras allows the chrominance bandwidth to be reduced during video encoding, in keeping with human perceptual capability.
3. In the airport weather video auxiliary observation system of the invention, the collected raw video stream generally contains noise points, so a median filtering algorithm is adopted to eliminate some of them; the median filtering algorithm is simple and efficient and places little demand on the CPU.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the operation of the present invention.
Detailed Description
All of the features disclosed in this specification, and all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features or steps that are mutually exclusive.
The present invention will be described in detail with reference to fig. 1.
Example 1
The technical scheme adopted by the invention is as follows:
an airport weather video auxiliary observation system comprises a plurality of cameras, an analysis platform and a display terminal, wherein the cameras are uniformly arranged around an airport.
An airport weather video auxiliary observation method comprises the following steps, carried out in sequence:
A. The cameras capture video of the weather; a segment of weather video image signal is obtained at intervals and transmitted to the analysis platform;
B. The analysis platform applies median filtering to the weather image signal of step A to eliminate noise points;
C. A key frame is extracted from the filtered video stream of step B at intervals, and HOG features are extracted from the key frame;
D. The features obtained in step C are classified with an AdaBoost classifier to obtain a weather feature list;
E. The occurrences of each weather feature in the classification results are tallied by vote; the weather category with the most occurrences is taken as the observation result and transmitted to the display terminal.
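Step E amounts to a majority vote over the per-keyframe classification results. The sketch below illustrates this with Python's standard library; the function name and label strings are illustrative, not from the patent:

```python
from collections import Counter

def vote_weather(frame_labels):
    """Majority vote over per-keyframe weather labels (step E): the
    category that occurs most often becomes the observation result."""
    if not frame_labels:
        raise ValueError("no classified key frames to vote on")
    counts = Counter(frame_labels)
    winner, _ = counts.most_common(1)[0]
    return winner

# Hypothetical labels produced for one observation window
labels = ["rain", "rain", "fog", "rain", "overcast"]
print(vote_weather(labels))  # -> rain
```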
The key frame extraction in step C is as follows:
For an image sequence H = {H_1, H_2, ..., H_t, ..., H_T}, where H_t is the t-th frame image in H and T is the length of H, the average value of the pixels at the same position across all frames is taken as the value of the key frame at that position, namely

J(i, j) = (1/T) Σ_{t=1}^{T} H_t(i, j)

where H_t(i, j) is the pixel value of the t-th frame of the image sequence H at coordinate (i, j), T is the length of the image sequence, and J(i, j) is the pixel value of the key frame at coordinate (i, j).
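The pixel-wise averaging above can be sketched in a few lines of NumPy, assuming the frames arrive as equally sized grayscale arrays (the function name is illustrative):

```python
import numpy as np

def average_key_frame(frames):
    """Compute the key frame J by averaging every pixel position over the
    sequence H = {H_1, ..., H_T}: J(i, j) = (1/T) * sum_t H_t(i, j)."""
    stack = np.stack(frames).astype(np.float64)  # shape (T, rows, cols)
    return stack.mean(axis=0)

# Two tiny 2x2 "frames" as a toy example
frames = [np.array([[0, 2], [4, 6]]), np.array([[2, 4], [6, 8]])]
print(average_key_frame(frames))  # [[1. 3.] [5. 7.]]
```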
The process of HOG feature extraction in step C is as follows:
C1: graying the target image window, treating the image as a three-dimensional function of x, y, and gray value z;
C2: normalizing the color space of the input image with a Gamma correction method;
c3: calculating the gradient of each pixel of the image;
the implementation of the gradient:
firstly, convolution operation is carried out on an original image by using [ -1,0,1] gradient operator to obtain gradient component gradscalx in the x direction (horizontal direction, right direction is positive direction), and then convolution operation is carried out on the original image by using [1,0, -1] T gradient operator to obtain gradient component gradscaly in the y direction (vertical direction, upward direction is positive direction). Then, the gradient size and direction of the pixel point are calculated by the following formula.
G x (x,y)=H(x+1,y)-H(x-1,y)
G y (x,y)=H(x,y+1)-H(x,y-1)
In the formula G x (x,y),G y (x, y), and H (x, y) respectively represent a horizontal direction gradient, a vertical direction gradient, and a pixel value at a pixel point (x, y) in the input image. The gradient amplitude and gradient direction at pixel point (x, y) are respectively:
Figure BDA0002148811280000041
Figure BDA0002148811280000042
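The gradient formulas above can be sketched with central differences in NumPy. Border pixels are left at zero here; how the patent handles image borders is not specified:

```python
import numpy as np

def pixel_gradients(img):
    """Central-difference gradients of a grayscale image, matching the
    [-1, 0, 1] operators: G_x(x,y) = H(x+1,y) - H(x-1,y), and likewise
    for G_y. Returns (magnitude, direction in degrees)."""
    h = img.astype(np.float64)
    gx = np.zeros_like(h)
    gy = np.zeros_like(h)
    gx[:, 1:-1] = h[:, 2:] - h[:, :-2]   # horizontal gradient (axis 1 = x)
    gy[1:-1, :] = h[2:, :] - h[:-2, :]   # vertical gradient   (axis 0 = y)
    magnitude = np.hypot(gx, gy)          # sqrt(G_x^2 + G_y^2)
    direction = np.degrees(np.arctan2(gy, gx))
    return magnitude, direction

mag, ang = pixel_gradients(np.arange(9.0).reshape(3, 3))
print(mag[1, 1])  # sqrt(2^2 + 6^2) = sqrt(40)
```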
C4: dividing the image into cell units of n×n pixels;
C5: compiling a gradient histogram for each cell unit to form that cell unit's feature descriptor;
C6: grouping every few cell units into a block (collection area) and concatenating the feature descriptors of all cell units in a block to obtain the block's HOG feature descriptor;
C7: concatenating the HOG feature descriptors of all blocks in the target image window to obtain the HOG feature descriptor of the window.
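Steps C4 to C7 can be sketched as follows. This is a simplified HOG under stated assumptions: 9 unsigned orientation bins, magnitude-weighted cell histograms, and no block normalization (the patent does not specify one); the cell size and bin count are illustrative parameters:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG sketch for steps C4-C7: per-cell gradient histograms
    over unsigned orientation, concatenated into the window descriptor."""
    h = img.astype(np.float64)
    gx = np.zeros_like(h)
    gy = np.zeros_like(h)
    gx[:, 1:-1] = h[:, 2:] - h[:, :-2]
    gy[1:-1, :] = h[2:, :] - h[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    rows, cols = h.shape
    descriptor = []
    for r in range(0, rows - cell + 1, cell):       # C4: cell grid
        for c in range(0, cols - cell + 1, cell):
            cell_ang = ang[r:r + cell, c:c + cell].ravel()
            cell_mag = mag[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(cell_ang, bins=bins,
                                   range=(0.0, 180.0), weights=cell_mag)
            descriptor.append(hist)                  # C5: cell descriptor
    return np.concatenate(descriptor)                # C6/C7: concatenation

desc = hog_descriptor(np.random.default_rng(0).random((32, 32)) * 255)
print(desc.shape)  # (144,) = 16 cells x 9 bins
```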
The AdaBoost classifier is generated as follows:
Assume a weather picture sample set T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, where x_i ∈ χ ⊆ R^n and y_i belongs to the label set {-1, +1}.
First, the weight distribution of the training data is initialized; each training sample is initially given the same weight 1/N:

D_1 = (w_{1,1}, w_{1,2}, ..., w_{1,N}),  w_{1,i} = 1/N,  i = 1, 2, ..., N

Then multiple rounds of iteration are performed, for m = 1, 2, ..., M:
(a) Using the weight distribution D_m, learn the training data set to obtain a basic classifier, selecting the threshold with the lowest error rate:

G_m(x): χ → {-1, +1}

(b) Calculate the classification error rate of G_m(x) on the training data set:

e_m = Σ_{i=1}^{N} w_{m,i} I(G_m(x_i) ≠ y_i)

(c) Calculate the coefficient α_m of G_m(x), which represents the importance of G_m(x) in the final classifier, i.e. the weight of this basic classifier in the final classifier:

α_m = (1/2) ln((1 - e_m) / e_m)

From this formula, when e_m ≤ 1/2, α_m ≥ 0, and α_m increases as e_m decreases, meaning that basic classifiers with smaller classification error rates play a greater role in the final classifier.
(d) Update the weight distribution of the training data set (to obtain a new sample weight distribution for the next iteration):

D_{m+1} = (w_{m+1,1}, w_{m+1,2}, ..., w_{m+1,i}, ..., w_{m+1,N}),
w_{m+1,i} = (w_{m,i} / Z_m) exp(-α_m y_i G_m(x_i)),  i = 1, 2, ..., N

so that the weights of samples misclassified by the basic classifier G_m(x) increase while the weights of correctly classified samples decrease. Here Z_m is a normalization factor that makes D_{m+1} a probability distribution:

Z_m = Σ_{i=1}^{N} w_{m,i} exp(-α_m y_i G_m(x_i))

Finally, the weak classifiers are combined:

f(x) = Σ_{m=1}^{M} α_m G_m(x)

so the final weather classifier is obtained as:

G(x) = sign(f(x)) = sign(Σ_{m=1}^{M} α_m G_m(x))
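The training loop above can be sketched with threshold (decision-stump) base classifiers. The stump search is one plausible reading of "selecting the threshold with the lowest error rate"; the patent does not fix the base learner, and all names here are illustrative:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """AdaBoost with decision-stump base classifiers, following the
    formulas above: error e_m, coefficient alpha_m = 0.5*ln((1-e_m)/e_m),
    and the exponential sample-weight update with normalizer Z_m."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # D_1: uniform weights
    stumps = []
    for _ in range(rounds):
        best = None
        # pick the (feature, threshold, polarity) with lowest weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(X[:, j] >= thr, pol, -pol)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # alpha_m
        w = w * np.exp(-alpha * y * pred)       # re-weight the samples
        w /= w.sum()                            # Z_m normalization
        stumps.append((alpha, j, thr, pol))
    return stumps

def predict(stumps, X):
    """Final classifier G(x) = sign(sum_m alpha_m G_m(x))."""
    f = np.zeros(X.shape[0])
    for alpha, j, thr, pol in stumps:
        f += alpha * np.where(X[:, j] >= thr, pol, -pol)
    return np.sign(f)

# Toy 1-D example: positive class above 0.5
X = np.array([[0.1], [0.2], [0.4], [0.6], [0.8], [0.9]])
y = np.array([-1, -1, -1, 1, 1, 1])
model = train_adaboost(X, y, rounds=3)
print((predict(model, X) == y).all())  # True
```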
example 2
In this embodiment, building on embodiment 1, the median filtering of step B comprises the following steps:
b1, scanning pixel points in the image one by one;
b2, sorting the pixel values of all elements in the range of x and y in the neighborhood from small to large;
and B3, assigning the obtained intermediate value to the current pixel point.
Both x and y are odd numbers, with 3 as the preferred value. The filtering is implemented by a program whose main code is reproduced only as figures in the original publication.
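Since the implementation code survives only as figures, the following is a hedged NumPy sketch of steps B1 to B3; the border handling (edge replication) is an assumption, as the original code is not legible:

```python
import numpy as np

def median_filter(img, x=3, y=3):
    """Median filter per steps B1-B3: scan each pixel, sort its x-by-y
    neighborhood, assign the middle value. x and y must be odd.
    Border handling by edge replication is an assumption."""
    if x % 2 == 0 or y % 2 == 0:
        raise ValueError("window sides x and y must be odd")
    ry, rx = y // 2, x // 2
    padded = np.pad(img, ((ry, ry), (rx, rx)), mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):          # B1: scan pixels one by one
        for j in range(img.shape[1]):
            window = padded[i:i + y, j:j + x]
            out[i, j] = np.median(window)  # B2/B3: middle of sorted values
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],           # impulse noise point
                  [10, 10, 10]])
print(median_filter(noisy)[1, 1])  # 10.0 -- the noise point is removed
```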
The collected raw video stream generally contains noise points, and a median filtering algorithm is adopted to eliminate some of them. The median filtering algorithm is simple and efficient, and places little demand on the CPU.
The above description is only a preferred embodiment of the present invention; the scope of the invention is not limited to it, and any change or substitution that a person skilled in the art could conceive without inventive effort within the technical scope disclosed here falls within that scope. The protection scope of the present invention is therefore that defined by the claims.

Claims (4)

1. An airport weather video auxiliary observation method, characterized by comprising the following steps, carried out in sequence:
A. the cameras capture video of the weather, and a segment of weather video image signal is obtained at intervals and transmitted to the analysis platform;
B. the analysis platform applies median filtering to the weather image signal of step A to eliminate noise points;
C. a key frame is extracted from the filtered video stream of step B at intervals, and HOG features are extracted from the key frame;
D. the features obtained in step C are classified with an AdaBoost classifier to obtain a weather feature list;
E. the occurrences of each weather feature in the classification results are tallied by vote, the weather category with the most occurrences is taken as the observation result, and the result is transmitted to a display terminal;
the HOG feature extraction in the step C comprises the following steps:
c1: graying a target graphic window, and regarding an image as a three-dimensional image with the gray levels of x, y and z;
c2: standardizing the color space of the input image by using a Gamma correction method;
c3: calculating the gradient of each pixel of the image;
c4: dividing the image into n x n cell units;
c5: counting the gradient histogram of each cell unit to form a feature descriptor of each cell unit;
c6: forming a collection area by every several cell units, and connecting the feature descriptors of all the cell units in one collection area in series to obtain the HOG feature descriptor of the collection area;
c7: and connecting the HOG feature descriptors of all the collection areas in the target graphic window in series to obtain the HOG feature descriptor of the target graphic window.
2. The airport weather video auxiliary observation method of claim 1, wherein the weather video image signal of step A is a YUV video stream.
3. The airport weather video auxiliary observation method of claim 1, wherein the median filtering of step B comprises the following steps:
B1, scanning the pixel points of the image one by one;
B2, sorting the pixel values of all elements in an x-by-y neighborhood from smallest to largest;
B3, assigning the resulting middle value to the current pixel point;
in step B2, both x and y are odd numbers.
4. The airport weather video auxiliary observation method of claim 1, wherein the generation of the AdaBoost classifier in step D comprises the following steps:
D1, inputting weather video samples;
D2, performing AdaBoost training;
D3, obtaining the AdaBoost classifier parameters.
CN201910694122.8A 2019-07-30 2019-07-30 Airport weather video auxiliary observation system Active CN110414436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910694122.8A CN110414436B (en) 2019-07-30 2019-07-30 Airport weather video auxiliary observation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910694122.8A CN110414436B (en) 2019-07-30 2019-07-30 Airport weather video auxiliary observation system

Publications (2)

Publication Number Publication Date
CN110414436A CN110414436A (en) 2019-11-05
CN110414436B true CN110414436B (en) 2023-01-10

Family

ID=68364194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910694122.8A Active CN110414436B (en) 2019-07-30 2019-07-30 Airport weather video auxiliary observation system

Country Status (1)

Country Link
CN (1) CN110414436B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463196A (en) * 2014-11-11 2015-03-25 中国人民解放军理工大学 Video-based weather phenomenon recognition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834912B (en) * 2015-05-14 2017-12-22 北京邮电大学 A kind of weather recognition methods and device based on image information detection

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463196A (en) * 2014-11-11 2015-03-25 中国人民解放军理工大学 Video-based weather phenomenon recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Weather phenomenon recognition method based on outdoor images; Li Qian et al.; Journal of Computer Applications; 2011-06-01 (No. 06); full text *

Also Published As

Publication number Publication date
CN110414436A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN104063883B (en) A kind of monitor video abstraction generating method being combined based on object and key frame
CN111079655B (en) Method for recognizing human body behaviors in video based on fusion neural network
DE112018006337T5 (en) Method and system for classifying an object of interest using an artificial neural network
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
CN111507275B (en) Video data time sequence information extraction method and device based on deep learning
CN112396635B (en) Multi-target detection method based on multiple devices in complex environment
CN110070091A (en) The semantic segmentation method and system rebuild based on dynamic interpolation understood for streetscape
CN112258525B (en) Image abundance statistics and population identification algorithm based on bird high-frame frequency sequence
CN104820995A (en) Large public place-oriented people stream density monitoring and early warning method
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN112131927A (en) Sow delivery time prediction system based on posture transformation characteristics in later gestation period
CN108345835B (en) Target identification method based on compound eye imitation perception
CN113111716A (en) Remote sensing image semi-automatic labeling method and device based on deep learning
CN115620178A (en) Real-time detection method for abnormal and dangerous behaviors of power grid of unmanned aerial vehicle
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN110472567A (en) A kind of face identification method and system suitable under non-cooperation scene
CN110414436B (en) Airport weather video auxiliary observation system
CN117152670A (en) Behavior recognition method and system based on artificial intelligence
CN110135274B (en) Face recognition-based people flow statistics method
CN111104965A (en) Vehicle target identification method and device
CN111402223B (en) Transformer substation defect problem detection method using transformer substation video image
CN112532938B (en) Video monitoring system based on big data technology
CN114120056A (en) Small target identification method, small target identification device, electronic equipment, medium and product
CN111626102B (en) Bimodal iterative denoising anomaly detection method and terminal based on video weak marker
CN113888397A (en) Tobacco pond cleaning and plant counting method based on unmanned aerial vehicle remote sensing and image processing technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant