CN117274862B - Airport ground service vehicle state detection method and system based on convolutional neural network


Info

Publication number
CN117274862B
CN117274862B (application CN202311219844.0A)
Authority
CN
China
Prior art keywords
information
vehicle
airport
ground service
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311219844.0A
Other languages
Chinese (zh)
Other versions
CN117274862A (en)
Inventor
侯晓慧
彭建军
金翔
李再刚
扈成双
刘占有
焦星云
刘凌风
马建瑞
钟金华
李林洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Airport Group Ltd
Zhonghang Electricity System Engineering Co ltd
China Design Group Beijing Civil Aviation Design And Research Institute Co ltd
Original Assignee
Chongqing Airport Group Ltd
Zhonghang Electricity System Engineering Co ltd
China Design Group Beijing Civil Aviation Design And Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Airport Group Ltd, Zhonghang Electricity System Engineering Co ltd, China Design Group Beijing Civil Aviation Design And Research Institute Co ltd filed Critical Chongqing Airport Group Ltd
Priority to CN202311219844.0A
Publication of CN117274862A
Application granted
Publication of CN117274862B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an airport ground service vehicle state detection method, system, computing device and storage medium based on a convolutional neural network. In the technical scheme provided by the invention, image sample data sets of a plurality of types of ground service vehicles are obtained, and a GoogLeNet model is trained on the image sample data sets to obtain a convolutional neural network model; video image information of the vehicle to be detected is collected and input into the convolutional neural network model to obtain identification result information; the state information of the vehicle to be detected is then calculated from the identification result information, the acquired airport facility information and the acquired vehicle-mounted navigation information, the state information comprising at least the position information and movement speed information of the airport ground service vehicle. The invention can train a neural network model suited to ground service vehicles, use it to recognize video images, and, by combining the identification results with airport facility information and vehicle navigation information, obtain an accurate vehicle state, thereby helping the airport guarantee safe and efficient operation.

Description

Airport ground service vehicle state detection method and system based on convolutional neural network
Technical Field
The invention relates to the field of intelligent video analysis, in particular to an airport ground service vehicle state detection method, system, computing equipment and computer storage medium based on a convolutional neural network.
Background
With rapid social and economic development, the air transportation industry has become an increasingly important mode of transportation. At every airport, as the number of flights grows, the number of corresponding ground support vehicles grows as well. To ensure that the ground service support system can complete support tasks such as getting guide vehicles into place, approaching aircraft and positioning fuelling vehicles rapidly, accurately and safely, information on the flight support nodes of airport ground service vehicles must be collected. Traditional collection methods rely mainly on manual writing or entry on portable devices, but these methods require airport support staff to report by themselves or have professionals compile and submit statistics; they suffer from slow information updates and a high error rate, seriously affecting the efficiency and safety of airport flight operations.
Against this background, research has begun on systems for automatically acquiring airport flight support nodes. Invention patent CN112349150B, "Video acquisition method and system for airport flight guarantee time nodes", proposes using the Caffe framework and machine learning to acquire information on the real-time state of an aircraft; invention patent CN112990683A, "Early warning method for flight guarantee flow nodes and related equipment", proposes handling anomalies of guarantee flow nodes from the perspective of flight guarantee node rules. However, these solutions do not consider the frequent movements of ground service vehicles within the airport and lack knowledge of ground service vehicle operation information; this not only poses a certain risk to airport operation, but also prevents the real-time status of ground service vehicles from being used to further assist the smooth progress of flight support work.
Disclosure of Invention
In order to solve the technical problems, the invention provides an airport ground service vehicle state detection method based on a convolutional neural network, and a corresponding airport ground service vehicle state detection system, a computing device and a computer storage medium based on the convolutional neural network.
According to one aspect of the present invention, there is provided a method for detecting the status of an airport ground service vehicle based on a convolutional neural network, the method comprising:
acquiring image sample data sets of a plurality of types of ground service vehicles, and training a GoogLeNet model according to the image sample data sets to obtain a convolutional neural network model suitable for airport ground service vehicles;
collecting video image information of the vehicle to be detected, inputting it into the convolutional neural network model for prediction and identification, and obtaining identification result information; acquiring airport facility information and vehicle navigation information of the vehicle to be detected;
calculating based on the identification result information, the airport facility information and the vehicle navigation information to obtain the state information of the vehicle to be detected; the state information of the vehicle to be detected at least comprises position information and movement speed information of the airport ground service vehicle.
In the above aspect, the acquiring image sample data sets of a plurality of kinds of ground service vehicles further includes:
acquiring images of different types of ground service vehicles in four directions, namely the front face, the back face, the left side face and the right side face, and generating an image sample data set of the ground service vehicles;
classifying images in the image sample data set according to the photographing direction;
the image sample data in the image sample data set is marked according to the kind of the ground service vehicle.
In the above scheme, training the GoogLeNet model according to the image sample dataset to obtain a convolutional neural network model suitable for airport ground service vehicles, further includes:
preprocessing the image sample data set; the preprocessing at least comprises adding salt-and-pepper noise with a mean value of 0 and a variance of 1 to a first preset proportion of the images in the image sample data set, and extracting a second preset proportion of the images of each ground service vehicle to enlarge or reduce them;
Based on the loss function, improving the GoogLeNet model to obtain a target GoogLeNet model;
and performing supervised training on the improved target GoogLeNet model based on the image sample dataset to obtain a convolutional neural network model suitable for airport ground service vehicles.
In the above solution, the improvement of GoogLeNet model based on the loss function to obtain the target GoogLeNet model further includes:
Modifying the loss function in the GoogLeNet model, wherein the modified loss function is in the form of:
$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\Big[\big(\sqrt{w_i}-\sqrt{\hat{w}_i}\big)^2+\big(\sqrt{h_i}-\sqrt{\hat{h}_i}\big)^2\Big] \\
&+\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big(C_i-\hat{C}_i\big)^2
+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\big(C_i-\hat{C}_i\big)^2 \\
&+\sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in\mathrm{classes}}\big(p_i(c)-\hat{p}_i(c)\big)^2
\end{aligned}
$$
where $\lambda_{coord}$ denotes the weight of the bounding-box coordinate prediction loss; $S^2$ denotes the number of grid cells into which the image is divided; $B$ denotes the number of bounding boxes predicted in each grid cell; $i$ denotes the $i$-th grid cell and $j$ the $j$-th bounding box in that cell; $\mathbb{1}_{ij}^{obj}$ indicates whether the $i$-th grid cell contains an object, taking the value 1 when it does and 0 when it does not; $x_i$ and $\hat{x}_i$ denote the true value and the estimate of the abscissa of the bounding-box center point; $y_i$ and $\hat{y}_i$ denote the true value and the estimate of the ordinate of the bounding-box center point; $w_i$ and $\hat{w}_i$ denote the true value and the estimate of the bounding-box width; $h_i$ and $\hat{h}_i$ denote the true value and the estimate of the bounding-box height, with $x_i$, $\hat{x}_i$, $y_i$, $\hat{y}_i$, $w_i$, $\hat{w}_i$, $h_i$, $\hat{h}_i$ all normalized; $C_i$ and $\hat{C}_i$ denote the true value and the estimate of the bounding-box confidence; $\lambda_{noobj}$ denotes the weight of the confidence prediction loss for bounding boxes that contain no object; $\mathbb{1}_{ij}^{noobj}$ indicates whether the $i$-th grid cell contains no object, taking the value 1 when it contains none and 0 when it contains an object; $c$ denotes the object class; and $p_i(c)$ and $\hat{p}_i(c)$ denote the true code value and the estimated code value of the class to which the grid cell belongs.
In the scheme, the video image information of the vehicle to be detected is collected, and is input into a convolutional neural network model for prediction and identification, so that identification result information is obtained; acquiring airport facility information and vehicle navigation information of a vehicle to be tested, and further comprising:
the method comprises the steps that video image information acquisition is carried out on a vehicle to be tested through a fixed-position camera; wherein, the fixed-position camera is fixed on airport facilities;
acquiring airport facility information, and calibrating fixed-position cameras corresponding to airport facilities according to the airport facility information; the airport facility information at least comprises at least three fixed characteristic information of the airport facility and characteristic position information corresponding to the fixed characteristic;
The identification result information at least comprises vehicle type information and vehicle two-dimensional coordinates; the vehicle navigation information includes at least time information and navigation position information.
In the above scheme, the calibrating the fixed-position camera corresponding to the airport facility according to the airport facility information further includes:
Acquiring camera parameters, including at least: focal length, principal coordinate point, and pixel size information;
Constructing a projection matrix of the camera by using the camera parameters;
based on the feature position information corresponding to the fixed feature information, the projection matrix is utilized to calculate corresponding feature two-dimensional coordinates according to the feature three-dimensional coordinates of the feature position information.
In the above scheme, the vehicle state information to be detected is obtained by calculating based on the identification result information, the airport facility information and the vehicle navigation information; the vehicle state information to be detected at least comprises vehicle type information, target position information and movement speed information of the airport ground service vehicle, and further comprises:
based on the characteristic two-dimensional coordinates obtained by calculation of airport facility information and the vehicle two-dimensional coordinates in the identification result information, reversely deriving projection position information corresponding to the vehicle to be detected by utilizing a projection matrix;
matching the navigation position information in the vehicle navigation information with the projection position information to obtain high-precision time and target position information;
And based on the video image information and the vehicle navigation information, fusion estimation of the motion speed information is performed by using an extended Kalman filtering method.
In the above scheme, the method further comprises:
setting corresponding electronic fences at airports aiming at different types of ground service vehicles;
Acquiring corresponding coordinate point data of the edge of the electronic fence according to the state information of the vehicle to be detected;
Judging whether the vehicle to be tested is positioned in the range of the coordinate point of the edge of the electronic fence according to the state information of the vehicle to be tested;
And judging whether the vehicle to be tested crosses the boundary or not and whether an alarm is required or not according to the position relation between the target position information of the vehicle to be tested and the electronic fence and the type of the electronic fence.
In the above scheme, the method further comprises:
determining vehicle type information of the vehicle to be detected according to the identification result information of the vehicle to be detected;
determining the direction of the vehicle to be tested and the front, back, left side and right side of the vehicle to be tested according to the movement speed information of the vehicle to be tested;
And comparing the video image information of the vehicle to be tested with the corresponding direction image of the corresponding vehicle type in the database, and judging whether an alarm is required according to the comparison result.
According to another aspect of the present invention, there is provided an airport ground service vehicle state detection system based on a convolutional neural network, comprising: the system comprises a model training module, an identification module and a calculation module; wherein,
The model training module is used for acquiring image sample data sets of a plurality of types of ground service vehicles, and training GoogLeNet models according to the image sample data sets to obtain convolutional neural network models suitable for the airport ground service vehicles;
The recognition module is used for collecting video image information of the vehicle to be detected, inputting a convolutional neural network model for prediction recognition, and obtaining recognition result information; acquiring airport facility information and vehicle navigation information of a vehicle to be tested;
the computing module is used for computing based on the identification result information, the airport facility information and the vehicle navigation information to obtain the state information of the vehicle to be detected; the state information of the vehicle to be detected at least comprises position information and movement speed information of the airport ground service vehicle.
According to the technical scheme provided by the invention, image sample data sets of a plurality of types of ground service vehicles are obtained, and a GoogLeNet model is trained according to the image sample data sets to obtain a convolutional neural network model suitable for the airport ground service vehicles; collecting video image information of a vehicle to be tested, inputting a convolutional neural network model for prediction and identification, and obtaining identification result information; acquiring airport facility information and vehicle navigation information of a vehicle to be tested; calculating based on the identification result information, airport facility information and vehicle navigation information to obtain vehicle state information to be detected; the state information of the vehicle to be detected at least comprises position information and movement speed information of the airport ground service vehicle. Through the acquired ground service vehicle image sample, the GoogLeNet model is purposefully improved and trained to obtain a convolutional neural network model suitable for the airport ground service vehicle; based on the convolutional neural network model, the type of the target vehicle can be accurately identified; meanwhile, based on the corresponding relation between the fixed camera and the airport facilities, the calibration of the camera is realized according to the fixed characteristics of the airport facilities, the projection matrix is utilized for projection and back projection, the approximate position of the vehicle is rapidly determined from the ground service vehicle image to be detected acquired by the camera, then the approximate position is matched with the information acquired by the vehicle-mounted navigation system, the accurate time and position information is accurately calculated, the vehicle running speed is further calculated, the type, the position and the running speed of the vehicle are finally obtained, so that airport staff can know the related information of the ground service vehicle more rapidly and accurately, the normal running of the guarantee work of the airport is assisted, and the efficiency and the safety of the guarantee work are greatly improved; meanwhile, by arranging electronic fences aiming at different types of ground service vehicles, vehicles are prevented from entering an area possibly causing danger, and the safety of the integral operation of the airport is effectively improved by limiting the operation area of various vehicles in the airport; by detecting the appearance of the vehicle, the operator is prompted that the related vehicle may have an operation failure or that a component drop may occur, thereby further reducing the risk that may be caused to the overall operation of the airport.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 shows a flow diagram of a method for airport ground service vehicle state detection based on convolutional neural network, in accordance with one embodiment of the present invention;
FIG. 2 illustrates a fixed-position camera calibration method based on airport facility information, according to one embodiment of the invention;
FIG. 3 is a schematic diagram showing the principle of determining projection position information of a vehicle to be tested;
FIG. 4 illustrates a flow diagram of a convolutional neural network model training and generation method suitable for use with airport ground service vehicles, in accordance with one embodiment of the present invention;
FIG. 5 shows a schematic flow diagram of image sample dataset preprocessing according to one embodiment of the present invention;
FIG. 6 shows a flow diagram of an electronic fence based out-of-range alert method in accordance with one embodiment of the present invention;
FIG. 7 illustrates a flow diagram of a ground vehicle appearance based detection alarm method in accordance with one embodiment of the present invention;
FIG. 8 illustrates a block diagram of an airport ground service vehicle state detection system based on a convolutional neural network, in accordance with one embodiment of the present invention;
FIG. 9 shows a schematic flow diagram of an airport ground service vehicle state detection system based on a convolutional neural network in accordance with another embodiment of the invention;
FIG. 10 illustrates a schematic diagram of a computing device, according to an embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
FIG. 1 shows a flow diagram of a convolutional neural network-based airport ground service vehicle state detection method, which includes the steps of:
step S101, acquiring image sample data sets of a plurality of types of ground service vehicles, and training GoogLeNet models according to the image sample data sets to obtain a convolutional neural network model suitable for the airport ground service vehicles.
Specifically, the acquiring the image sample data sets of the plurality of kinds of ground service vehicles further includes:
acquiring images of different types of ground service vehicles in four directions, namely the front face, the back face, the left side face and the right side face, and generating an image sample data set of the ground service vehicles;
classifying images in the image sample data set according to the photographing direction;
the image sample data in the image sample data set is marked according to the kind of the ground service vehicle.
Preferably, the kinds of the ground service vehicles may include: the system comprises a guide vehicle, an oiling vehicle, a power supply vehicle, an air source vehicle, an air conditioner vehicle, a clear water vehicle, a sewage vehicle, a garbage vehicle, a conveyor belt vehicle, a towing vehicle, a traction vehicle and a ferry vehicle; in addition, new ground service vehicle types can be added; the present invention is not limited to a specific type of ground service vehicle.
Preferably, after collecting images of the different types of ground service vehicles in the four directions of the front, back, left side and right side, the different ground service vehicle types are marked with numbers; for example, images of guide vehicles are marked with the number 1, images of ferry vehicles with the number 2, images of fuelling vehicles with the number 3, and so on.
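As an illustration of this organization, the sketch below builds a labeled sample index from a directory of collected images; the directory layout, the `VEHICLE_LABELS` mapping and the function name are assumptions made for the example, not details fixed by this embodiment.

```python
from pathlib import Path

# Hypothetical numeric labels following the numbering scheme above
# (guide vehicle = 1, ferry vehicle = 2, fuelling vehicle = 3, ...).
VEHICLE_LABELS = {"guide_vehicle": 1, "ferry_vehicle": 2, "fuelling_vehicle": 3}
ORIENTATIONS = ["front", "back", "left", "right"]  # the four shooting directions

def build_sample_index(root: str) -> list[tuple[str, int, str]]:
    """Collect (image_path, vehicle_label, orientation) triples from a tree
    laid out as root/<vehicle_type>/<orientation>/*.jpg."""
    samples = []
    for vehicle, label in VEHICLE_LABELS.items():
        for orientation in ORIENTATIONS:
            for img in sorted((Path(root) / vehicle / orientation).glob("*.jpg")):
                samples.append((str(img), label, orientation))
    return samples
```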
Step S102, acquiring video image information of a vehicle to be detected, inputting a convolutional neural network model for prediction and identification, and obtaining identification result information; and acquiring airport facility information and vehicle navigation information of the vehicle to be tested.
Specifically, video image information acquisition is carried out on the vehicle to be tested through a fixed-position camera; wherein, the fixed-position camera is fixed on airport facilities;
acquiring airport facility information, and calibrating fixed-position cameras corresponding to airport facilities according to the airport facility information; the airport facility information at least comprises at least three fixed characteristic information of the airport facility and characteristic position information corresponding to the fixed characteristic;
The identification result information at least comprises vehicle type information and vehicle two-dimensional coordinates; the vehicle navigation information includes at least time information and navigation position information.
Specifically, the calibration of the fixed-position camera corresponding to the airport facility according to the airport facility information is shown in fig. 2, and fig. 2 shows a fixed-position camera calibration method based on the airport facility information according to an embodiment of the present invention, where the method includes the following steps:
step S201, obtaining camera parameters, at least including: focal length, principal coordinate point, and pixel size information.
Step S202, a projection matrix of the camera is constructed by using camera parameters.
Preferably, based on the camera parameters, the mapping from spatial points to image points is determined and a projection matrix is constructed, which may be a 3×4 matrix.
Step S203, based on the feature position information corresponding to the fixed feature information, the corresponding feature two-dimensional coordinates are calculated by projection according to the feature three-dimensional coordinates of the feature position information by using a projection matrix.
Preferably, a computer vision method is used to project the feature three-dimensional coordinates of the feature position information of at least three airport fixed features, obtaining the corresponding feature two-dimensional coordinates. These airport fixed features are fixed buildings or fixed landmarks in the airport, for example, a particular gate of a terminal building, the fixed parking area of a given type of ground service vehicle, or the tower.
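To make steps S201–S203 concrete, the following sketch constructs the projection matrix from the listed camera parameters and projects the three-dimensional coordinates of a fixed airport feature to two-dimensional image coordinates. The numeric values, function names and the identity pose are illustrative assumptions; in practice the pose (R, t) would come from calibrating against the at-least-three fixed features.

```python
import numpy as np

def intrinsic_matrix(focal_mm: float, pixel_size_mm: float,
                     principal_point: tuple[float, float]) -> np.ndarray:
    """Camera intrinsics K from focal length, pixel size and principal point."""
    f_px = focal_mm / pixel_size_mm          # focal length in pixels
    cx, cy = principal_point
    return np.array([[f_px, 0.0, cx],
                     [0.0, f_px, cy],
                     [0.0, 0.0, 1.0]])

def projection_matrix(K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """3x4 projection matrix P = K [R | t] mapping world points to pixels."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P: np.ndarray, X_world: np.ndarray) -> np.ndarray:
    """Project a 3-D world point to 2-D pixel coordinates."""
    X_h = np.append(X_world, 1.0)            # homogeneous coordinates
    u, v, w = P @ X_h
    return np.array([u / w, v / w])

# Example with assumed values; R = I, t = 0 places the camera at the origin.
K = intrinsic_matrix(focal_mm=8.0, pixel_size_mm=0.0048, principal_point=(960, 540))
P = projection_matrix(K, R=np.eye(3), t=np.zeros(3))
feature_3d = np.array([12.0, 3.0, 50.0])     # fixed feature, e.g. a gate corner
print(project(P, feature_3d))                # -> 2-D feature coordinates
```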
Step S103, calculating based on the identification result information, airport facility information and vehicle navigation information to obtain vehicle state information to be detected; the state information of the vehicle to be detected at least comprises position information and movement speed information of the airport ground service vehicle.
Specifically, based on the characteristic two-dimensional coordinates obtained by calculation of airport facility information and the vehicle two-dimensional coordinates in the identification result information, reversely deriving projection position information corresponding to the vehicle to be detected by utilizing a projection matrix;
matching the navigation position information in the vehicle navigation information with the projection position information to obtain high-precision time and target position information;
And based on the video image information and the vehicle navigation information, fusion estimation of the motion speed information is performed by using an extended Kalman filtering method.
Preferably, the projection position information corresponding to the vehicle to be detected is derived in reverse using the projection matrix, based on the feature two-dimensional coordinates calculated from the airport facility information and the vehicle two-dimensional coordinates in the identification result information, as shown in fig. 3, which illustrates the principle of determining the projection position information of the vehicle to be detected. In the figure, calibration points 1, 2 and 3 are the feature positions of three airport fixed features, and their coordinates are the feature two-dimensional coordinates corresponding to the feature three-dimensional coordinates of each feature position; calibration point X is the position of the vehicle to be detected in the image, and its coordinates are the vehicle two-dimensional coordinates in the identification result information; f is the projection imaging plane.
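A minimal sketch of the back-projection is given below. It assumes the vehicle's ground-contact point lies on the apron plane z = 0, so the 3×4 projection matrix restricted to that plane becomes an invertible 3×3 homography; this assumption, like the function names, is an illustrative choice, since the embodiment does not spell out the exact algebra.

```python
import numpy as np

def back_project_to_ground(P: np.ndarray, pixel: tuple[float, float]) -> np.ndarray:
    """Recover the world position of a point seen at `pixel`, assuming it
    lies on the ground plane z = 0 (and that the camera center does not).
    With z fixed at 0, projection reduces to the 3x3 homography formed by
    columns 1, 2 and 4 of P, which can be inverted directly."""
    H = P[:, [0, 1, 3]]                      # drop the z column (z = 0)
    u, v = pixel
    X, Y, W = np.linalg.solve(H, np.array([u, v, 1.0]))
    return np.array([X / W, Y / W, 0.0])     # approximate ground position
```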
Preferably, based on the obtained high-precision time and target position information, the movement speed information of the vehicle to be detected is estimated: as described above, the video image information and the vehicle navigation information are fused using the extended Kalman filtering method, sketched below.
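One way to realize the extended-Kalman-filter fusion is sketched below, with a constant-velocity state [x, y, vx, vy] that is updated with both the camera-derived projection positions and the navigation fixes. The noise parameters and the linear measurement model are simplifying assumptions (with a linear model the update step coincides with the ordinary Kalman filter), not the embodiment's exact formulation.

```python
import numpy as np

class VelocityFusionEKF:
    """Constant-velocity filter over state [x, y, vx, vy]; fuses camera
    positions and navigation fixes to estimate the movement speed."""

    def __init__(self, q: float = 0.5, r_cam: float = 4.0, r_nav: float = 1.0):
        self.x = np.zeros(4)                 # state estimate
        self.P = np.eye(4) * 100.0           # state covariance
        self.q, self.r_cam, self.r_nav = q, r_cam, r_nav

    def predict(self, dt: float) -> None:
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt               # x += vx*dt, y += vy*dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(4)

    def update(self, pos_xy: np.ndarray, from_camera: bool) -> None:
        H = np.zeros((2, 4))
        H[0, 0] = H[1, 1] = 1.0              # both sources observe position only
        R = (self.r_cam if from_camera else self.r_nav) * np.eye(2)
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x += K @ (pos_xy - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

    @property
    def speed(self) -> float:
        return float(np.hypot(self.x[2], self.x[3]))   # |v|, e.g. in m/s
```

In use, `predict(dt)` is called once per time step and `update(...)` once per available measurement, so camera and navigation fixes arriving at different rates are fused into a single speed estimate.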
According to the airport ground service vehicle state detection method based on the convolutional neural network, provided by the embodiment, image sample data sets of a plurality of types of ground service vehicles are obtained, and a GoogLeNet model is trained according to the image sample data sets, so that a convolutional neural network model suitable for the airport ground service vehicles is obtained; collecting video image information of a vehicle to be tested, inputting a convolutional neural network model for prediction and identification, and obtaining identification result information; acquiring airport facility information and vehicle navigation information of a vehicle to be tested; calculating based on the identification result information, airport facility information and vehicle navigation information to obtain vehicle state information to be detected; the state information of the vehicle to be detected at least comprises position information and movement speed information of the airport ground service vehicle. Through the acquired ground service vehicle image sample, the GoogLeNet model is purposefully improved and trained to obtain a convolutional neural network model suitable for the airport ground service vehicle; based on the convolutional neural network model, the type of the target vehicle can be accurately identified; meanwhile, based on the corresponding relation between the fixed camera and the airport facilities, the calibration of the camera is realized according to the fixed characteristics of the airport facilities, the projection matrix is utilized for projection and back projection, the approximate position of the vehicle is rapidly determined from the ground service vehicle image to be detected acquired by the camera, then the approximate position is matched with the information acquired by the vehicle-mounted navigation system, the accurate time and position information is accurately calculated, the vehicle running speed is further calculated, and finally the type, the position and the running speed of the vehicle are obtained, so that airport staff can know the related information of the ground service vehicle more rapidly and accurately, the normal running of the guarantee work of the airport is assisted, and the efficiency and the safety of the guarantee work are greatly improved.
FIG. 4 shows a flow diagram of a convolutional neural network model training and generation method for airport ground service vehicles, in accordance with one embodiment of the present invention, as shown in FIG. 4, comprising the steps of:
Step S401, preprocessing the image sample data set; the preprocessing at least comprises adding salt-and-pepper noise with a mean value of 0 and a variance of 1 to a first preset proportion of the images in the image sample data set, and extracting a second preset proportion of the images of each ground service vehicle to enlarge or reduce them.
Preferably, the first preset proportion may be 1/4 and the second preset proportion may be 1/2; each extracted image may be enlarged or reduced using a random number in [0, 1.5] as the scale factor.
Preferably, the preprocessing is performed on the image of each ground service vehicle, as shown in fig. 5, and fig. 5 shows a schematic flow chart of the preprocessing of the image sample data set according to one embodiment of the present invention. For each ground service vehicle, 1/4 of the image is extracted from the collected image data set of the ground service vehicle, the image is added with salt and pepper noise and then is put back into the data set, and 1/2 of the image is randomly amplified or reduced and then is put back into the data set.
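A sketch of this preprocessing, assuming NumPy and OpenCV, is given below. Note that salt-and-pepper noise is normally parameterized by a corruption density rather than by a mean and variance, so the density used here is an assumption; the 1/4 and 1/2 proportions and the [0, 1.5] scale range follow this embodiment.

```python
import random
import cv2
import numpy as np

def add_salt_pepper(img: np.ndarray, density: float = 0.02) -> np.ndarray:
    """Flip a random fraction of pixels to black (pepper) or white (salt)."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < density / 2] = 0
    out[mask > 1 - density / 2] = 255
    return out

def random_rescale(img: np.ndarray) -> np.ndarray:
    """Enlarge or reduce with a random scale drawn from [0, 1.5]."""
    scale = max(random.uniform(0.0, 1.5), 0.05)   # guard against a zero-size image
    h, w = img.shape[:2]
    return cv2.resize(img, (max(1, int(w * scale)), max(1, int(h * scale))))

def preprocess(images: list[np.ndarray]) -> list[np.ndarray]:
    """Per vehicle type: add noise to 1/4 of the images and rescale 1/2 of
    them, putting the augmented copies back into the data set."""
    n = len(images)
    noisy = [add_salt_pepper(img) for img in random.sample(images, n // 4)]
    scaled = [random_rescale(img) for img in random.sample(images, n // 2)]
    return images + noisy + scaled
```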
Step S402, based on the loss function, the GoogLeNet model is improved, and a target GoogLeNet model is obtained.
Specifically, in order to overcome the mismatch between the classification-information dimension and the acquisition-frame dimension in the GoogLeNet model, scale factors are introduced into the loss function to weight the classification probabilities and the errors of the b-boxes (Bounding Boxes) more heavily, while grids containing no objects are given a smaller weight; at the same time, the loss function is modified to use the square root of the bounding-box width and height as a penalty term. The modified loss function takes the form:
$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\Big[\big(\sqrt{w_i}-\sqrt{\hat{w}_i}\big)^2+\big(\sqrt{h_i}-\sqrt{\hat{h}_i}\big)^2\Big] \\
&+\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big(C_i-\hat{C}_i\big)^2
+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\big(C_i-\hat{C}_i\big)^2 \\
&+\sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in\mathrm{classes}}\big(p_i(c)-\hat{p}_i(c)\big)^2
\end{aligned}
$$
where $\lambda_{coord}$ denotes the weight of the bounding-box coordinate prediction loss; $S^2$ denotes the number of grid cells into which the image is divided; $B$ denotes the number of bounding boxes predicted in each grid cell; $i$ denotes the $i$-th grid cell and $j$ the $j$-th bounding box in that cell; $\mathbb{1}_{ij}^{obj}$ indicates whether the $i$-th grid cell contains an object, taking the value 1 when it does and 0 when it does not; $x_i$ and $\hat{x}_i$ denote the true value and the estimate of the abscissa of the bounding-box center point; $y_i$ and $\hat{y}_i$ denote the true value and the estimate of the ordinate of the bounding-box center point; $w_i$ and $\hat{w}_i$ denote the true value and the estimate of the bounding-box width; $h_i$ and $\hat{h}_i$ denote the true value and the estimate of the bounding-box height, with $x_i$, $\hat{x}_i$, $y_i$, $\hat{y}_i$, $w_i$, $\hat{w}_i$, $h_i$, $\hat{h}_i$ all normalized; $C_i$ and $\hat{C}_i$ denote the true value and the estimate of the bounding-box confidence; $\lambda_{noobj}$ denotes the weight of the confidence prediction loss for bounding boxes that contain no object; $\mathbb{1}_{ij}^{noobj}$ indicates whether the $i$-th grid cell contains no object, taking the value 1 when it contains none and 0 when it contains an object; $c$ denotes the object class; and $p_i(c)$ and $\hat{p}_i(c)$ denote the true code value and the estimated code value of the class to which the grid cell belongs.
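For concreteness, the following is a direct NumPy transcription of the loss above for a single image; the tensor layout and the default weights λ_coord = 5 and λ_noobj = 0.5 are illustrative assumptions.

```python
import numpy as np

def detection_loss(pred_boxes, true_boxes, pred_cls, true_cls,
                   obj_box, obj_cell, lambda_coord=5.0, lambda_noobj=0.5):
    """Modified loss for one image.

    pred_boxes, true_boxes : (S*S, B, 5) arrays of [x, y, w, h, C] per box,
                             with x, y, w, h normalized as in the text and
                             w, h assumed non-negative.
    pred_cls, true_cls     : (S*S, num_classes) class encodings per grid cell.
    obj_box                : (S*S, B) indicator, 1 if box j of cell i holds
                             an object, else 0.
    obj_cell               : (S*S,) indicator, 1 if cell i contains an object.
    """
    x, y, w, h, C = (pred_boxes[..., k] for k in range(5))
    xt, yt, wt, ht, Ct = (true_boxes[..., k] for k in range(5))

    coord = lambda_coord * np.sum(obj_box * ((x - xt) ** 2 + (y - yt) ** 2))
    # square-root terms: the same width/height error counts more in small boxes
    size = lambda_coord * np.sum(obj_box * ((np.sqrt(w) - np.sqrt(wt)) ** 2 +
                                            (np.sqrt(h) - np.sqrt(ht)) ** 2))
    conf = np.sum(obj_box * (C - Ct) ** 2)
    noobj = lambda_noobj * np.sum((1.0 - obj_box) * (C - Ct) ** 2)
    cls = np.sum(obj_cell[:, None] * (pred_cls - true_cls) ** 2)
    return coord + size + conf + noobj + cls
```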
Step S403, performing supervised training on the improved target GoogLeNet model based on the image sample dataset to obtain a convolutional neural network model suitable for airport ground service vehicles.
Preferably, the modified GoogLeNet model is trained using the preprocessed image sample dataset, enabling the GoogLeNet model to better identify the ground vehicles through image samples of various types of ground vehicles.
The method in this embodiment alleviates the mismatch between the classification-information dimension and the acquisition-frame dimension in the GoogLeNet model; since the same deviation has a small influence in a large bounding box but a large influence in a small bounding box, the square-root penalty improves the recognition performance of the model and allows object types to be judged and identified more accurately, while the targeted training enables the GoogLeNet model to better identify the various kinds of airport ground service vehicles.
FIG. 6 shows a flow diagram of an electronic fence based out-of-range alert method in accordance with one embodiment of the present invention; as shown in fig. 6, the method includes the steps of:
Step S601, setting corresponding electronic fences at the airport for different kinds of ground service vehicles.
Preferably, a different electronic fence is set for each different kind of airport ground service vehicle, determining the driving and parking areas of each vehicle. For example, the electronic fence for a ferry vehicle comprises only the relevant areas such as the terminal building and the apron. The specific areas can be set by staff according to the actual situation.
Preferably, the electronic fence may also be provided with a passing direction, whereby the driving direction of the vehicle may be limited.
Step S602, corresponding coordinate point data of the edge of the electronic fence is obtained according to the state information of the vehicle to be detected.
Preferably, the target position information is obtained from the state information of the vehicle to be detected, the coordinate data of the electronic fence edge point closest to the target position is determined, and the distance between the target position and that edge point is calculated.
Step S603, according to the state information of the vehicle to be tested, judging whether the vehicle to be tested is located in the range of the coordinate points of the edge of the electronic fence.
Preferably, a preset distance is set, the distance range of the edge point is determined by taking the preset distance as a radius, and whether the target position of the vehicle to be detected is in the range is judged.
Step S604, judging whether the vehicle to be tested crosses the boundary or not and whether an alarm is required or not according to the position relation between the target position information of the vehicle to be tested and the electronic fence and the type of the electronic fence.
If the vehicle to be tested is in the range of the electronic fence but is not in the range of the coordinate point of the edge of the electronic fence, no alarm is needed;
If the vehicle to be tested is in the range of the electronic fence but in the range of the coordinate point of the edge of the electronic fence, warning is needed;
If the vehicle to be tested is out of the range of the electronic fence, an alarm is required.
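The three-way judgment of steps S602–S604 could be realized as below; the polygon representation of the fence, the ray-casting test and the names are assumptions, with `warn_radius` standing in for the preset distance.

```python
import math

def point_in_polygon(pt, polygon):
    """Ray-casting test: is point (x, y) inside the fence polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for k in range(n):
        (x1, y1), (x2, y2) = polygon[k], polygon[(k + 1) % n]
        if (y1 > y) != (y2 > y):             # edge straddles the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def fence_status(pt, fence_edge_points, fence_polygon, warn_radius=10.0):
    """Classify a vehicle position against its electronic fence.
    Returns 'alarm' outside the fence, 'warning' inside but within
    warn_radius of an edge coordinate point, else 'ok'."""
    if not point_in_polygon(pt, fence_polygon):
        return "alarm"
    nearest = min(math.dist(pt, p) for p in fence_edge_points)
    return "warning" if nearest <= warn_radius else "ok"
```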
According to the method in the embodiment, the electronic fence aiming at different types of ground service vehicles is arranged to prevent the vehicles from entering the area possibly causing danger, and the safety of the integral operation of the airport is effectively improved by limiting the operation area of various vehicles in the airport.
Fig. 7 is a flow chart of a method for detecting and alarming based on the appearance of a ground service vehicle according to one embodiment of the present invention, as shown in fig. 7, the method includes the steps of:
Step S701, determining vehicle type information of the vehicle to be tested according to the identification result information of the vehicle to be tested.
Step S702, determining the direction of the vehicle to be tested and the front, back, left side and right side of the vehicle to be tested according to the movement speed information of the vehicle to be tested.
Preferably, the direction of the vehicle to be detected is determined according to the direction information in the movement speed information of the vehicle to be detected, and each surface of the appearance of the vehicle to be detected in the image is further determined based on the direction information.
Step S703, comparing the video image information of the vehicle to be tested with the corresponding direction image of the corresponding vehicle type in the database, and judging whether an alarm is needed according to the comparison result.
Preferably, images of the front side, the back side, the left side and the right side of the type of vehicle are extracted from the database according to the type of the vehicle to be detected, and each surface displayed by the vehicle to be detected in the video image information is compared with the extracted images of the corresponding surface to obtain a comparison result.
Preferably, the comparison result may be a similarity. If the similarity is greater than a preset similarity threshold, no alarm is needed; if the similarity is smaller than a preset similarity threshold, an alarm is required.
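A minimal sketch of the comparison step follows. The embodiment only specifies a similarity compared against a preset threshold, so the grayscale-histogram correlation used here is an assumed metric; in practice a learned appearance descriptor could replace it.

```python
import cv2

def appearance_ok(observed_face, reference_face, threshold=0.8):
    """Compare the visible face of the detected vehicle with the stored
    reference image of the same vehicle type and orientation; returns
    False when the similarity falls below the preset threshold, i.e. when
    an alarm should be raised."""
    def hist(img):
        g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([g], [0], None, [64], [0, 256])
        return cv2.normalize(h, h).flatten()
    similarity = cv2.compareHist(hist(observed_face), hist(reference_face),
                                 cv2.HISTCMP_CORREL)
    return similarity >= threshold
```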
According to the method in the embodiment, through detecting the appearance of the vehicle, the operator is prompted that the related vehicle may have an operation fault or that the component is dropped, so that the risk that the airport overall operation may be caused is further reduced.
FIG. 8 shows a block diagram of a convolutional neural network based airport ground service vehicle state detection system, as shown in FIG. 8, in accordance with one embodiment of the present invention, comprising: model training module 801, recognition module 802, and calculation module 803; wherein,
The model training module 801 is configured to obtain image sample data sets of a plurality of types of ground service vehicles, and train GoogLeNet models according to the image sample data sets to obtain a convolutional neural network model applicable to the airport ground service vehicles.
In particular, the model training module 801 is further configured to,
Acquiring images of different types of ground service vehicles in four directions, namely the front face, the back face, the left side face and the right side face, and generating an image sample data set of the ground service vehicles;
classifying images in the image sample data set according to the photographing direction;
the image sample data in the image sample data set is marked according to the kind of the ground service vehicle.
In particular, the model training module 801 is further configured to,
Preprocessing the image sample data set; the preprocessing at least comprises adding salt-and-pepper noise with a mean value of 0 and a variance of 1 to a first preset proportion of the images in the image sample data set, and extracting a second preset proportion of the images of each ground service vehicle to enlarge or reduce them;
Based on the loss function, improving the GoogLeNet model to obtain a target GoogLeNet model;
and performing supervised training on the improved target GoogLeNet model based on the image sample dataset to obtain a convolutional neural network model suitable for airport ground service vehicles.
In particular, the model training module 801 is further configured to,
Modifying the loss function in the GoogLeNet model, wherein the modified loss function is in the form of:
$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\Big[\big(\sqrt{w_i}-\sqrt{\hat{w}_i}\big)^2+\big(\sqrt{h_i}-\sqrt{\hat{h}_i}\big)^2\Big] \\
&+\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big(C_i-\hat{C}_i\big)^2
+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\big(C_i-\hat{C}_i\big)^2 \\
&+\sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in\mathrm{classes}}\big(p_i(c)-\hat{p}_i(c)\big)^2
\end{aligned}
$$
where $\lambda_{coord}$ denotes the weight of the bounding-box coordinate prediction loss; $S^2$ denotes the number of grid cells into which the image is divided; $B$ denotes the number of bounding boxes predicted in each grid cell; $i$ denotes the $i$-th grid cell and $j$ the $j$-th bounding box in that cell; $\mathbb{1}_{ij}^{obj}$ indicates whether the $i$-th grid cell contains an object, taking the value 1 when it does and 0 when it does not; $x_i$ and $\hat{x}_i$ denote the true value and the estimate of the abscissa of the bounding-box center point; $y_i$ and $\hat{y}_i$ denote the true value and the estimate of the ordinate of the bounding-box center point; $w_i$ and $\hat{w}_i$ denote the true value and the estimate of the bounding-box width; $h_i$ and $\hat{h}_i$ denote the true value and the estimate of the bounding-box height, with $x_i$, $\hat{x}_i$, $y_i$, $\hat{y}_i$, $w_i$, $\hat{w}_i$, $h_i$, $\hat{h}_i$ all normalized; $C_i$ and $\hat{C}_i$ denote the true value and the estimate of the bounding-box confidence; $\lambda_{noobj}$ denotes the weight of the confidence prediction loss for bounding boxes that contain no object; $\mathbb{1}_{ij}^{noobj}$ indicates whether the $i$-th grid cell contains no object, taking the value 1 when it contains none and 0 when it contains an object; $c$ denotes the object class; and $p_i(c)$ and $\hat{p}_i(c)$ denote the true code value and the estimated code value of the class to which the grid cell belongs.
The recognition module 802 is configured to collect video image information of a vehicle to be detected, and input a convolutional neural network model for prediction recognition, so as to obtain recognition result information; and acquiring airport facility information and vehicle navigation information of the vehicle to be tested.
In particular, the identification module 802 is further configured to,
The method comprises the steps that video image information acquisition is carried out on a vehicle to be tested through a fixed-position camera; wherein, the fixed-position camera is fixed on airport facilities;
acquiring airport facility information, and calibrating fixed-position cameras corresponding to airport facilities according to the airport facility information; the airport facility information at least comprises at least three fixed characteristic information of the airport facility and characteristic position information corresponding to the fixed characteristic;
The identification result information at least comprises vehicle type information and vehicle two-dimensional coordinates; the vehicle navigation information includes at least time information and navigation position information.
In particular, the identification module 802 is further configured to,
Acquiring camera parameters, including at least: focal length, principal coordinate point, and pixel size information;
Constructing a projection matrix of the camera by using the camera parameters;
based on the feature position information corresponding to the fixed feature information, the projection matrix is utilized to calculate corresponding feature two-dimensional coordinates according to the feature three-dimensional coordinates of the feature position information.
The calculating module 803 is configured to calculate, based on the identification result information, airport facility information, and vehicle navigation information, to obtain vehicle state information to be measured; the state information of the vehicle to be detected at least comprises position information and movement speed information of the airport ground service vehicle.
In particular, the computing module 803 is further configured to,
Based on the characteristic two-dimensional coordinates obtained by calculation of airport facility information and the vehicle two-dimensional coordinates in the identification result information, reversely deriving projection position information corresponding to the vehicle to be detected by utilizing a projection matrix;
matching the navigation position information in the vehicle navigation information with the projection position information to obtain high-precision time and target position information;
And based on the video image information and the vehicle navigation information, fusion estimation of the motion speed information is performed by using an extended Kalman filtering method.
The airport ground service vehicle state detection system based on the convolutional neural network further comprises: an alarm module 804;
the alarm module 804 is configured to
Setting corresponding electronic fences at airports aiming at different types of ground service vehicles; acquiring corresponding coordinate point data of the edge of the electronic fence according to the state information of the vehicle to be detected; judging whether the vehicle to be tested is positioned in the range of the coordinate point of the edge of the electronic fence according to the state information of the vehicle to be tested; judging whether the vehicle to be tested crosses the boundary or not and whether an alarm is required or not according to the position relation between the target position information of the vehicle to be tested and the electronic fence and the type of the electronic fence;
Determining vehicle type information of the vehicle to be detected according to the identification result information of the vehicle to be detected; determining the direction of the vehicle to be tested and the front, back, left side and right side of the vehicle to be tested according to the movement speed information of the vehicle to be tested; and comparing the video image information of the vehicle to be tested with the corresponding direction image of the corresponding vehicle type in the database, and judging whether an alarm is required according to the comparison result.
According to the airport ground service vehicle state detection system based on the convolutional neural network, which is provided by the embodiment, image sample data sets of a plurality of types of ground service vehicles are obtained, and a GoogLeNet model is trained according to the image sample data sets, so that a convolutional neural network model suitable for the airport ground service vehicles is obtained; collecting video image information of a vehicle to be tested, inputting a convolutional neural network model for prediction and identification, and obtaining identification result information; acquiring airport facility information and vehicle navigation information of a vehicle to be tested; calculating based on the identification result information, airport facility information and vehicle navigation information to obtain vehicle state information to be detected; the state information of the vehicle to be detected at least comprises position information and movement speed information of the airport ground service vehicle. According to the airport ground service vehicle state detection system based on the convolutional neural network, the GoogLeNet model is improved and trained in a targeted manner through the acquired ground service vehicle image sample, and the convolutional neural network model applicable to the airport ground service vehicle is obtained; based on the convolutional neural network model, the type of the target vehicle can be accurately identified; meanwhile, based on the corresponding relation between the fixed camera and the airport facilities, the calibration of the camera is realized according to the fixed characteristics of the airport facilities, the projection matrix is utilized for projection and back projection, the approximate position of the vehicle is rapidly determined from the ground service vehicle image to be detected acquired by the camera, then the approximate position is matched with the information acquired by the vehicle-mounted navigation system, the accurate time and position information is accurately calculated, the vehicle running speed is further calculated, the type, the position and the running speed of the vehicle are finally obtained, so that airport staff can know the related information of the ground service vehicle more rapidly and accurately, the normal running of the guarantee work of the airport is assisted, and the efficiency and the safety of the guarantee work are greatly improved; meanwhile, by arranging electronic fences aiming at different types of ground service vehicles, vehicles are prevented from entering an area possibly causing danger, and the safety of the integral operation of the airport is effectively improved by limiting the operation area of various vehicles in the airport; by detecting the appearance of the vehicle, the operator is prompted that the related vehicle may have an operation failure or that a component drop may occur, thereby further reducing the risk that may be caused to the overall operation of the airport.
Fig. 9 shows a schematic flow diagram of an airport ground service vehicle state detection system based on a convolutional neural network according to another embodiment of the invention. As shown in fig. 9:
A vehicle picture data set is obtained and its pictures are preprocessed; the GoogLeNet model is improved, and the improved model is trained with the processed data set to obtain a convolutional neural network model suited to airport ground service vehicle recognition. Video data collected by the cameras are input into the convolutional neural network model to obtain recognition results. The fixed-position cameras are calibrated, the vehicle position is roughly estimated from the video images and the calibration information, and navigation position and speed information are extracted from the data received by the vehicle-mounted multimode navigation receiver. The recognition results, the rough vehicle position estimate, and the navigation position and speed information are then input into the central processing unit, which matches and fuses the three groups of data to finally obtain the vehicle category identification information, the vehicle motion speed estimate and the accurate vehicle position estimate.
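By way of illustration, the following sketch shows the kind of matching-and-fusion step the central processing unit performs, assuming camera-derived and navigation position fixes have already been paired by nearest timestamp. A plain linear constant-velocity Kalman filter stands in here for the extended Kalman filter named in the text, and all noise parameters are invented for the example.

```python
import numpy as np

def fuse_track(cam_obs, nav_obs, dt=0.5):
    """Toy constant-velocity Kalman filter fusing camera-derived and
    navigation-receiver position fixes into one position/velocity track.

    cam_obs / nav_obs: lists of (t, x, y) tuples, assumed pre-matched by
    nearest timestamp. Returns the final state [x, y, vx, vy]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                            # state: [x, y, vx, vy]
    Q = np.eye(4) * 0.05                              # process noise (assumed)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float) # we observe position only
    R_cam = np.eye(2) * 4.0                           # camera fix is coarser...
    R_nav = np.eye(2) * 1.0                           # ...than the nav fix
    x, P = np.zeros(4), np.eye(4) * 100.0
    for (_, cx, cy), (_, nx, ny) in zip(cam_obs, nav_obs):
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        for z, R in (((cx, cy), R_cam), ((nx, ny), R_nav)):
            y = np.array(z) - H @ x                   # innovation
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ y                             # update with each sensor
            P = (np.eye(4) - K @ H) @ P
    return x  # speed estimate = np.hypot(x[2], x[3])
```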
The invention also provides a non-volatile computer storage medium storing at least one executable instruction which, when executed, performs the convolutional neural network-based airport ground service vehicle state detection method of any of the above method embodiments.
FIG. 10 illustrates a schematic structural diagram of a computing device according to an embodiment of the invention; the specific embodiments of the invention do not limit the concrete implementation of the computing device.
As shown in fig. 10, the computing device may include: a processor 1002, a communication interface 1004, a memory 1006, and a communication bus 1008.
Wherein:
The processor 1002, communication interface 1004, and memory 1006 communicate with each other via a communication bus 1008.
Communication interface 1004 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 1002 is configured to execute the program 1010, and may specifically perform the relevant steps in the embodiment of the method for detecting a status of an airport ground service vehicle based on a convolutional neural network.
In particular, program 1010 may include program code including computer operating instructions.
The processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 1006 is used for storing a program 1010. The memory 1006 may include high-speed RAM and may further include non-volatile memory, such as at least one disk memory.
The program 1010 is specifically operable to cause the processor 1002 to perform the convolutional neural network-based airport ground service vehicle state detection method of any of the method embodiments described above. For the specific implementation of each step in the program 1010, reference may be made to the corresponding steps and unit descriptions in those method embodiments, which are not repeated here. Those skilled in the art will appreciate that, for convenience and brevity of description, the specific working procedures of the devices and modules described above may likewise refer to the corresponding process descriptions in the foregoing method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system or other apparatus; various general-purpose systems may also be used with the teachings herein, and the structure required to construct such a system is apparent from the description above. Moreover, the present invention is not directed to any particular programming language: it will be appreciated that the teachings described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. This disclosed method should not, however, be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment; the claims following the detailed description are therefore expressly incorporated into it, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in accordance with embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (7)

1. An airport ground service vehicle state detection method based on a convolutional neural network, comprising the following steps:
acquiring image sample data sets of a plurality of types of ground service vehicles, and training a GoogLeNet model according to the image sample data sets to obtain a convolutional neural network model suitable for airport ground service vehicles;
collecting video image information of a vehicle to be detected, and inputting it into the convolutional neural network model for prediction and recognition to obtain recognition result information; and acquiring airport facility information and vehicle navigation information of the vehicle to be detected; wherein,
the video image information of the vehicle to be detected is collected by a fixed-position camera, the fixed-position camera being fixed on an airport facility;
acquiring the airport facility information, and calibrating the fixed-position camera corresponding to each airport facility according to the airport facility information; wherein camera parameters are acquired, comprising at least focal length, principal point coordinates and pixel size information; a projection matrix of the camera is constructed from the camera parameters; and, based on the characteristic position information corresponding to the fixed characteristic information, the corresponding two-dimensional characteristic coordinates are calculated from the three-dimensional characteristic coordinates of the characteristic position information using the projection matrix; the airport facility information comprising at least three items of fixed characteristic information of the airport facility and the characteristic position information corresponding to each fixed characteristic;
the recognition result information comprises at least vehicle type information and two-dimensional vehicle coordinates, and the vehicle navigation information comprises at least time information and navigation position information;
calculating based on the recognition result information, the airport facility information and the vehicle navigation information to obtain the state information of the vehicle to be detected, the state information comprising at least position information and motion speed information of the airport ground service vehicle; wherein the projection position information corresponding to the vehicle to be detected is derived by back projection with the projection matrix, based on the two-dimensional characteristic coordinates calculated from the airport facility information and the two-dimensional vehicle coordinates in the recognition result information; the navigation position information in the vehicle navigation information is matched with the projection position information to obtain high-precision time and target position information; and fusion estimation of the motion speed information is performed by an extended Kalman filtering method based on the video image information and the vehicle navigation information.
2. The method of claim 1, wherein the acquiring image sample data sets for a plurality of types of ground service vehicles further comprises:
acquiring images of different types of ground service vehicles in four directions, namely the front face, the back face, the left side face and the right side face, and generating an image sample data set of the ground service vehicles;
classifying images in the image sample data set according to the photographing direction;
labeling the image sample data in the image sample data set according to the type of ground service vehicle.
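A minimal sketch of how such a sample set might be organized and indexed by type and shooting direction; the directory convention and file format are assumptions for illustration, not something the claim prescribes.

```python
from pathlib import Path

def build_sample_index(root):
    """Index an image set laid out as root/<vehicle_type>/<direction>/*.jpg,
    where <direction> is one of the four shooting directions of claim 2."""
    directions = {"front", "back", "left", "right"}
    index = []
    for img in Path(root).rglob("*.jpg"):
        vehicle_type, direction = img.parent.parent.name, img.parent.name
        if direction in directions:
            index.append({"path": str(img), "type": vehicle_type,
                          "direction": direction})
    return index
```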
3. The method of claim 1, wherein training the GoogLeNet model from the image sample dataset results in a convolutional neural network model suitable for use with airport ground service vehicles, further comprising:
preprocessing the image sample data set; the preprocessing comprising at least adding salt-and-pepper noise with a mean of 0 and a variance of 1 to a first preset proportion of the images in the image sample data set, and extracting a second preset proportion of the images of each ground service vehicle and zooming them in or out;
improving the GoogLeNet model based on the loss function to obtain a target GoogLeNet model;
and performing supervised training on the improved target GoogLeNet model based on the image sample dataset to obtain a convolutional neural network model suitable for airport ground service vehicles.
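The two preprocessing operations of claim 3 might look like the following dependency-free sketch. The corruption fraction and scale range are assumptions, since the claim fixes only the proportions of images treated (and parameterizes the salt-and-pepper noise by mean and variance, which the sketch replaces with the more usual corruption fraction).

```python
import numpy as np

def salt_and_pepper(gray, fraction=0.02, rng=None):
    """Corrupt a random fraction of pixels to pure black or white.
    The 2% fraction is an assumed stand-in for the claim's noise level."""
    rng = rng or np.random.default_rng(0)
    out = gray.copy()
    mask = rng.random(gray.shape) < fraction
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out

def random_rescale(img, rng=None, low=0.8, high=1.2):
    """Zoom in or out by a random factor via nearest-neighbour index
    resampling; the 0.8-1.2 range is an assumption for illustration."""
    rng = rng or np.random.default_rng(0)
    s = rng.uniform(low, high)
    h, w = img.shape[:2]
    rows = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
    cols = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]

# Usage on a synthetic grayscale image:
rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(224, 224), dtype=np.uint8)
noisy, scaled = salt_and_pepper(img, rng=rng), random_rescale(img, rng=rng)
```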
4. The method of claim 3, wherein the refining the GoogLeNet model based on the loss function results in a target GoogLeNet model, further comprising:
modifying the loss function in the GoogLeNet model, wherein the modified loss function has the form:

$$
\begin{aligned}
L ={} & \lambda_{coord} \sum_{i=0}^{S^2-1} \sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj} \left[ (x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2 \right] \\
 & + \lambda_{coord} \sum_{i=0}^{S^2-1} \sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj} \left[ \left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2 \right] \\
 & + \sum_{i=0}^{S^2-1} \sum_{j=0}^{B-1} \mathbb{1}_{ij}^{obj} \left(C_i-\hat{C}_i\right)^2 + \lambda_{noobj} \sum_{i=0}^{S^2-1} \sum_{j=0}^{B-1} \mathbb{1}_{ij}^{noobj} \left(C_i-\hat{C}_i\right)^2 \\
 & + \sum_{i=0}^{S^2-1} \mathbb{1}_{i}^{obj} \sum_{c \in classes} \left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$

where $\lambda_{coord}$ denotes the weight of the bounding box coordinate prediction loss; $S^2$ denotes the number of grid cells into which the image is divided; $B$ denotes the number of bounding boxes predicted in each grid cell; $i$ denotes the $i$-th grid cell and $j$ the $j$-th bounding box in that cell; $\mathbb{1}_{ij}^{obj}$ indicates whether the $j$-th bounding box of the $i$-th grid cell contains an object, taking the value 1 when it does and 0 when it does not; $x_i$ and $\hat{x}_i$ denote the true and estimated abscissa of the bounding box center point, and $y_i$ and $\hat{y}_i$ the true and estimated ordinate; $w_i$ and $\hat{w}_i$ denote the true and estimated bounding box width, and $h_i$ and $\hat{h}_i$ the true and estimated bounding box height, with $x_i$, $\hat{x}_i$, $y_i$, $\hat{y}_i$, $w_i$, $\hat{w}_i$, $h_i$, $\hat{h}_i$ all normalized; $C_i$ and $\hat{C}_i$ denote the true and estimated confidence of the bounding box; $\lambda_{noobj}$ denotes the weight of the confidence prediction loss for bounding boxes that do not contain an object; $\mathbb{1}_{ij}^{noobj}$ indicates whether the $j$-th bounding box of the $i$-th grid cell contains no object, taking the value 1 when it contains no object and 0 when it does; $c$ denotes the object class; and $p_i(c)$ and $\hat{p}_i(c)$ denote the true and estimated code values of the class to which the grid cell belongs.
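For readers who prefer code to notation, a dense-array sketch of the loss above; the array layout and the weight values $\lambda_{coord}=5.0$ and $\lambda_{noobj}=0.5$ are assumptions (the common YOLO defaults), since the claim leaves the weights unspecified.

```python
import numpy as np

def detection_loss(p_box, t_box, obj, p_cls, t_cls,
                   lam_coord=5.0, lam_noobj=0.5):
    """Evaluate the claim-4 loss on dense arrays.

    p_box, t_box: (S*S, B, 5) predicted/true boxes as [x, y, w, h, C],
    all coordinates normalized to [0, 1]; obj: (S*S, B) responsibility
    indicator; p_cls, t_cls: (S*S, C) predicted/true class code values."""
    obj = obj.astype(bool)
    # coordinate terms, only over boxes responsible for an object
    xy = ((p_box[..., :2] - t_box[..., :2]) ** 2).sum(-1)
    wh = ((np.sqrt(p_box[..., 2:4]) - np.sqrt(t_box[..., 2:4])) ** 2).sum(-1)
    coord = lam_coord * (xy[obj].sum() + wh[obj].sum())
    # confidence terms, weighted down where no object is present
    conf_sq = (p_box[..., 4] - t_box[..., 4]) ** 2
    conf = conf_sq[obj].sum() + lam_noobj * conf_sq[~obj].sum()
    # classification term, one per grid cell containing an object
    cls = ((p_cls - t_cls) ** 2).sum(-1)[obj.any(-1)].sum()
    return coord + conf + cls
```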
5. The method according to claim 1, wherein the method further comprises:
setting corresponding electronic fences at the airport for different types of ground service vehicles;
acquiring the coordinate point data of the corresponding electronic fence edge according to the state information of the vehicle to be detected;
judging, according to the state information of the vehicle to be detected, whether the vehicle is located within the range of the electronic fence edge coordinate points;
and judging, according to the positional relation between the target position information of the vehicle to be detected and the electronic fence, and according to the type of the electronic fence, whether the vehicle has crossed the boundary and whether an alarm is required.
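The boundary test of claim 5 reduces to a point-in-polygon check of the vehicle's target position against the fence edge coordinates; the sketch below uses the standard ray-casting algorithm, with fence coordinates and the alarm policy invented for illustration.

```python
def inside_fence(px, py, fence):
    """Ray-casting point-in-polygon test: is ground point (px, py) inside
    the electronic fence given as an ordered list of (x, y) edge vertices?"""
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # edge straddles the horizontal ray
            if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

# A fueling truck must stay inside its permitted zone (invented coordinates):
zone = [(0, 0), (120, 0), (120, 60), (0, 60)]
print(inside_fence(45.0, 30.0, zone))   # True  -> within bounds, no alarm
print(inside_fence(150.0, 30.0, zone))  # False -> boundary crossed, alarm
```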
6. The method according to claim 1, wherein the method further comprises:
determining vehicle type information of the vehicle to be detected according to its recognition result information;
determining, according to the motion speed information of the vehicle to be detected, the travel direction of the vehicle and hence which of its front, back, left side and right side faces the camera;
and comparing the video image information of the vehicle to be detected with the image of the corresponding direction for the corresponding vehicle type in the database, and judging from the comparison result whether an alarm is required.
7. An airport ground service vehicle state detection system based on a convolutional neural network, comprising: the system comprises a model training module, an identification module and a calculation module; wherein,
The model training module is used for acquiring image sample data sets of a plurality of types of ground service vehicles, and training a GoogLeNet model according to the image sample data sets to obtain a convolutional neural network model suitable for airport ground service vehicles;
the recognition module is used for collecting video image information of the vehicle to be detected, inputting it into the convolutional neural network model for prediction and recognition to obtain recognition result information, and acquiring airport facility information and vehicle navigation information of the vehicle to be detected; wherein,
the video image information of the vehicle to be detected is collected by a fixed-position camera, the fixed-position camera being fixed on an airport facility;
acquiring the airport facility information, and calibrating the fixed-position camera corresponding to each airport facility according to the airport facility information; wherein camera parameters are acquired, comprising at least focal length, principal point coordinates and pixel size information; a projection matrix of the camera is constructed from the camera parameters; and, based on the characteristic position information corresponding to the fixed characteristic information, the corresponding two-dimensional characteristic coordinates are calculated from the three-dimensional characteristic coordinates of the characteristic position information using the projection matrix; the airport facility information comprising at least three items of fixed characteristic information of the airport facility and the characteristic position information corresponding to each fixed characteristic;
the recognition result information comprises at least vehicle type information and two-dimensional vehicle coordinates, and the vehicle navigation information comprises at least time information and navigation position information;
The computing module is used for calculating based on the recognition result information, the airport facility information and the vehicle navigation information to obtain the state information of the vehicle to be detected, the state information comprising at least position information and motion speed information of the airport ground service vehicle; wherein the projection position information corresponding to the vehicle to be detected is derived by back projection with the projection matrix, based on the two-dimensional characteristic coordinates calculated from the airport facility information and the two-dimensional vehicle coordinates in the recognition result information; the navigation position information in the vehicle navigation information is matched with the projection position information to obtain high-precision time and target position information; and fusion estimation of the motion speed information is performed by an extended Kalman filtering method based on the video image information and the vehicle navigation information.
CN202311219844.0A 2023-09-20 2023-09-20 Airport ground service vehicle state detection method and system based on convolutional neural network Active CN117274862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311219844.0A CN117274862B (en) 2023-09-20 2023-09-20 Airport ground service vehicle state detection method and system based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311219844.0A CN117274862B (en) 2023-09-20 2023-09-20 Airport ground service vehicle state detection method and system based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN117274862A (en) 2023-12-22
CN117274862B (en) 2024-04-30

Family

ID=89200266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311219844.0A Active CN117274862B (en) 2023-09-20 2023-09-20 Airport ground service vehicle state detection method and system based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN117274862B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2012128666A * 2012-07-09 2014-01-20 Limited Liability Company "PetroFiber" METHOD AND SYSTEM FOR OBSERVING LAND MOVEMENT OF MOBILE OBJECTS WITHIN THE ESTABLISHED AERODROME ZONE
US9000952B1 (en) * 2013-06-25 2015-04-07 Rockwell Collins, Inc. Airport surface information presentation methods for the pilot including taxi information
CN109086792A (en) * 2018-06-26 2018-12-25 上海理工大学 Based on the fine granularity image classification method for detecting and identifying the network architecture
CN109413581A (en) * 2018-10-19 2019-03-01 海南易乐物联科技有限公司 A kind of vehicle over-boundary identification alarm method and system based on electronic grille fence
KR102023072B1 (en) * 2018-07-16 2019-11-20 아마노코리아 주식회사 Indoor localization system for vehicle
CN111832660A (en) * 2020-07-23 2020-10-27 南京联云智能系统有限公司 Multi-target real-time detection method for mobile ship
CN113609895A (en) * 2021-06-22 2021-11-05 上海中安电子信息科技有限公司 Road traffic information acquisition method based on improved Yolov3
CN114527480A (en) * 2021-10-28 2022-05-24 江苏集萃未来城市应用技术研究所有限公司 Precise positioning method for ground service vehicles in complex environment of airport
CN115717894A (en) * 2022-12-02 2023-02-28 大连理工大学 Vehicle high-precision positioning method based on GPS and common navigation map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of an Airport Special Vehicle Monitoring and Dispatching System; Jiao Xingyun et al.; Journal of Civil Aviation; 2023-05-31; Vol. 7, No. 3; 112-115 *

Also Published As

Publication number Publication date
CN117274862A (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN108389421B (en) Parking lot accurate induction system and method based on image re-identification
CN111507989A (en) Training generation method of semantic segmentation model, and vehicle appearance detection method and device
CN1940591A (en) System and method of target tracking using sensor fusion
CN112861631B (en) Wagon balance human body intrusion detection method based on Mask Rcnn and SSD
CN108364466A (en) A kind of statistical method of traffic flow based on unmanned plane traffic video
CN115205796B (en) Rail line foreign matter intrusion monitoring and risk early warning method and system
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN115171045A (en) YOLO-based power grid operation field violation identification method and terminal
CN114360261B (en) Vehicle reverse running identification method and device, big data analysis platform and medium
US11899750B2 (en) Quantile neural network
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
CN112699748B (en) Human-vehicle distance estimation method based on YOLO and RGB image
CN117274862B (en) Airport ground service vehicle state detection method and system based on convolutional neural network
US20230314169A1 (en) Method and apparatus for generating map data, and non-transitory computer-readable storage medium
CN116109986A (en) Vehicle track extraction method based on laser radar and video technology complementation
CN114495421B (en) Intelligent open type road construction operation monitoring and early warning method and system
CN116109047A (en) Intelligent scheduling method based on three-dimensional intelligent detection
CN113033443B (en) Unmanned aerial vehicle-based automatic pedestrian crossing facility whole road network checking method
CN114581863A (en) Vehicle dangerous state identification method and system
CN111857113B (en) Positioning method and positioning device for movable equipment
CN117590863B (en) Unmanned aerial vehicle cloud edge end cooperative control system of 5G security rescue net allies oneself with
CN114979567B (en) Object and region interaction method and system applied to video intelligent monitoring
CN117809261B (en) Unmanned aerial vehicle image processing method based on deep learning
CN112418003B (en) Work platform obstacle recognition method and system and anti-collision method and system
CN115683109B (en) Visual dynamic obstacle detection method based on CUDA and three-dimensional grid map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant