CN109063576B - Management method and device for flight action nodes - Google Patents

Management method and device for flight action nodes

Info

Publication number
CN109063576B
CN109063576B (application CN201810732611.3A)
Authority
CN
China
Prior art keywords
data
flight
image
image data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810732611.3A
Other languages
Chinese (zh)
Other versions
CN109063576A (en)
Inventor
王汉洋 (Wang Hanyang)
王弘尧 (Wang Hongyao)
刘鑫 (Liu Xin)
董硕 (Dong Shuo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Giai Intelligent Technology Co., Ltd.
Original Assignee
Beijing Giai Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Giai Intelligent Technology Co., Ltd.
Priority to CN201810732611.3A
Publication of CN109063576A
Application granted
Publication of CN109063576B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a management method and device for flight action nodes. The management method comprises: collecting image data of a preset flight action node; inputting the image data into an image recognition server; configuring a data interface between the image recognition server and a flight operation center; and sending the data of the flight action node to the flight operation center according to the data interface. The method and device solve the technical problem of poor management of flight action nodes and achieve the following technical effects: key nodes are recorded dynamically, abnormal information is captured in real time, a management application platform is built on this basis, and the information is fed back to existing application systems according to airport data exchange specifications, thereby improving flight operation efficiency, raising the safety management level, and improving industry service quality.

Description

Management method and device for flight action nodes
Technical Field
The application relates to the field of intelligent airports and image recognition, in particular to a management method and device for flight action nodes.
Background
In intelligent airport management, implementing an intelligent management application for flight action nodes provides a powerful information reference for monitoring the state of each flight and for building the intelligent airport.
The inventors have found that, in current flight action node statistics, video monitoring only serves for tracing and retrospective review after an event has occurred; real-time, dynamic processing of the video is not achieved. Furthermore, to know the occurrence time of each action node involved in the process from an aircraft's landing to its takeoff, the only option is manual recording, which consumes a large amount of labor. Meanwhile, a large number of flights take off and land at an airport every day, so the occurrence time of each action node is difficult to keep under control, and counting the times of a large number of flight action nodes is difficult to achieve by manual work alone.
For the problem of poor management of flight action nodes in the related art, no effective solution has yet been proposed.
Disclosure of Invention
The present application mainly aims to provide a management method and apparatus for flight action nodes, so as to solve the problem of poor management of flight action nodes.
To achieve the above object, according to one aspect of the present application, there is provided a management method for a flight action node.
The management method for the flight action node comprises the following steps:
collecting image data of a preset flight action node;
inputting the image data into an image recognition server;
configuring a data interface between the image recognition server and a flight operation center; and
sending the data of the flight action node to the flight operation center according to the data interface.
Further, the collecting the image data of the preset flight action node includes:
acquiring access authority of an image acquisition device at a preset position in an airport;
accessing and acquiring the image data acquired by the image acquisition device through the access authority; and
dividing the image data into video image data and picture image data.
Further, inputting the image data into an image recognition server includes:
the image recognition server receives mark data selected by a user;
the image recognition server obtains training data according to a preset image training degree input by a user; and
the image recognition server generates an image recognition model for recognizing the expected recognition target according to the training data.
Further, configuring the data interface between the image recognition server and the flight operation center comprises:
establishing a link between the image recognition server and an application docking server;
configuring a data format of a processing result in the image recognition server;
and configuring a data interface in the application docking server according to a preset flight operation center interface specification.
Further, sending the data of the flight action node to the flight operation center according to the data interface comprises any one or more of the following:
sending the flight action time generated at the flight action node to the flight operation center according to the data interface;
sending a content preference flight action record set at the flight action node to the flight operation center according to the data interface;
and sending the flight action visual content generated at the flight action node to the flight operation center according to the data interface.
Further, after sending the data of the flight action node to the flight operation center according to the data interface, the method further includes:
accessing data of the flight action node to a management application software program on a terminal through a switch;
and visually outputting the flight action node through the management application software program.
In order to achieve the above object, according to another aspect of the present application, there is provided a management apparatus for a flight action node.
The management device for the flight action node comprises:
the acquisition module is used for acquiring image data of a preset flight action node;
the input module is used for inputting the image data into an image recognition server;
the configuration module is used for configuring a data interface between the image recognition server and a flight operation center; and
the sending module is used for sending the data of the flight action node to the flight operation center according to the data interface.
Further, the acquisition module comprises:
the system comprises an acquisition unit, a storage unit and a processing unit, wherein the acquisition unit is used for acquiring the access authority of an image acquisition device at a preset position in an airport;
the access unit is used for accessing and acquiring the image data acquired by the image acquisition device through the access authority; and
a dividing unit for dividing the image data into video image data and picture image data.
Further, the input module includes:
a receiving unit, used for the image recognition server to receive the mark data selected by the user;
a training data unit, used for the image recognition server to obtain training data according to a preset image training degree input by the user; and
a generating unit, used for the image recognition server to generate an image recognition model for recognizing the expected recognition target according to the training data.
Further, the configuration module includes:
the establishing unit is used for establishing a link between the image recognition server and an application docking server;
the configuration unit is used for configuring the data format of the processing result in the image recognition server;
and the docking unit is used for configuring a data interface in the application docking server according to the preset flight operation center interface specification.
In the embodiments of the application, image data of preset flight action nodes is collected and identified by a background image recognition server, so that the flight action nodes are recognized; this achieves the technical effect of managing flight action nodes and solves the technical problem of poor management of flight action nodes.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
fig. 1 is a schematic diagram of a management method for flight action nodes according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of a management method for flight action nodes according to a second embodiment of the present application;
fig. 3 is a schematic diagram of a management method for flight action nodes according to a third embodiment of the present application;
FIG. 4 is a schematic diagram of a management method for flight action nodes according to a fourth embodiment of the present application;
fig. 5 is a schematic diagram of a management method for a flight action node according to a fifth embodiment of the present application;
fig. 6 is a schematic diagram of a management method for a flight action node according to a sixth embodiment of the present application;
fig. 7 is a schematic diagram of a management device for a flight action node according to a first embodiment of the present application;
fig. 8 is a schematic view of a management device for a flight action node according to a second embodiment of the present application;
fig. 9 is a schematic view of a management device for a flight action node according to a third embodiment of the present application;
fig. 10 is a schematic view of a management apparatus for a flight action node according to a fourth embodiment of the present application; and
fig. 11 is a schematic diagram of a management device for a flight action node according to a fifth embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the application, a management method for flight action nodes is provided.
As shown in fig. 1, the method includes steps S102 to S108 as follows:
step S102, collecting image data of a preset flight action node;
Preferably, the preset flight action nodes may include: aircraft docking at the stand, wheel chock placement, boarding bridge docking, passenger cabin door opening, cargo hold door opening, passenger disembarking, crew disembarking, crew boarding, passenger cabin door closing, cargo hold door closing, boarding bridge retraction, wheel chock removal, and aircraft pushback. The number of nodes can be adjusted and new nodes can be added according to the requirements of the airport.
The image data of the preset flight action node is acquired by acquiring video and image data through equipment such as an airport camera.
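As a non-limiting sketch of this acquisition step (the RTSP address, the sampling interval, and the use of OpenCV are illustrative assumptions, not the patented implementation), frames could be pulled from an airport camera stream roughly as follows:

import cv2

def collect_frames(stream_url, every_n_frames=25):
    """Yield one frame out of every `every_n_frames` frames read from a camera stream."""
    capture = cv2.VideoCapture(stream_url)
    index = 0
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break  # stream ended or camera unreachable
            if index % every_n_frames == 0:
                yield frame  # this frame is handed on to the image recognition server
            index += 1
    finally:
        capture.release()

# Example with a hypothetical camera address:
# for frame in collect_frames("rtsp://airport-cam-12/stream"): ...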
Step S104, inputting the image data into an image recognition server;
preferably, the image recognition server is used for recognizing the flight action node from the image data.
The inputting of the image data into the image recognition server may be uploading the video and the image data obtained by the camera or other devices to the cloud image recognition server or the local image recognition server.
Step S106, configuring a data interface between the image recognition server and a flight operation center; and
preferably, the data interface may be an API data interface, a JSON data interface, or the like. The scope of protection is not limited by the type of data interface.
Configuring a data interface between the image recognition server and the flight operation center means that the image recognition server and the flight operation center are connected through a data interface.
Step S108, sending the data of the flight action node to the flight operation center according to the data interface.
Preferably, sending the data of the flight action node to the flight operation center according to the data interface may be transmitting the flight action node identified by the image recognition server to the flight operation center through the data interface.
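For illustration only, a record pushed through such a data interface might look like the following Python sketch; the endpoint URL and the field names are assumptions, not the interface specification of any particular operation center.

import requests

def send_action_node(endpoint, record):
    """POST one flight action node record to the flight operation center interface."""
    response = requests.post(endpoint, json=record, timeout=5)
    response.raise_for_status()

send_action_node(
    "https://ops-center.example/api/flight-action-nodes",  # hypothetical endpoint
    {
        "flight_no": "CA1234",               # hypothetical flight number
        "node": "passenger_boarding",        # one of the preset flight action nodes
        "start_time": "2018-07-05T05:00:00",
        "end_time": "2018-07-05T05:15:00",
        "source_camera": "gate_12_cam_03",   # hypothetical camera identifier
    },
)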
As shown in fig. 2, the acquiring of the image data of the preset flight action node includes steps S202 to S206 as follows:
step S202, obtaining the access authority of the image acquisition device at a preset position in an airport;
preferably, the image acquisition device may be an airport camera, a sensor, or the like.
Obtaining the access right of the image acquisition device at the preset position in the airport may mean being authorized to acquire, through a data interface, the image data or video data collected by the image acquisition device.
Step S204, accessing and acquiring the image data acquired by the image acquisition device through the access authority; and
Preferably, accessing and acquiring the image data collected by the image acquisition device through the access right may be acquiring, through a data interface, the image data or video data collected by the image acquisition device.
Step S206, dividing the image data into video image data and picture image data.
Preferably, the server classifies the image data, which may be video image data or picture image data.
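A minimal sketch of this division step, assuming the classification is done by file suffix (the suffix lists are illustrative assumptions, not part of the patent):

from pathlib import Path

VIDEO_SUFFIXES = {".mp4", ".avi", ".ts"}
PICTURE_SUFFIXES = {".jpg", ".jpeg", ".png", ".bmp"}

def divide_image_data(paths):
    """Split collected files into video image data and picture image data."""
    video, pictures = [], []
    for path in map(Path, paths):
        if path.suffix.lower() in VIDEO_SUFFIXES:
            video.append(path)
        elif path.suffix.lower() in PICTURE_SUFFIXES:
            pictures.append(path)
    return video, pictures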
As shown in fig. 3, inputting the image data into the image recognition server includes steps S302 to S306 as follows:
step S302, the image recognition server receives mark data selected by a user;
Preferably, the mark data may be a picture image tag of the picture image data, or a video image tag of the video image data.
Receiving the mark data selected by the user may be receiving the data type, file, tag, and the like that the user selects in the marking system.
Step S304, the image recognition server obtains training data according to the preset image training degree input by the user; and
Preferably, the preset image training degree may be the user's selection of an identity according to his or her own skill level, such as "beginner" or "expert".
Obtaining training data according to the preset image training degree input by the user may proceed as follows: the user selects an identity (beginner or expert) according to his or her own level, and the related information passes through a database server; if the user selects the "beginner" identity, the system recommends a network for the user; a marking project is added and marking begins; and training is started.
Obtaining training data according to the preset image training degree input by the user may also proceed as follows: if the user selects the "expert" identity, the system recommends a network for the user; a marking project is added and marking begins; the user selects a network and sets the parameters; and training is started.
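As a minimal sketch of the beginner/expert branching described above (the model names and default parameters are placeholders, not values taken from the patent):

from typing import Optional

def configure_training(identity: str, user_params: Optional[dict] = None) -> dict:
    """Return a training configuration based on the identity selected by the user."""
    if identity == "beginner":
        # The system recommends a network and default parameters for the user.
        return {"network": "recommended_baseline_cnn",
                "params": {"epochs": 20, "learning_rate": 1e-3}}
    if identity == "expert":
        # The expert chooses the network and sets the training parameters manually.
        params = user_params or {}
        return {"network": params.get("network", "user_selected_network"),
                "params": params.get("params", {})}
    raise ValueError("unknown identity: " + identity)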
Step S306, the image recognition server generates an image recognition model for recognizing the expected recognition target according to the training data.
Preferably, the expected recognition target may be picture image data to be recognized provided by the user, or video image data to be recognized provided by the user.
Generating an image recognition model for recognizing the expected recognition target according to the training data may include: the system recommends a deployment mode for the selected network; it is determined whether the model needs to be optimized, and another deployment mode is selected if necessary; and the deployment is completed.
Specifically, in step S302, the image recognition server receives the mark data selected by the user, and the mark data may be of a data type such as picture image data or video image data. Receiving the data type selected by the user may be receiving picture image data selected by the user or receiving video image data selected by the user. If the data type is picture image data, a picture image tag selected by the user for a target object in the picture image data is received; for example, the picture image tag may be "passenger" or the like. The position mark of the image tag is then determined by the user. Preferably, user-selectable position tags are provided in the system, such as the top right, top left, middle, bottom right, and bottom left of the picture. Determining the position mark of the image tag by the user may be determining the position mark according to the position tag selected by the user.
Specifically, in step S302, the image recognition server receiving the mark data selected by the user may be receiving the data type selected by the user; preferably, the data type may be video image data or picture image data, and receiving the data type selected by the user may be receiving video image data or picture image data selected by the user. If the data type is video image data, a video image tag selected by the user for an action frame in the video image data is received; preferably, the video image tag may describe an action occurring between action frames, for example a passenger boarding action that occurs between frame 5 and frame 15. If the data type is video image data, receiving the video image tag selected by the user for an action frame in the video image data may be receiving the user's selection of a plurality of video image tags in the video image data. The frame segment action length of the video image tag is then determined by the user; preferably, the frame segment action length may be the duration of a certain action. For example, determining the frame segment action length of the video image tag by the user may be determining that the passenger boarding action occurs between frame 5 and frame 15, or determining any other action that can be marked in the video.
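For illustration, a video image tag with its frame segment could be represented as in the following sketch; the class and field names, and the assumed frame rate, are not taken from the patent.

from dataclasses import dataclass

@dataclass
class VideoImageTag:
    action: str        # e.g. "passenger_boarding"
    start_frame: int   # first frame in which the action appears
    end_frame: int     # last frame in which the action appears
    fps: float = 25.0  # assumed camera frame rate, used to derive the duration

    @property
    def duration_seconds(self) -> float:
        """Frame segment action length expressed in seconds."""
        return (self.end_frame - self.start_frame) / self.fps

# The example from the text: the boarding action occurs between frame 5 and frame 15.
tag = VideoImageTag(action="passenger_boarding", start_frame=5, end_frame=15)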
Specifically, in step S304, the image recognition server obtaining the training data according to the preset image training degree input by the user may include obtaining a first user processing identity according to the preset image training degree input by the user. Preferably, the first user enters the training system, logs in or registers and provides user information, and the related information passes through the database server; the user selects an identity (beginner or expert) according to his or her own level, and the related information passes through the database server; if the user selects the "beginner" identity, the system recommends a network for the user, and if the user selects the "expert" identity, the system likewise recommends a network. A network model is recommended to the first user according to the first user processing identity; preferably, the network model is recommended to the first user based on the identity tag selected by the first user. For example, if the first user selects the "beginner" identity, a simpler network model is recommended; if the first user selects the "expert" identity, a more complex network model is recommended. First mark data selected by the first user is then imported; for example, the first mark data may be a picture image tag of the picture image data or a video image tag of the video image data, and importing the first mark data selected by the first user may be importing the data type, file, label, and the like selected by the first user in the marking system. Finally, the deployment mode of the network model is determined according to the network model and the first mark data; preferably, the network model deployment mode may be to input the imported first mark data into the recommended network model.
Specifically, in step S304, the image recognition server obtaining the training data according to the preset image training degree input by the user may also include obtaining a second user processing identity according to the preset image training degree input by the user. Preferably, the second user enters the training system, logs in or registers and provides user information, and the related information passes through the database server; the user selects an identity (beginner or expert) according to his or her own level, and the related information passes through the database server; if the user selects the "beginner" identity, the system recommends a network for the user, and if the user selects the "expert" identity, the system likewise recommends a network. A training data interface is opened to the second user according to the second user processing identity; preferably, opening the training data interface to the second user may be providing the second user with an interface for starting to train a model. Second mark data selected by the second user is triggered according to the data marking operation of the second user; preferably, the second mark data is determined according to the data already selected by the second user, and the corresponding data tag is invoked. The selected network model and the training parameters are then input through the training data interface, and the deployment mode of the network model is determined from the network model, the training parameters, and the second mark data. Preferably, the training and deployment modes of the network model are determined according to the recommended network model, the selected training parameters, and the provided tag data; for example, the network model deployment mode may determine how many layers the model has, or may be a deployment mode such as Bridge Driver or Overlay Driver.
Specifically, in step S306, the image recognition server generating an image recognition model for recognizing the expected recognition target according to the training data may include receiving the login information of the user; preferably, the login information may be an account and a password, and the member identity and historical processing data of the user may be determined according to the account, password, and other information input by the user. The expected recognition target set after the user logs in is determined and the mark data is imported; preferably, the expected recognition target may be a picture image recognition target or a video image recognition target, and this step may be determining the user's recognition target and importing the mark data selected by the user or historical mark data. A data generation operation instruction of the user is then received; preferably, after the user logs in, selects the expected recognition target, and imports the mark data, a generation operation control button is provided, and the background receives the operation instruction after the user clicks the control. Finally, according to the data generation operation instruction, the image recognition model is trained and generated from the expected recognition target and the mark data; preferably, the image recognition model is generated in the background according to the information acquired in the above steps, and a recognition result is provided for the user through recognition.
As shown in fig. 4, configuring the data interface between the image recognition server and the flight operation center includes the following steps S402 to S406:
step S402, establishing a link between the image recognition server and an application docking server;
step S404, configuring the data format of the processing result in the image recognition server;
Preferably, the data of the processing result of the image recognition server is organized into a unified format.
For example, the data sent by the server is acquired through Ajax, the required data is extracted from the received data, and the JSON-formatted string is converted into an object.
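By way of analogy only, the following Python sketch shows the same idea of normalizing the recognition result into an agreed data format; the field names of both the raw result and the target format are illustrative assumptions.

import json

def format_result(raw_json):
    """Normalize the recognition server's JSON output into the agreed data format."""
    raw = json.loads(raw_json)  # JSON string -> Python object
    return {
        "flight_no": raw.get("flight"),
        "node": raw.get("label"),
        "timestamp": raw.get("time"),
        "confidence": raw.get("score"),
    }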
Step S406, configuring a data interface in the application docking server according to a preset flight operation center interface specification.
As shown in fig. 5, sending the data of the flight action node to the flight operation center according to the data interface includes any one or more of the following:
sending the flight action time generated at the flight action node to the flight operation center according to the data interface;
Preferably, the flight action time may be the specific time at which each flight action node occurs.
For example, the passenger boarding time is 5:00 to 5:15.
sending a content preference flight action record set at the flight action node to the flight operation center according to the data interface;
Preferably, the content preference flight action record may specify which flight action nodes the recognition server is set to record.
For example, one or more of: aircraft docking, wheel chock placement, boarding bridge docking, passenger cabin door opening, cargo hold door opening, passenger disembarking, crew boarding, passenger cabin door closing, cargo hold door closing, boarding bridge retraction, wheel chock removal, and aircraft pushback.
And sending the flight action visual content generated at the flight action node to the flight operation center according to the data interface.
Preferably, the flight action visual content may be a frame action segment in the picture image data or video image data of the flight action node collected by the camera.
For example, a passenger boarding video clip.
As shown in fig. 6, after the data of the flight action node is sent to the flight operation center according to the data interface, the following steps S602 to S604 are further included:
step S602, the data of the flight action node is accessed into a management application software program on the terminal through a switch;
preferably, the switch may be an ethernet switch, a fiber switch, or the like.
The management application software program on the terminal may be management software used in the flight operation center.
Step S604, visually outputting the flight action node through the management application software program.
Preferably, the flight action node is visually output through the management application software program, and the visual output may be picture data or video data of the flight action node.
From the above description, it can be seen that the present application achieves the following technical effects: the application aims to make full use of existing video surveillance resources and to apply the currently discrete, passive, and static video surveillance resources to actual operation support, daily service, and safety management processes. The camera is turned into a sensor that dynamically records key nodes and captures abnormal information in real time; a management application platform is built on this basis, and the information is fed back to existing application systems according to airport data exchange specifications, thereby improving flight operation efficiency, raising the safety management level, and improving industry service quality. This lays a good foundation for intelligent airport construction and the implementation of future airport planning.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided a management apparatus for implementing the above-mentioned flight action node management method, as shown in fig. 7, the apparatus includes:
the acquisition module 10 is used for acquiring image data of a preset flight action node;
an input module 20, configured to input the image data into an image recognition server;
a configuration module 30, configured to configure a data interface between the image recognition server and a flight operation center; and
a sending module 40, configured to send the data of the flight action node to the flight operation center according to the data interface.
As shown in fig. 8, the acquisition module 10 includes:
an obtaining unit 101, configured to obtain an access right of an image capturing device at a preset position in an airport;
the access unit 102 is configured to access and acquire image data acquired by the image acquisition device through the access right; and
a dividing unit 103 configured to divide the image data into video image data and picture image data.
As shown in fig. 9, the input module 20 includes:
a receiving unit 201, configured for the image recognition server to receive the mark data selected by the user;
a training data unit 202, configured for the image recognition server to obtain training data according to a preset image training degree input by the user; and
a generating unit 203, configured to generate, by the image recognition server, an image recognition model for recognizing the expected recognition target according to the training data.
As shown in fig. 10, the configuration module 30 includes:
an establishing unit 301, configured to establish a link between the image recognition server and an application docking server;
a configuration unit 302, configured to configure a data format of a processing result in the image recognition server;
the docking unit 303 is configured to configure a data interface in the application docking server according to a preset flight operation center interface specification.
As shown in fig. 11, the work flow of the management apparatus for the flight action node is as follows:
video and image data are acquired through airport cameras; the acquired data are transmitted to a video analysis server on which the intelligent management application software for flight action nodes is installed, and data analysis and processing are carried out in the video analysis server; the results of the analysis and processing are transmitted to the application docking server of the airport flight operation center through an interface, where the interface follows the technical specification of the operation center interface; and the software has a report generation function, so that the occurrence time of each action node and other statistics and results of interest to the airport can easily be visualized on various devices.
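A minimal sketch tying the steps of this work flow together; recognize() is only a stand-in for the video analysis server, and the other helpers echo the earlier illustrative sketches rather than the actual software.

from typing import Optional

def recognize(frame) -> Optional[str]:
    """Placeholder for the image recognition model; returns a JSON string or None."""
    return None  # a real model would return e.g. '{"flight": "CA1234", "label": "passenger_boarding", ...}'

def run_pipeline(stream_url, endpoint):
    for frame in collect_frames(stream_url):       # acquire video from the airport camera
        raw = recognize(frame)                     # analyze in the video analysis server
        if raw is not None:
            record = format_result(raw)            # normalize to the agreed data format
            send_action_node(endpoint, record)     # push to the operation center docking server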
The invention mainly has the following technical effects: 1) the software runs offline, providing a higher safety guarantee for customers; 2) careful model design reduces the customer's manual processing of massive data; 3) leading algorithms offer more application possibilities; 4) the report generation function makes result visualization easy.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (7)

1. A method for managing flight action nodes, comprising:
collecting image data of a preset flight action node;
inputting the image data into an image recognition server;
configuring a data interface between the image recognition server and a flight operation center; and
sending the data of the flight action node to the flight operation center according to the data interface;
inputting the image data into an image recognition server includes: the image recognition server receives mark data selected by a user, wherein the mark data is a picture image tag of picture image data or a video image tag of video image data, and the mark data selected by the user is a data type, a file, a label, and the like selected by the user in a marking system; if the data type is video image data, a video image tag selected by the user for an action frame in the video image data is received, wherein the selected video image tag is the user's selection of a plurality of video image tags in the video image data; and a frame segment action length of the video image tag is determined by the user, wherein the frame segment action length can be the duration of a certain action;
the image recognition server obtains training data according to a preset image training degree input by a user; and
the image recognition server generates an image recognition model for recognizing an expected recognition target according to the training data;
wherein the sending of the data of the flight action node to the flight operation center according to the data interface comprises any one or more of the following:
sending the flight action time generated at the flight action node to the flight operation center according to the data interface;
sending a content preference flight action record set at the flight action node to the flight operation center according to the data interface;
and sending flight action visual content generated at the flight action node to the flight operation center according to the data interface, wherein the flight action visual content is a frame action segment in picture image data or video image data of the flight action node acquired by a camera.
2. The management method according to claim 1, wherein collecting image data of the preset flight action node comprises:
acquiring access authority of an image acquisition device at a preset position in an airport;
accessing and acquiring the image data acquired by the image acquisition device through the access authority; and
the image data is divided into video image data and picture image data.
3. The management method according to claim 1, wherein configuring the data interface between the image recognition server and the flight operation center comprises:
establishing a link between the image recognition server and an application docking server;
configuring a data format of a processing result in the image recognition server;
and configuring a data interface in the application docking server according to a preset flight operation center interface specification.
4. The management method according to claim 1, wherein, after sending the data of the flight action node to the flight operation center according to the data interface, the method further comprises:
accessing data of the flight action node to a management application software program on a terminal through a switch;
and visually outputting the flight action node through the management application software program.
5. A management apparatus for a flight action node, comprising:
the acquisition module is used for acquiring image data of a preset flight action node;
the input module is used for inputting the image data into an image recognition server;
the configuration module is used for configuring a data interface between the image recognition server and a flight operation center; and
the sending module is used for sending the data of the flight action node to the flight operation center according to the data interface;
wherein the input module inputting the image data into the image recognition server comprises: the image recognition server receives mark data selected by a user, wherein the mark data is a picture image tag of picture image data or a video image tag of video image data, and the mark data selected by the user is a data type, a file, a label, and the like selected by the user in a marking system; if the data type is video image data, a video image tag selected by the user for an action frame in the video image data is received, wherein the selected video image tag is the user's selection of a plurality of video image tags in the video image data; and a frame segment action length of the video image tag is determined by the user, wherein the frame segment action length can be the duration of a certain action;
the image recognition server obtains training data according to a preset image training degree input by a user; and
the image recognition server generates an image recognition model for recognizing an expected recognition target according to the training data;
wherein the sending module sending the data of the flight action node to the flight operation center according to the data interface comprises any one or more of the following:
sending the flight action time generated at the flight action node to the flight operation center according to the data interface;
sending a content preference flight action record set at the flight action node to the flight operation center according to the data interface;
and sending flight action visual content generated at the flight action node to the flight operation center according to the data interface, wherein the flight action visual content is a frame action segment in picture image data or video image data of the flight action node acquired by a camera.
6. The management device according to claim 5, wherein the acquisition module comprises:
the acquisition unit is used for acquiring the access authority of an image acquisition device at a preset position in an airport;
the access unit is used for accessing and acquiring the image data acquired by the image acquisition device through the access authority; and
a dividing unit for dividing the image data into video image data and picture image data.
7. The management device of claim 5, wherein the configuration module comprises:
the establishing unit is used for establishing a link between the image recognition server and an application docking server;
the configuration unit is used for configuring the data format of the processing result in the image recognition server;
and the docking unit is used for configuring a data interface in the application docking server according to the preset flight operation center interface specification.
CN201810732611.3A 2018-07-05 2018-07-05 Management method and device for flight action nodes Active CN109063576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810732611.3A CN109063576B (en) 2018-07-05 2018-07-05 Management method and device for flight action nodes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810732611.3A CN109063576B (en) 2018-07-05 2018-07-05 Management method and device for flight action nodes

Publications (2)

Publication Number Publication Date
CN109063576A CN109063576A (en) 2018-12-21
CN109063576B true CN109063576B (en) 2021-12-17

Family

ID=64819492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810732611.3A Active CN109063576B (en) 2018-07-05 2018-07-05 Management method and device for flight action nodes

Country Status (1)

Country Link
CN (1) CN109063576B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887343B (en) * 2019-04-04 2020-08-25 中国民航科学技术研究院 Automatic acquisition and monitoring system and method for flight ground service support nodes
CN110097659A (en) * 2019-05-16 2019-08-06 深圳市捷赛机电有限公司 Method for recording the times of placing and removing wheel chocks on an aircraft, and related product
CN110379209B (en) * 2019-07-22 2021-11-09 捻果科技(深圳)有限公司 Flight operation flow node specification monitoring and alarming method
CN113096108A (en) * 2021-04-21 2021-07-09 安徽声讯信息技术有限公司 Accurate docking method for interface test
CN115826464B (en) * 2022-11-29 2024-03-22 航科院中宇(北京)新技术发展有限公司 Acquisition method of remote site node acquisition system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426331A (en) * 2013-08-22 2013-12-04 南京莱斯信息技术股份有限公司 Multi-airport collaborative delivery system flight sorting and decision making method
CN103679341A (en) * 2013-11-18 2014-03-26 南京航空航天大学 Flight ground operation support efficiency evaluation method
CN106067028A (en) * 2015-04-19 2016-11-02 北京典赞科技有限公司 The modeling method of automatic machinery based on GPU study

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426331A (en) * 2013-08-22 2013-12-04 南京莱斯信息技术股份有限公司 Multi-airport collaborative delivery system flight sorting and decision making method
CN103679341A (en) * 2013-11-18 2014-03-26 南京航空航天大学 Flight ground operation support efficiency evaluation method
CN106067028A (en) * 2015-04-19 2016-11-02 北京典赞科技有限公司 The modeling method of automatic machinery based on GPU study

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Thoughts on innovation methods in China's civil aviation and airport technology innovation practice; Zheng Hongfeng; Civil Aviation Management (《民航管理》); 2018-01-31; p. 34 *
Exploration and reflections on the construction of the A-CDM system; Huang Jian; Civil Aviation Management (《民航管理》); 2017-08-31; p. 19 *
Huang Jian. Exploration and reflections on the construction of the A-CDM system. Civil Aviation Management (《民航管理》). 2017, p. 19. *

Also Published As

Publication number Publication date
CN109063576A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109063576B (en) Management method and device for flight action nodes
US10387976B2 (en) Federated system for centralized management and distribution of content media
EP4121916B1 (en) Available vehicle parking space dispatch
CN103824340B (en) Unmanned plane power transmission line intelligent cruising inspection system and method for inspecting
US7212113B2 (en) Passenger and item tracking with system alerts
CN111612933A (en) Augmented reality intelligent inspection system based on edge cloud server
CN105004934B (en) A kind of electromagnetic radiation monitoring system
CN108510183A (en) A kind of Power Line Inspection System based on unmanned plane
US11861911B2 (en) Video analytics platform for real-time monitoring and assessment of airplane safety processes
US20200233650A1 (en) Systems and methods for collecting, monitoring, and analyzing vehicle data from a plurality of vehicles using edge computing
US8615418B1 (en) System and method for managing transportation transactions
Shao et al. Estimating taxi demand-supply level using taxi trajectory data stream
CN110443521A (en) Flight operation risk acquisition methods, system and computer equipment
US9959334B1 (en) Live drone observation data recording
CN110390232A (en) Confirm method, apparatus, server and the system of irregular driving
US20220327652A1 (en) Multi-modal mobility management solutions framework
CN105260426A (en) Big data based airplane comprehensive health management system and method
CN115695541A (en) Method, device and equipment for monitoring dot polling based on edge calculation and storage medium
US20190279439A1 (en) Facilitating capturing aircraft flight segment
CN108347698A (en) A kind of on-line off-line event trace analysis method, apparatus and system
Anwar et al. Busviz: Big data for bus fleets
Classen et al. Modern airport management–fostering individual door-to-door travel
CN114841712B (en) Method and device for determining illegal operation state of network appointment vehicle tour and electronic equipment
CN115098564A (en) Passenger travel demand analysis method and system
US10614392B1 (en) Graphical flight dashboard display and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant