CN113177459A - Intelligent video analysis method and system for intelligent airport service

Info

Publication number
CN113177459A
Authority
CN
China
Prior art keywords
information
area
obtaining
video
service
Prior art date
Legal status
Pending
Application number
CN202110446349.8A
Other languages
Chinese (zh)
Inventor
赵伟
吴剑清
李飞
陈凯华
石耿修
Current Assignee
Yunsai Zhilian Co ltd
Original Assignee
Yunsai Zhilian Co ltd
Priority date
Filing date
Publication date
Application filed by Yunsai Zhilian Co ltd filed Critical Yunsai Zhilian Co ltd
Priority to CN202110446349.8A
Publication of CN113177459A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an intelligent video analysis method and system for intelligent airport service. First area information is obtained according to service area division information; a first area service type is obtained according to the first area information; first area video information is obtained; image extraction characteristic information is obtained according to the first area service type; feature extraction is performed on the first area video information according to the image extraction characteristic information to obtain first area image characteristics; a first area service analysis standard is obtained according to the first area service type and the image extraction characteristic information; the first area image characteristics and the first area service analysis standard are input into a first training model to obtain a first area analysis result; and a first area service instruction is obtained according to the first area analysis result. The method and system solve the technical problems that the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range.

Description

Intelligent video analysis method and system for intelligent airport service
Technical Field
The invention relates to the technical field of data processing, in particular to a video intelligent analysis method and system for intelligent airport service.
Background
With the rapid development of the civil aviation industry, the traditional airport operation mode is gradually unable to meet the requirements of economic growth, and airport construction, as the supporting infrastructure of the civil aviation industry, is entering a stage of rapid development. At present, efforts have begun to gradually explore and establish more reasonable airport operation modes, and on that basis the questions of how airports transition from their basic operation mode and how they develop into intelligent airports have been raised. With the development of information technology and the internet, airports, as the most direct and convenient mode of transportation between cities, have developed greatly, and airport safety and security work faces new situations and challenges. In the field of civil aviation, safety and efficiency have always been the focus of attention of all parties. How to build an intelligent security system and provide a convenient, quick and comfortable experience for passengers has become a new direction for the intelligent construction of large airports in China.
However, in the process of implementing the technical solution of the invention in the embodiments of the present application, the inventors of the present application found that the above-mentioned technology has at least the following technical problem:
the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range.
Disclosure of Invention
The embodiment of the application provides an intelligent video analysis method and system for intelligent airport service, and solves the technical problems that the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range.
In view of the foregoing problems, the present application provides a method and a system for intelligently analyzing videos of an intelligent airport service.
In a first aspect, an embodiment of the present application provides a method for intelligently analyzing videos of an intelligent airport service, where the method includes: obtaining service area division information; obtaining first area information according to the service area division information; obtaining a first area service type according to the first area information; obtaining first area video information; acquiring image extraction characteristic information according to the first regional service type; extracting the characteristics of the first area video information according to the image extraction characteristic information to obtain first area image characteristics; obtaining a first regional service analysis standard according to the first regional service type and the image extraction feature information; inputting the first area image characteristics and the first area service analysis standard into a first training model to obtain a first area analysis result; and obtaining a first area service instruction according to the first area analysis result.
On the other hand, this application still provides a video intelligence analytic system of wisdom airport service, the system includes:
a first obtaining unit configured to obtain service area division information;
a second obtaining unit, configured to obtain first area information according to the service area division information;
a third obtaining unit, configured to obtain a first regional service type according to the first regional information;
a fourth obtaining unit configured to obtain first region video information;
a fifth obtaining unit, configured to obtain image extraction feature information according to the first regional service type;
the first extraction unit is used for extracting the characteristics of the first area video information according to the image extraction characteristic information to obtain first area image characteristics;
a sixth obtaining unit, configured to obtain a first regional service analysis standard according to the first regional service type and the image extraction feature information;
the first execution unit is used for inputting the first area image characteristics and the first area service analysis standard into a first training model to obtain a first area analysis result;
a seventh obtaining unit, configured to obtain a first area service instruction according to the first area analysis result.
In a third aspect, the present invention provides a video intelligent analysis system for intelligent airport service, comprising a memory, a processor and a computer program stored on the memory and operable on the processor, wherein the processor implements the steps of the method of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the embodiment of the application provides a video intelligent analysis method and a video intelligent analysis system for intelligent airport service, which are used for obtaining service area division information; obtaining first area information according to the service area division information; obtaining a first area service type according to the first area information; obtaining first area video information; acquiring image extraction characteristic information according to the first regional service type; extracting the characteristics of the first area video information according to the image extraction characteristic information to obtain first area image characteristics; obtaining a first regional service analysis standard according to the first regional service type and the image extraction feature information; inputting the first area image characteristics and the first area service analysis standard into a first training model to obtain a first area analysis result; and obtaining a first area service instruction according to the first area analysis result. The first area service instruction is to perform a targeted operation according to the first area analysis result and the first area service type. The method has the advantages that the new generation information technology of video analysis is fully utilized, the behavior analysis and scene understanding of the target in the dynamic scene can be completed, the capability and the efficiency of a video monitoring system are improved, the high-level description of the monitoring scene can be carried out, the monitoring scene can be analyzed in real time, the safety management level of an airport is improved, the safe airport and flight environment are created, the service content of an intelligent airport is enriched, and the service range is enlarged. Therefore, the technical problems that the airport video monitoring system in the prior art is mainly used for security monitoring, has weak activity, cannot perform target service monitoring analysis of a dynamic scene and is limited in service range are solved.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented according to the content of the description in order to make the technical means of the present application more clearly understood, and the following detailed description of the present application is given in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
Fig. 1 is a schematic flowchart illustrating a video intelligent analysis method for intelligent airport service according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a video intelligent analysis system of an intelligent airport service according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a second obtaining unit 12, a first constructing unit 13, a third obtaining unit 14, a first acquiring unit 15, a fourth obtaining unit 16, a second acquiring unit 17, a first executing unit 18, a second executing unit 19, a fifth obtaining unit 20, a first executing unit 21, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The embodiment of the application provides an intelligent video analysis method and system for intelligent airport service, and solves the technical problems that the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range. By making full use of video analysis as a new generation of information technology, behavior analysis and scene understanding of targets in dynamic scenes can be completed, the capability and efficiency of the video monitoring system are improved, high-level descriptions of the monitored scene can be produced and the scene analyzed in real time, the safety management level of the airport is improved, a safe airport and flight environment is created, the service content of the intelligent airport is enriched, and the service range is expanded. Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some embodiments of the present application and not all of them, and it should be understood that the present application is not limited to the example embodiments described herein.
Summary of the application
With the development of information technology and the internet, airports, as the most direct and convenient mode of transportation between cities, have developed greatly, and airport safety and security work faces new situations and challenges. In the field of civil aviation, safety and efficiency have always been the focus of attention of all parties. How to build an intelligent security system and provide a convenient, quick and comfortable experience for passengers has become a new direction for the intelligent construction of large airports in China. However, the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
Obtaining service area division information; obtaining first area information according to the service area division information; obtaining a first area service type according to the first area information; obtaining first area video information; obtaining image extraction characteristic information according to the first area service type; performing feature extraction on the first area video information according to the image extraction characteristic information to obtain first area image characteristics; obtaining a first area service analysis standard according to the first area service type and the image extraction characteristic information; inputting the first area image characteristics and the first area service analysis standard into a first training model to obtain a first area analysis result; and obtaining a first area service instruction according to the first area analysis result. By making full use of video analysis as a new generation of information technology, behavior analysis and scene understanding of targets in dynamic scenes can be completed, the capability and efficiency of the video monitoring system are improved, high-level descriptions of the monitored scene can be produced and the scene analyzed in real time, the safety management level of the airport is improved, a safe airport and flight environment is created, the service content of the intelligent airport is enriched, and the service range is expanded.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a method for intelligently analyzing videos of an intelligent airport service, where the method includes:
step S100: obtaining service area division information;
specifically, the airport is divided into service areas according to factors in the aspects of the structure, the function, the partition, the camera monitoring range and the like of the airport, each service area corresponds to different service requirements and service contents, for example, the airport is divided into a waiting area, an airport parking area, a registration passage area, a boarding gate queuing area, a security inspection area, a customer rest area, a mother and infant service area and the like, and different areas correspond to different service groups, service requirements, security inspection requirements and the like.
Step S200: obtaining first area information according to the service area division information;
specifically, the first area information is one area in the airport service area division, and for any one area, there may be a difference in the corresponding analysis event, analysis object and target according to the area difference, so the corresponding service type is determined by each different area information.
Step S300: obtaining a first area service type according to the first area information;
step S400: obtaining first area video information;
specifically, when the area information is determined, different service contents are corresponding to different areas, the first area service type is a description of the service contents corresponding to the area, the first area service type may be a single type or a plurality of types, and when the first area service type is a plurality of types, each service type is analyzed in sequence, or corresponding processing may be performed on the corresponding service contents. And acquiring the video information in the first area through the monitoring camera equipment, wherein the first area video information is the video information of the first area acquired by the monitoring camera equipment.
Step S500: acquiring image extraction characteristic information according to the first regional service type;
specifically, different image extraction characteristic information is corresponding to a first area service type, the image extraction characteristic is an expression form of processing data corresponding to the service type, for example, the first area is a boarding gate, the service type of the boarding gate comprises the contents of people counting, face recognition, burst state detection and the like, people who queue at the boarding gate need to be marked and counted for the people counting service type, and the image extraction characteristic information is the mark of people who queue and the counting of the people; and (3) identifying the service type of the human face, wherein the required image extraction characteristic information is a human face image. Different service requirements correspond to different image analysis objects.
Step S600: extracting the characteristics of the first area video information according to the image extraction characteristic information to obtain first area image characteristics;
specifically, feature extraction is performed on image information in the video information of the first region according to image extraction feature information to obtain image features corresponding to the first region and meeting service requirements, the image features are first region image features, such as a face recognition service type, the used image extraction feature information is face region image information, and the first region image features are all face region image information in the video information of the region obtained by extracting faces appearing in the first region video information.
Step S700: obtaining a first regional service analysis standard according to the first regional service type and the image extraction feature information;
step S800: inputting the first area image characteristics and the first area service analysis standard into a first training model to obtain a first area analysis result;
further, the first area image feature and the first area service analysis criterion are input into a first training model to obtain a first area analysis result, and step S800 includes:
step S810: inputting the first region image characteristics and the first region service analysis standard into a first training model as input information;
step S820: the first training model is obtained by training a plurality of groups of training data to convergence, wherein each group of data in the plurality of groups of training data comprises the first region image feature, the first region service analysis standard and identification information for identifying a first region analysis result;
step S830: obtaining an output result of the first training model, the output result including the first region analysis result.
Specifically, the first area service analysis standard is the evaluation standard for the service operation analysis corresponding to the area, that is, it specifies which analysis results are to be obtained through video analysis, with the target information to be found serving as the standard. For example, if the service type is face recognition aimed at finding users with identity problems, the blacklisted face image information of such users serves as the first area service analysis standard for that area. The first area image characteristics are compared and analyzed against the first area service analysis standard to obtain the corresponding data analysis result; the first area analysis result is the processing result obtained by performing video analysis on the video information according to the service contents corresponding to the area. If face recognition is used to find blacklisted users, the first area analysis result is whether a blacklisted user has been found. If people counting is required, the number of people is counted from the first area image characteristics according to the counting standard, and the first area analysis result is the number of people counted in that area. In other words, the first area analysis result is the video analysis result obtained for the service contents carried out in the area. In order to improve the accuracy of the video analysis result, in the embodiment of the application a deep neural network model is constructed for the processing, and a mathematical model is used for the computation, which improves both the computation speed and the accuracy of the extraction result. A neural network model is described on the basis of mathematical models of neurons; an artificial neural network is a description of the first-order properties of the human brain system and is, briefly, a mathematical model. Through training on a large amount of training data, the first area image characteristics and the first area service analysis standard are input into the neural network model, and the first area analysis result is output.
More specifically, the training process is a supervised learning process. Each set of supervised data includes the first area image characteristics, the first area service analysis standard, and identification information identifying the first area analysis result. The first area image characteristics and the first area service analysis standard are input into the neural network model, which continuously corrects and adjusts itself according to the identification information; each set of supervised learning ends when the obtained output result is consistent with the identification information, and the next set is then processed. When the output of the neural network model reaches a preset accuracy rate or reaches convergence, the supervised learning process is finished. Supervised learning enables the neural network model to process the input information more accurately, so that a more accurate and suitable first area analysis result is obtained. In this way, the area video is subjected to feature extraction and analysis according to the area service type to obtain the corresponding service analysis result; video analysis deepens the service proactivity of the monitoring system and enables dynamic service supervision, improving airport service efficiency and service level, while the addition of the neural network model improves the efficiency and accuracy of the data processing result and lays a solid foundation for providing more accurate and reliable service items.
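The description does not fix a concrete network architecture, so the following is only a minimal PyTorch sketch of the supervised setup described above: a small fully connected classifier takes a flattened image-feature vector concatenated with an encoded service analysis standard and outputs an analysis result label. The dimensions, layer sizes and random stand-in training data are assumptions.

    # Minimal supervised-training sketch for the "first training model".
    import torch
    import torch.nn as nn

    FEATURE_DIM, STANDARD_DIM, NUM_RESULTS = 128, 16, 3    # assumed sizes

    model = nn.Sequential(
        nn.Linear(FEATURE_DIM + STANDARD_DIM, 64),
        nn.ReLU(),
        nn.Linear(64, NUM_RESULTS),                        # logits over analysis results
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Each supervised group: (image feature, analysis standard, identified result).
    features = torch.randn(256, FEATURE_DIM)               # stand-in training data
    standards = torch.randn(256, STANDARD_DIM)
    labels = torch.randint(0, NUM_RESULTS, (256,))

    for epoch in range(20):                                # iterate toward convergence
        optimizer.zero_grad()
        logits = model(torch.cat([features, standards], dim=1))
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()

    # Inference: the highest-scoring output is taken as the first area analysis result.
    with torch.no_grad():
        result = model(torch.cat([features[:1], standards[:1]], dim=1)).argmax(dim=1)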
Step S900: and obtaining a first area service instruction according to the first area analysis result.
Specifically, a corresponding service operation is carried out according to the video analysis result, namely the first area analysis result, and the first area service instruction is issued according to that result. For example, if whitelist and blacklist databases are established for the face recognition service type of the first area, the databases are updated synchronously according to the detection result, and the first area service instruction is a database update. If the first area service type is to find users with identity problems, the face recognition marks in the video are compared with the blacklist or with reference face information through a face recognition algorithm to find a matching face, so that users with identity problems can be quickly tracked and located and the public security system can be assisted in its investigations; in that case the first area service instruction is to lock the user's position and issue an alarm or early warning. In short, the first area service instruction performs a targeted operation according to the first area analysis result and the first area service type. By making full use of video analysis as a new generation of information technology, behavior analysis and scene understanding of targets in dynamic scenes can be completed, the capability and efficiency of the video monitoring system are improved, high-level descriptions of the monitored scene can be produced and the scene analyzed in real time, the safety management level of the airport is improved, a safe airport and flight environment is created, the service content of the intelligent airport is enriched, and the service range is expanded. This solves the technical problems that the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range.
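A minimal sketch of turning an analysis result into a first area service instruction is given below; the instruction names and dispatch rules are illustrative assumptions that mirror the examples in this paragraph (database update, alarm, early warning).

    # Hypothetical dispatch from (service type, analysis result) to a service instruction.
    def build_service_instruction(service_type: str, analysis_result: dict) -> dict:
        if service_type == "face_recognition":
            if analysis_result.get("blacklist_match"):
                # Lock the position and raise an alarm / early warning for the user.
                return {"action": "alarm", "detail": analysis_result.get("identity")}
            return {"action": "update_database", "detail": analysis_result.get("identity")}
        if service_type == "people_counting":
            if analysis_result.get("count", 0) > analysis_result.get("limit", float("inf")):
                return {"action": "early_warning", "detail": "crowd over limit"}
            return {"action": "none"}
        return {"action": "none"}

    print(build_service_instruction("face_recognition",
                                    {"blacklist_match": True, "identity": "unknown_user_17"}))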
Further, the method comprises:
step S1010: obtaining a face detection algorithm;
step S1020: carrying out face detection on the first area video information according to the face detection algorithm to obtain first face area information;
step S1030: inputting the first face region information into a face recognition neural network model to obtain face recognition identity information;
step S1040: obtaining a face database, wherein the face database comprises a first database and a second database;
step S1050: obtaining an identity authentication result according to the face identification identity information and the face database;
step S1060: and when the identity authentication result is a first type result, the first type result is that the identity authentication result and the first database are successfully authenticated, and first alarm information is obtained.
Specifically, when the first area service type is face analysis, face regions are obtained from the first area video pictures through a face detection algorithm and marked, so that the face information in the first area video information is obtained; the first face region information is the set of face image information obtained by performing face detection on the first area video information with the face detection algorithm. A network model is then constructed with a deep neural network: the face recognition neural network model is a deep neural network model that performs face recognition on the first face region information, and the identity information determined for the face region images is the face recognition identity information. Finally, the face recognition identity information is compared with the face database to obtain a comparison result. The face database includes a first database, namely a blacklist database, and a second database, namely a whitelist database. Users whose recognition result falls in the whitelist are stored in the second database, and no further operation is performed for users already present; users without a recognition result are added to the database, so that the database is updated. If face recognition produces a result matching the blacklist, this is a first-class result, i.e. a match with the blacklist database, and an alarm is raised at that moment; such alarms can be used to assist judicial departments such as the police and the courts, and to search for missing persons. In summary, the face identity recognition function acquires video through front-end monitoring camera equipment, obtains face regions from the video pictures through a face detection algorithm, and confirms face identities through a face recognition algorithm based on a deep neural network. It can be applied to 1:1 comparison of a person against an identity document and to 1:N image-to-image face search, and control rules based on blacklists and whitelists of the face library can be set up to realize real-time alarms and information prompts. By making full use of video analysis as a new generation of information technology, behavior analysis and scene understanding of targets in dynamic scenes can be completed, the capability and efficiency of the video monitoring system are improved, high-level descriptions of the monitored scene can be produced and the scene analyzed in real time, the safety management level of the airport is improved, a safe airport and flight environment is created, the service content of the intelligent airport is enriched, and the service range is expanded. This solves the technical problems that the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range.
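A minimal sketch of this face branch follows: face regions are detected with an off-the-shelf OpenCV Haar cascade, embedded by a stand-in for the face recognition neural network, and compared against blacklist embeddings by cosine similarity. The embed_face() stub and the 0.6 threshold are assumptions, not the patent's algorithm.

    # Face detection + blacklist comparison sketch. The Haar cascade file ships with
    # opencv-python; embed_face() stands in for the face recognition neural network.
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def embed_face(face_img: np.ndarray) -> np.ndarray:
        """Placeholder embedding; a real system would use the recognition network."""
        vec = cv2.resize(face_img, (32, 32)).astype(np.float32).ravel()
        return vec / (np.linalg.norm(vec) + 1e-8)

    def check_frame_against_blacklist(frame, blacklist_embeddings, threshold=0.6):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        alarms = []
        for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            emb = embed_face(frame[y:y + h, x:x + w])
            for name, ref in blacklist_embeddings.items():
                if float(np.dot(emb, ref)) > threshold:     # cosine similarity of unit vectors
                    alarms.append({"identity": name, "box": (x, y, w, h)})
        return alarms                                       # non-empty -> first alarm information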
Further, the obtaining an identity authentication result according to the face identification identity information and the face database further includes:
step S1110: acquiring a preset authentication requirement;
step S1120: obtaining a matching database according to the preset authentication requirement and the face database;
step S1130: obtaining comparative face feature information according to the matching database;
step S1140: obtaining a first comparison result according to the face identification identity information and the comparison face characteristic information;
step S1150: and obtaining the identity authentication result according to the first comparison result.
Specifically, when performing face analysis and recognition, the requirements of face recognition can be further refined. The preset authentication requirement is the requirement and target of face recognition that is set in advance. If the requirement is to search for missing persons, the face database is the face database of the target search population; if the preset authentication requirement is to assist the police in searching for fugitives, the face database is the portion of the blacklist containing police-tracked persons. The matching database is the database matched according to the preset authentication requirement; alternatively, the face image information to be searched for can be determined directly from the face database. The comparison face feature information is the facial features of the persons to be searched for; comparing it with the face recognition identity information yields the first comparison result, which is either a successful match or a failed match. The corresponding identity authentication result is obtained according to the comparison result; when the match is successful, corresponding processing is performed according to the preset authentication requirement, and reminder information or alarm information is sent.
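A small sketch of selecting the matching database from a preset authentication requirement and producing the first comparison result; the requirement keys, database keys and threshold are hypothetical.

    # Hypothetical matching-database selection and best-match comparison.
    import numpy as np

    def select_matching_database(requirement: str, face_database: dict) -> dict:
        # face_database is assumed to hold e.g. "blacklist", "whitelist", "missing_persons".
        mapping = {"track_fugitive": "blacklist", "find_missing_person": "missing_persons"}
        return face_database.get(mapping.get(requirement, "blacklist"), {})

    def authenticate(identity_embedding: np.ndarray, matching_db: dict, threshold: float = 0.6):
        best_name, best_score = None, -1.0
        for name, ref in matching_db.items():
            score = float(np.dot(identity_embedding, ref))
            if score > best_score:
                best_name, best_score = name, score
        matched = best_score > threshold                    # first comparison result
        return {"matched": matched,
                "identity": best_name if matched else None,
                "score": best_score}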
Further, the method comprises:
step 1210: acquiring a preset limb change rule early warning database according to the first area information;
step S1220: obtaining the characteristic information of the limb change rule according to the preset limb change rule early warning database;
step S1230: acquiring video identification algorithm information according to the limb change rule characteristic information;
step S1240: obtaining a limb change detection result according to the first area video information and the video identification algorithm information;
step S1250: and when the limb change detection result exists, first early warning information is obtained.
Specifically, the embodiment of the application has a posture and limb action recognition function, and the recognition requirements are set according to the characteristics and functions of the area indicated by the first area information. For example, for an airport restricted area an alarm is sent whenever people appear; if the first area is a waiting hall, the postures and limb actions to be detected can be set to falling down, fighting, and the like. The preset limb change rule early warning database is the set of limb and posture detection rules defined according to the characteristics of each area, and one area may correspond to several postures. The feature extraction and detection contents of the video analysis differ according to the limb change rules, so the corresponding video recognition algorithm is determined by the corresponding limb change rule. That video recognition algorithm analyzes the first area video information to obtain the corresponding limb change detection result, and when the corresponding limb change is found, early warning information is sent to remind the management department to pay attention. This service type can be used for behavior recognition in restricted zones, posture recognition in characteristic zones, and the like. Behavior recognition (restricted zone, tripwire, loitering, abandoned objects and the like) monitors in real time the behavior of persons entering a restricted zone or crossing a tripwire and issues an early warning or alarm; persons entering the restricted zone are tracked, their number and entry times are counted, and an early warning or alarm is issued; loitering detection is performed on persons entering a specific monitored area, with the loitering time and behavior configurable; abandoned objects in a specific monitored area are detected, and an early warning or alarm is issued after an object has been left for a period of time. Posture and limb action recognition: special postures of individual persons in a specific area, such as making a phone call, falling to the ground, fighting, running, or squatting, are identified and an alarm or early warning is issued; gathering, fighting and other behaviors of groups of persons in a specific area also trigger an early warning or alarm.
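A minimal sketch of the limb-change branch is given below: the posture rules preset for an area are looked up and any detected action appearing in them triggers a warning. The pose detector itself is left as a stub, and the area names and rule sets are illustrative.

    # Hypothetical per-area limb/posture rule check; detect_actions() is a stub for the
    # video recognition algorithm selected from the limb change rules.
    LIMB_CHANGE_RULES = {
        "restricted_area": {"person_present"},
        "waiting_hall":    {"fall_down", "fight", "run", "gather"},
    }

    def detect_actions(frames):
        """Placeholder posture/behaviour detector returning a set of action labels."""
        return set()                       # e.g. {"fall_down"}

    def limb_change_warning(area_id: str, frames):
        watched = LIMB_CHANGE_RULES.get(area_id, set())
        detected = detect_actions(frames) & watched
        if detected:                       # limb change detection result exists
            return {"warning": True, "area": area_id, "actions": sorted(detected)}
        return {"warning": False}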
Further, the method comprises:
step 1310: obtaining the regional crowd density requirement according to the first regional information;
step S1320: when the area crowd density requires more than two, obtaining partition information;
step S1330: partitioning the first region video information according to the partitioning information to obtain a partitioning video set, wherein the partitioning video corresponds to the region crowd density requirement;
step S1340: obtaining a first partitioned video according to the partitioned video set, wherein the first partitioned video corresponds to a first partitioned crowd density requirement;
step S1350: obtaining a portrait recognition algorithm;
step S1360: obtaining a first partition portrait recognition result according to the first partition video and the portrait recognition algorithm;
step S1370: obtaining the crowd density of the first subarea according to the recognition result of the portrait of the first subarea;
step S1380: and when the crowd density of the first subarea exceeds the crowd density requirement of the first subarea, second early warning information is obtained.
Specifically, some areas of the airport require statistics of crowd density and numbers of people. Automatic identification and counting can be carried out through video analysis, without the manual operation that is inefficient and wastes manpower. Video face analysis is performed using the first area video information and the crowd density requirement set for the area; if the density and number of people counted from the faces appearing in the video exceed the crowd density requirement of the area, a corresponding early warning or alarm is issued. When the first area covers a wide range and different density requirements apply within it, partitions can be configured so that the density and number of people of several partitions are analyzed, and a video picture can likewise be divided into several partitions. When the density or number of people in a partitioned picture exceeds the standard, a corresponding early warning is issued to prompt managers to pay attention.
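A minimal sketch of the partitioned density check follows: detected persons are assigned to preset partitions, the density of each partition is computed, and a warning is produced when the requirement for that partition is exceeded. The person detector is a stub and the partition geometry and limits are assumptions.

    # Hypothetical per-partition crowd density check; detect_people() stands in for the
    # portrait recognition algorithm.
    def detect_people(frame):
        """Return a list of (x, y) centre points of detected persons (stub)."""
        return []

    def partition_density_warnings(frame, partitions):
        # partitions: [{"name", "x0", "y0", "x1", "y1", "max_density"}], density per pixel area.
        people = detect_people(frame)
        warnings = []
        for p in partitions:
            inside = [pt for pt in people
                      if p["x0"] <= pt[0] < p["x1"] and p["y0"] <= pt[1] < p["y1"]]
            area = (p["x1"] - p["x0"]) * (p["y1"] - p["y0"])
            density = len(inside) / max(area, 1)
            if density > p["max_density"]:
                warnings.append({"partition": p["name"], "count": len(inside), "density": density})
        return warnings                    # non-empty -> second early warning information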
Further, the method comprises:
step S1410: obtaining first marking information according to the first partition portrait recognition result, wherein the first marking information is used for marking the portrait of the first partition;
step S1420: obtaining a marking statistical result according to the first marking information;
step S1430: and obtaining the information of the number of people in the first subarea according to the mark statistical result.
Specifically, deep learning technology is used to identify the persons in a partition; each identified person is marked as a passenger, and the marks are then counted to obtain the number of people in the partition, so that regional crowd density and headcount statistics are produced. With deep learning, the number of people passing through a check-in counter or a channel can also be counted in real time, including two-way channels, one-way channels and the like. This enriches the video analysis functionality, improves the analysis capability and efficiency of the monitoring videos, expands the scope of airport services and broadens the service contents.
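For the channel counting case, a minimal sketch of tallying marked persons as their tracks cross a counting line is given below; the availability of per-frame track positions is an assumption.

    # Hypothetical bidirectional channel counter: positions map track_id -> y coordinate,
    # and a crossing of the counting line in either direction increments the tally.
    def update_channel_counts(prev_positions, curr_positions, line_y, counts):
        for track_id, y in curr_positions.items():
            prev_y = prev_positions.get(track_id)
            if prev_y is None:
                continue
            if prev_y < line_y <= y:
                counts["in"] += 1          # crossed the line downwards
            elif prev_y >= line_y > y:
                counts["out"] += 1         # crossed the line upwards
        return counts

    counts = update_channel_counts({1: 90, 2: 210}, {1: 110, 2: 190},
                                   line_y=100, counts={"in": 0, "out": 0})
    print(counts)                          # {'in': 1, 'out': 0}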
Further, the method comprises:
step S1510: obtaining a preset parking space area;
step S1520: acquiring video information of a second area according to the preset halt as an area;
step S1530: acquiring airplane position information according to the second area video information;
step S1540: obtaining parking space information according to the preset parking space area and the second area video information;
step S1550: obtaining second marking information according to the parking space information and the second area video information, wherein the second marking information is used for marking the parking space;
step S1560: and obtaining the result of the entrance and the departure of the airplane according to the airplane position information and the second mark information.
Specifically, the monitoring camera equipment covering the apron can also be used to collect video of the aircraft on the apron. Through video analysis, aircraft entering and leaving the stand, the placing and removing of wheel chocks, and the opening and closing of cabin doors are detected; the entering and leaving of aircraft are automatically detected in real time, the exact times are recorded, and the times at which wheel chocks are placed and removed and the cabin doors are opened and closed are monitored. The position of the aircraft is detected, identified and marked in the video information by means of a deep learning algorithm, the preset parking space (stand) position is correspondingly marked, and the intersection between the aircraft position obtained from the deep learning analysis and the detection area is computed to judge whether the aircraft is entering or leaving the apron area. When the aircraft position coincides and overlaps with the preset stand position, the aircraft has entered the stand; if the aircraft position moves away from the preset stand position, the aircraft is leaving the stand. The in-position and out-of-position states of the aircraft can thus be detected according to the video analysis result, realizing multi-faceted detection and management and helping managers make overall arrangements. Similarly, the opening and closing of the cabin door are detected by analyzing the video information: the analysis service starts after the aircraft is detected to be in position, and if the cabin door is detected to be open, it is reported as open; when no open state is detected, the cabin door is reported as not opened.
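A minimal sketch of the in/out-of-stand decision follows: the overlap between the detected aircraft box and the preset stand box is computed and compared with thresholds. The box format and the 0.5/0.1 thresholds are assumptions, not values taken from the description.

    # Hypothetical in/out-of-stand decision from bounding-box overlap; boxes are (x0, y0, x1, y1).
    def overlap_ratio(aircraft_box, stand_box):
        ax0, ay0, ax1, ay1 = aircraft_box
        sx0, sy0, sx1, sy1 = stand_box
        ix = max(0, min(ax1, sx1) - max(ax0, sx0))
        iy = max(0, min(ay1, sy1) - max(ay0, sy0))
        aircraft_area = max((ax1 - ax0) * (ay1 - ay0), 1)
        return (ix * iy) / aircraft_area

    def stand_status(aircraft_box, stand_box, in_thr=0.5, out_thr=0.1):
        r = overlap_ratio(aircraft_box, stand_box)
        if r >= in_thr:
            return "in_position"           # aircraft mark coincides with the preset stand mark
        if r <= out_thr:
            return "out_of_position"       # aircraft has left (or not yet entered) the stand
        return "transitioning"

    print(stand_status((100, 100, 300, 260), (90, 90, 320, 280)))   # in_position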
Further, after obtaining the result of the airplane entering and leaving the position, the method includes:
step S1610: when the airplane entering and leaving result is an airplane entering and locating result, the airplane entering and locating result is that the airplane position is combined and connected with a second mark, and a corridor bridge detection instruction is obtained;
step S1620: when the corridor bridge detection instruction is received, acquiring corridor bridge position information according to the preset parking space area;
step S1630: acquiring an airplane detection mark position according to the airplane position information;
step S1640: acquiring a corridor bridge detection mark position according to the corridor bridge position information;
step S1650: judging whether the aircraft detection mark position is connected with the corridor bridge detection mark position or not;
step S1660: when the connection is not established, first reminding information is obtained.
Specifically, when the aircraft entering and leaving result indicates that the aircraft has entered the stand, the corridor bridge detection function is started automatically, and the monitoring camera equipment is called to analyze and mark the corridor bridge position and the aircraft docking position. When the pre-calibrated area, namely the corridor bridge detection mark position, falls within the same detection area as the aircraft, whether the corridor bridge is docked is judged by identifying the corridor bridge position. Video image analysis during video monitoring is thus used to detect the aircraft-corridor bridge docking process: when the detection result is successful, the completed connection is reported; if the detection is unsuccessful, reminder information is sent to alert operators and managers so that they can intervene, ensuring a complete and accurate workflow. Video analysis is fully utilized to complete behavior analysis and scene understanding of targets in the dynamic scene, improving the capability and efficiency of the video monitoring system, enabling high-level descriptions and real-time analysis of the monitored scene, improving the safety management level of the airport, creating a safe airport and flight environment, enriching the service content of the intelligent airport, and expanding the service range. This further solves the technical problems that the airport video monitoring system in the prior art is mainly used for security monitoring, lacks proactive capability, cannot perform targeted service monitoring and analysis of dynamic scenes, and has a limited service range.
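A minimal sketch of the corridor bridge docking check is shown below: once the aircraft is in position, the marked bridge-head region is tested for contact with the marked aircraft door region. The touch-with-tolerance rule is an assumption standing in for the pre-calibrated area comparison described above.

    # Hypothetical corridor bridge docking check; marks are boxes (x0, y0, x1, y1).
    def boxes_touch(a, b, tolerance=5):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return not (ax1 + tolerance < bx0 or bx1 + tolerance < ax0 or
                    ay1 + tolerance < by0 or by1 + tolerance < ay0)

    def bridge_docking_result(aircraft_in_position: bool, bridge_mark, door_mark):
        if not aircraft_in_position:
            return {"checked": False}
        if boxes_touch(bridge_mark, door_mark):
            return {"checked": True, "docked": True}               # report: connection completed
        return {"checked": True, "docked": False, "remind": True}  # send first reminding information

    print(bridge_docking_result(True, (0, 0, 50, 50), (52, 10, 120, 60)))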
Example two
Based on the same inventive concept as the video intelligent analysis method of the intelligent airport service in the foregoing embodiment, the present invention further provides a video intelligent analysis system of the intelligent airport service, as shown in fig. 2, the system includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain service area division information;
a second obtaining unit 12, where the second obtaining unit 12 is configured to obtain first area information according to the service area division information;
a third obtaining unit 13, where the third obtaining unit 13 is configured to obtain a first regional service type according to the first regional information;
a fourth obtaining unit 14, wherein the fourth obtaining unit 14 is configured to obtain the first area video information;
a fifth obtaining unit 15, where the fifth obtaining unit 15 is configured to obtain image extraction feature information according to the first regional service type;
a first extraction unit 16, where the first extraction unit 16 is configured to perform feature extraction on the first region video information according to the image extraction feature information to obtain a first region image feature;
a sixth obtaining unit 17, where the sixth obtaining unit 17 is configured to obtain a first regional service analysis standard according to the first regional service type and the image extraction feature information;
a first executing unit 18, where the first executing unit 18 is configured to input the first regional image feature and the first regional service analysis criterion into a first training model, and obtain a first regional analysis result;
a seventh obtaining unit 19, where the seventh obtaining unit 19 is configured to obtain a first area service instruction according to the first area analysis result.
Further, the system further comprises:
an eighth obtaining unit, configured to obtain a face detection algorithm;
a ninth obtaining unit, configured to perform face detection on the first area video information according to the face detection algorithm to obtain first face area information;
the second execution unit is used for inputting the first face region information into a face recognition neural network model to obtain face recognition identity information;
a tenth obtaining unit, configured to obtain a face database, where the face database includes a first database and a second database;
an eleventh obtaining unit, configured to obtain an identity authentication result according to the face identification identity information and the face database;
and the twelfth obtaining unit is used for obtaining first alarm information when the identity authentication result is a first-class result, and the first-class result is the identity authentication result and the first database authentication is successful.
Further, the system further comprises:
a thirteenth obtaining unit configured to obtain a preset authentication requirement;
a fourteenth obtaining unit, configured to obtain a matching database according to the preset authentication requirement and the face database;
a fifteenth obtaining unit, configured to obtain comparative face feature information according to the matching database;
a sixteenth obtaining unit, configured to obtain a first comparison result according to the face identification identity information and the comparison face feature information;
a seventeenth obtaining unit, configured to obtain the identity authentication result according to the first comparison result.
Further, the system further comprises:
an eighteenth obtaining unit, configured to obtain a preset limb change rule early warning database according to the first area information;
a nineteenth obtaining unit, configured to obtain, according to the preset limb change rule early warning database, feature information of a limb change rule;
a twentieth obtaining unit, configured to obtain video identification algorithm information according to the limb change rule feature information;
a twenty-first obtaining unit, configured to obtain a limb change detection result according to the first area video information and the video identification algorithm information;
a twenty-second obtaining unit, configured to obtain first warning information when the limb change detection result exists.
Further, the system further comprises:
a twenty-third obtaining unit, configured to obtain a regional crowd density requirement according to the first regional information;
a twenty-fourth obtaining unit configured to obtain partition information when there are more than two of the regional population density requirements;
a twenty-fifth obtaining unit, configured to partition the first area video information according to the partition information to obtain a partition video set, where the partition video corresponds to the requirement on the area crowd density;
a twenty-sixth obtaining unit, configured to obtain a first partition video according to the partition video set, where the first partition video corresponds to a first partition crowd density requirement;
a twenty-seventh obtaining unit, configured to obtain a portrait recognition algorithm;
a twenty-eighth obtaining unit, configured to obtain a first partition portrait recognition result according to the first partition video and the portrait recognition algorithm;
a twenty-ninth obtaining unit, configured to obtain a crowd density of a first partition according to the portrait recognition result of the first partition;
a thirtieth obtaining unit, configured to obtain second warning information when the first segment crowd density exceeds the first segment crowd density requirement.
Further, the system further comprises:
the first marking unit is used for obtaining first marking information according to the first partition portrait recognition result, and the first marking information is used for marking the portrait of the first partition;
a thirty-first obtaining unit, configured to obtain a tag statistical result according to the first tag information;
a thirty-second obtaining unit, configured to obtain the information on the number of people in the first partition according to the mark statistical result.
Further, the system further comprises:
a thirty-third obtaining unit, configured to obtain a preset parking space area;
a thirty-fourth obtaining unit, configured to obtain second area video information according to the preset parking space area;
a thirty-fifth obtaining unit, configured to obtain aircraft position information according to the second area video information;
a thirty-sixth obtaining unit, configured to obtain the parking place information according to the preset parking place area and the second area video information;
the second marking unit is used for obtaining second marking information according to the stand information and the second area video information, and the second marking information is used for marking the stand;
a thirty-seventh obtaining unit, configured to obtain an aircraft in-and-out-of-position result according to the aircraft position information and the second mark information.
Further, the system further comprises:
a thirty-eighth obtaining unit, configured to, when the aircraft entering and leaving result is an aircraft entering result, the aircraft entering result being that the aircraft position coincides and overlaps with the second mark, obtain a corridor bridge detection instruction;
a thirty-ninth obtaining unit, configured to obtain, when the corridor bridge detection instruction is received, corridor bridge position information according to the preset parking space area;
a fortieth obtaining unit, configured to obtain an aircraft detection mark position according to the aircraft position information;
a forty-first obtaining unit, configured to obtain a corridor bridge detection mark position according to the corridor bridge position information;
the first judging unit is used for judging whether the aircraft detection mark position is connected with the gallery bridge detection mark position;
a forty-second obtaining unit, configured to obtain the first reminder information when the connection is not established.
Various modifications and specific examples of the intelligent video analysis method for intelligent airport service in the first embodiment of Fig. 1 also apply to the intelligent video analysis system for intelligent airport service of the present embodiment. From the foregoing detailed description of the method, those skilled in the art can clearly understand how the system of the present embodiment is implemented, so a detailed description is omitted here for the sake of brevity.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to fig. 3.
Fig. 3 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the inventive concept of the intelligent video analysis method for intelligent airport service in the foregoing embodiments, the present invention further provides an intelligent video analysis system for intelligent airport service on which a computer program is stored; when the program is executed by a processor, the steps of any one of the foregoing intelligent video analysis methods for intelligent airport service are implemented.
In Fig. 3, a bus architecture is represented by bus 300. Bus 300 may include any number of interconnected buses and bridges and links together various circuits, including one or more processors, represented by processor 302, and memory, represented by memory 304. Bus 300 may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and are therefore not described further herein. Bus interface 305 provides an interface between bus 300 and receiver 301 and transmitter 303. Receiver 301 and transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other systems over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
The embodiments of the present application provide an intelligent video analysis method and system for intelligent airport service. The method comprises: obtaining service area division information; obtaining first area information according to the service area division information; obtaining a first area service type according to the first area information; obtaining first area video information; obtaining image extraction feature information according to the first area service type; extracting features of the first area video information according to the image extraction feature information to obtain first area image features; obtaining a first area service analysis standard according to the first area service type and the image extraction feature information; inputting the first area image features and the first area service analysis standard into a first training model to obtain a first area analysis result; and obtaining a first area service instruction according to the first area analysis result, the first area service instruction being a targeted operation determined by the first area analysis result and the first area service type. By making full use of video analysis as a new-generation information technology, the method can complete behavior analysis and scene understanding of targets in dynamic scenes, improve the capability and efficiency of the video surveillance system, provide a high-level description of the monitored scene, and analyze it in real time, thereby raising the safety management level of the airport, creating a safe airport and flight environment, enriching the service content of the intelligent airport, and enlarging its service range. This addresses the technical problems that airport video surveillance systems of the prior art are mainly used for security monitoring, are largely passive, cannot perform service-oriented monitoring and analysis of targets in dynamic scenes, and cover only a limited service range.
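As a minimal, non-normative illustration of how these steps chain together, the following Python sketch strings the claimed flow into one function; every callable and key used here (extract, model, feature_cues, standards, service_type) is a stand-in assumed for the sketch rather than anything defined by the application:

```python
from typing import Any, Callable, Dict

def analyse_first_area(
    area_info: Dict[str, Any],        # first area information from the service area division
    area_video: Any,                  # first area video information (e.g. a list of frames)
    feature_cues: Dict[str, Any],     # image extraction feature information per service type
    standards: Dict[str, Any],        # first area service analysis standard per service type
    extract: Callable[[Any, Any], Any],   # feature extractor (stand-in)
    model: Callable[[Any, Any], Any],     # first training model (stand-in)
) -> str:
    """Chain the claimed steps: service type -> feature cue -> features -> standard -> result -> instruction."""
    service_type = area_info["service_type"]       # first area service type
    cue = feature_cues[service_type]                # image extraction feature information
    features = extract(area_video, cue)             # first area image features
    standard = standards[service_type]              # first area service analysis standard
    result = model(features, standard)              # first area analysis result
    # first area service instruction: a targeted operation chosen from the result and service type
    return f"instruction:{service_type}:{result}"
```

In a deployment, extract and model would be whatever feature extractor and first training model are actually used; the sketch only fixes the data flow between the claimed steps.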
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system which implements the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for intelligent analysis of video for intelligent airport services, wherein the method comprises:
obtaining service area division information;
obtaining first area information according to the service area division information;
obtaining a first area service type according to the first area information;
obtaining first area video information;
acquiring image extraction feature information according to the first area service type;
extracting features of the first area video information according to the image extraction feature information to obtain first area image features;
obtaining a first area service analysis standard according to the first area service type and the image extraction feature information;
inputting the first area image features and the first area service analysis standard into a first training model to obtain a first area analysis result;
and obtaining a first area service instruction according to the first area analysis result.
2. The method of claim 1, wherein the method comprises:
obtaining a face detection algorithm;
carrying out face detection on the first area video information according to the face detection algorithm to obtain first face region information;
inputting the first face region information into a face recognition neural network model to obtain face recognition identity information;
obtaining a face database, wherein the face database comprises a first database and a second database;
obtaining an identity authentication result according to the face identification identity information and the face database;
and obtaining first alarm information when the identity authentication result is a first type result, the first type result indicating that the face recognition identity information is successfully matched against the first database.
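A minimal sketch of the flow recited in claim 2, assuming hypothetical detector and recogniser callables and treating the first and second databases as simple identity sets; none of these names come from the disclosure:

```python
from typing import Callable, List, Set

def authenticate_faces(
    frame,                                            # one frame of the first area video information
    detect_faces: Callable[[object], List[object]],   # face detection algorithm (stand-in)
    recognise: Callable[[object], str],               # face recognition neural network model (stand-in)
    first_database: Set[str],                         # e.g. identities that must trigger an alarm
    second_database: Set[str],                        # e.g. identities authenticated without alarm
) -> List[str]:
    """Return first alarm information for identities matched against the first database."""
    alarms = []
    for face_region in detect_faces(frame):           # first face region information
        identity = recognise(face_region)             # face recognition identity information
        if identity in first_database:                # first type result
            alarms.append(f"first alarm: identity {identity} matched the first database")
        # identities found only in the second database authenticate normally and raise no alarm
    return alarms
```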
3. The method of claim 2, wherein obtaining the identity authentication result according to the face recognition identity information and the face database comprises:
acquiring a preset authentication requirement;
obtaining a matching database according to the preset authentication requirement and the face database;
obtaining comparison face feature information according to the matching database;
obtaining a first comparison result according to the face recognition identity information and the comparison face feature information;
and obtaining the identity authentication result according to the first comparison result.
4. The method of claim 1, wherein the method comprises:
acquiring a preset limb change rule early warning database according to the first area information;
obtaining the characteristic information of the limb change rule according to the preset limb change rule early warning database;
acquiring video identification algorithm information according to the limb change rule characteristic information;
obtaining a limb change detection result according to the first area video information and the video identification algorithm information;
and obtaining first early warning information when the limb change detection result indicates that a limb change exists.
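One possible stand-in for the video identification algorithm of claim 4 is plain frame differencing; the thresholds below are assumptions, and a real implementation would select the algorithm from the limb change rule feature information as claimed:

```python
from typing import Sequence
import numpy as np

def limb_change_detected(prev_frame: np.ndarray, curr_frame: np.ndarray,
                         pixel_delta: int = 30, motion_threshold: float = 0.05) -> bool:
    """Flag a limb change when the fraction of strongly changed pixels between two
    consecutive greyscale frames exceeds a threshold (both thresholds are assumed values)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = np.count_nonzero(diff > pixel_delta) / diff.size
    return changed_fraction > motion_threshold

def first_early_warning(frames: Sequence[np.ndarray]) -> bool:
    """Scan consecutive frames of the first area video information and return True
    (first early warning information) if any limb change is detected."""
    return any(limb_change_detected(a, b) for a, b in zip(frames, frames[1:]))
```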
5. The method of claim 1, wherein the method comprises:
obtaining the regional crowd density requirement according to the first regional information;
when there are more than two area crowd density requirements, obtaining partition information;
partitioning the first area video information according to the partition information to obtain a partition video set, wherein each partition video corresponds to one area crowd density requirement;
obtaining a first partitioned video according to the partitioned video set, wherein the first partitioned video corresponds to a first partitioned crowd density requirement;
obtaining a portrait recognition algorithm;
obtaining a first partition portrait recognition result according to the first partition video and the portrait recognition algorithm;
obtaining a first partition crowd density according to the first partition portrait recognition result;
and obtaining second early warning information when the first partition crowd density exceeds the first partition crowd density requirement.
6. The method of claim 5, wherein the method comprises:
obtaining first marking information according to the first partition portrait recognition result, wherein the first marking information is used for marking the portrait of the first partition;
obtaining a marking statistical result according to the first marking information;
and obtaining information on the number of people in the first partition according to the mark statistical result.
7. The method of claim 1, wherein the method comprises:
obtaining a preset aircraft stand area;
obtaining second area video information according to the preset aircraft stand area;
obtaining aircraft position information according to the second area video information;
obtaining aircraft stand information according to the preset aircraft stand area and the second area video information;
obtaining second marking information according to the aircraft stand information and the second area video information, wherein the second marking information is used for marking the aircraft stand;
and obtaining an aircraft in-and-out-of-position result according to the aircraft position information and the second marking information.
8. The method of claim 7, wherein after the aircraft in-and-out-of-position result is obtained, the method further comprises:
when the aircraft in-and-out-of-position result is an aircraft in-position result, the aircraft in-position result indicating that the detected aircraft position coincides with the second marking information, obtaining a corridor bridge detection instruction;
when the corridor bridge detection instruction is received, obtaining corridor bridge position information according to the preset aircraft stand area;
obtaining an aircraft detection mark position according to the aircraft position information;
obtaining a corridor bridge detection mark position according to the corridor bridge position information;
judging whether the aircraft detection mark position is connected with the corridor bridge detection mark position;
and obtaining first reminder information when the two positions are not connected.
9. An intelligent video analysis system for intelligent airport service, which is applied to the method of any one of claims 1-8, wherein the system comprises:
a first obtaining unit configured to obtain service area division information;
a second obtaining unit, configured to obtain first area information according to the service area division information;
a third obtaining unit, configured to obtain a first area service type according to the first area information;
a fourth obtaining unit, configured to obtain first area video information;
a fifth obtaining unit, configured to obtain image extraction feature information according to the first area service type;
the first extraction unit is used for extracting features of the first area video information according to the image extraction feature information to obtain first area image features;
a sixth obtaining unit, configured to obtain a first area service analysis standard according to the first area service type and the image extraction feature information;
the first execution unit is used for inputting the first area image features and the first area service analysis standard into a first training model to obtain a first area analysis result;
a seventh obtaining unit, configured to obtain a first area service instruction according to the first area analysis result.
10. An intelligent video analysis system for intelligent airport service, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the program.
CN202110446349.8A 2021-04-25 2021-04-25 Intelligent video analysis method and system for intelligent airport service Pending CN113177459A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110446349.8A CN113177459A (en) 2021-04-25 2021-04-25 Intelligent video analysis method and system for intelligent airport service

Publications (1)

Publication Number Publication Date
CN113177459A true CN113177459A (en) 2021-07-27

Family

ID=76925078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110446349.8A Pending CN113177459A (en) 2021-04-25 2021-04-25 Intelligent video analysis method and system for intelligent airport service

Country Status (1)

Country Link
CN (1) CN113177459A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020109625A1 (en) * 2001-02-09 2002-08-15 Philippe Gouvary Automatic method of tracking and organizing vehicle movement on the ground and of identifying foreign bodies on runways in an airport zone
CN105718857A (en) * 2016-01-13 2016-06-29 兴唐通信科技有限公司 Human body abnormal behavior detection method and system
WO2018133666A1 (en) * 2017-01-17 2018-07-26 腾讯科技(深圳)有限公司 Method and apparatus for tracking video target
WO2019232831A1 (en) * 2018-06-06 2019-12-12 平安科技(深圳)有限公司 Method and device for recognizing foreign object debris at airport, computer apparatus, and storage medium
WO2020168960A1 (en) * 2019-02-19 2020-08-27 杭州海康威视数字技术股份有限公司 Video analysis method and apparatus
CN110580446A (en) * 2019-07-16 2019-12-17 上海交通大学 Behavior semantic subdivision understanding method, system, computer device and medium
CN110543867A (en) * 2019-09-09 2019-12-06 北京航空航天大学 crowd density estimation system and method under condition of multiple cameras
CN112329592A (en) * 2020-10-30 2021-02-05 北京百度网讯科技有限公司 Airport collaborative decision-making method, device, equipment and storage medium
CN112530205A (en) * 2020-11-23 2021-03-19 北京正安维视科技股份有限公司 Airport parking apron airplane state detection method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762163A (en) * 2021-09-09 2021-12-07 杭州澳亚生物技术股份有限公司 GMP workshop intelligent monitoring management method and system
CN114596526A (en) * 2022-03-02 2022-06-07 捻果科技(深圳)有限公司 Airport safety linkage early warning method and system based on video analysis technology
CN114596526B (en) * 2022-03-02 2024-09-06 捻果科技(深圳)有限公司 Airport security linkage early warning method and system based on video analysis technology
CN114998813A (en) * 2022-08-04 2022-09-02 江苏三棱智慧物联发展股份有限公司 Video monitoring service method and platform for cloud service
CN117274918A (en) * 2023-11-23 2023-12-22 深圳市易图资讯股份有限公司 Big data metering regional analysis system and method
CN117274918B (en) * 2023-11-23 2024-03-22 深圳市易图资讯股份有限公司 Big data metering regional analysis system and method

Similar Documents

Publication Publication Date Title
CN113177459A (en) Intelligent video analysis method and system for intelligent airport service
CN110334563B (en) Community security management method and system based on big data
Fusier et al. Video understanding for complex activity recognition
Wiliem et al. A suspicious behaviour detection using a context space model for smart surveillance systems
CN107808502B (en) A kind of image detection alarm method and device
CN105184258A (en) Target tracking method and system and staff behavior analyzing method and system
CN109165620A (en) A kind of detection method of electric vehicle, system and terminal device
CN109299642A (en) Logic based on Identification of Images is deployed to ensure effective monitoring and control of illegal activities early warning system and method
CN111353338B (en) Energy efficiency improvement method based on business hall video monitoring
CN110717357B (en) Early warning method and device, electronic equipment and storage medium
CN111488803A (en) Airport target behavior understanding system integrating target detection and target tracking
CN102902960A (en) Leave-behind object detection method based on Gaussian modelling and target contour
CN114612860A (en) Computer vision-based passenger flow identification and prediction method in rail transit station
CN112232316B (en) Crowd gathering detection method and device, electronic equipment and storage medium
US12112273B2 (en) Measuring risk within a media scene
CN114612813A (en) Identity recognition method, model training method, device, equipment and storage medium
CN113191273A (en) Oil field well site video target detection and identification method and system based on neural network
CN117746338B (en) Property park safety management method and system based on artificial intelligence
CN112633262B (en) Channel monitoring method and device, electronic equipment and medium
CN109712289A (en) A kind of garden security information processing method and processing device based on intelligent management platform
US10769449B2 (en) Dynamic method and system for monitoring an environment
CN117354468A (en) Intelligent state sensing system and method based on big data
CN114821785A (en) Method for identifying potential criminals in sudden manner based on assumed evaluation and Bayesian learning
CN113505759B (en) Multitasking method, multitasking device and storage medium
CN114493322A (en) Passenger transport center area monitoring and alarming method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210727)