CN111460920A - Target tracking and segmenting system for complex scene of airport - Google Patents

Target tracking and segmenting system for complex scene of airport

Info

Publication number
CN111460920A
CN111460920A (application number CN202010177894.7A)
Authority
CN
China
Prior art keywords
target
tracking
image
airport
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010177894.7A
Other languages
Chinese (zh)
Inventor
赵丽
张笑钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Big Data And Information Technology Research Institute Of Wenzhou University
Original Assignee
Big Data And Information Technology Research Institute Of Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Big Data And Information Technology Research Institute Of Wenzhou University filed Critical Big Data And Information Technology Research Institute Of Wenzhou University
Priority to CN202010177894.7A priority Critical patent/CN111460920A/en
Publication of CN111460920A publication Critical patent/CN111460920A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/247Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Abstract

The invention provides a target tracking and segmenting system for complex airport scenes, which comprises: an auxiliary monitoring module, a target data receiving module, a target segmentation and tracking module, a mobile display terminal and an information fusion decision module. The auxiliary monitoring module comprises a scene radar monitoring unit and a panoramic video monitoring unit; the target data receiving module is used for sending the rectified, converted and calibrated image to the target segmentation and tracking module; the target segmentation and tracking module is used for carrying out target segmentation detection, identification and tracking and for sending the tracking and identification result of a target and the motion information of the target to the information fusion decision module; the mobile display terminal comprises a data acquisition unit, a display unit and an emergency alarm unit. The system can identify and track a plurality of targets in a scene in real time and has high robustness.

Description

Target tracking and segmenting system for complex scene of airport
Technical Field
The invention relates to the technical field of airport scene activity monitoring, in particular to a target tracking and segmenting system for an airport complex scene.
Background
The rapid development of the air transportation industry has made airport surface operations increasingly complex. Collision conflicts between aircraft and vehicles are unavoidable in daily operations and become more severe under low-visibility conditions such as heavy fog. Traditional airport surface monitoring relies on dispatchers or supervisors to monitor the positions of aircraft and vehicles, report related information and tasks to airport vehicle drivers, and record the states of all working links. This process has a low degree of automation, low safety and efficiency, and lacks flight-information sharing: supervisors cannot monitor the real-time positions of vehicles and personnel, drivers cannot see the road clearly under special conditions, which affects driving, and voice broadcasting may suffer from false reports. Most existing airport surface guidance and control systems rely on one or more sensors (surface surveillance radar and video cameras) to measure and estimate the motion states of multiple tracked objects, but vehicle drivers and aircraft pilots cannot receive alarm information in time, so safety accidents still occur.
In summary, providing a target tracking and segmenting system for complex airport scenes that can identify and track multiple moving targets on the airport surface in real time with high precision, and that gives timely early warning and parking-space allocation to airport service vehicles and aircraft to avoid collision accidents, is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
To address the above problems and needs, the present invention provides a target tracking and segmenting system and method for complex airport scenes, which solves the above technical problems through the following technical solutions.
In order to achieve the purpose, the invention provides the following technical scheme: an airport complex scene-oriented target tracking and segmentation system comprises: the system comprises an auxiliary monitoring module, a target data receiving module, a target segmentation and tracking module, a mobile display terminal and an information fusion decision module;
the auxiliary monitoring module comprises a scene radar monitoring unit and a panoramic video monitoring unit, and is used for monitoring the scene activities of the airport in real time to obtain an airport panoramic image and scene target position information and sending the airport panoramic image and the scene target position information to the target data receiving module;
the target data receiving module is used for carrying out rectification conversion processing on the airport panoramic image and target position information and carrying out position calibration on a manually defined airport range, and the target data receiving module sends the rectified, converted and calibrated image to the target segmentation and tracking module;
the target segmentation and tracking module carries out target segmentation detection, identification and tracking on the image which is corrected, converted and calibrated by the target data receiving module, calculates the movement speed of the tracked target, and sends the tracking and identification result of the target and the movement information of the target to the information fusion decision module;
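The module above "calculates the movement speed of the tracked target", but the disclosure does not give the computation. A minimal sketch, assuming the speed is derived from the displacement of the tracked target's centroid between consecutive frames together with an assumed pixel-to-metre scale and frame rate (the function name and parameters are illustrative, not part of the disclosure):

```python
import math

def target_speed(prev_centroid, curr_centroid, meters_per_pixel, fps):
    """Estimate target speed (m/s) from centroids of two consecutive frames.

    prev_centroid / curr_centroid: (x, y) pixel coordinates in the rectified,
    calibrated image. meters_per_pixel and fps would come from camera
    calibration; the values used below are assumptions for illustration.
    """
    dx = curr_centroid[0] - prev_centroid[0]
    dy = curr_centroid[1] - prev_centroid[1]
    pixels_per_frame = math.hypot(dx, dy)
    return pixels_per_frame * meters_per_pixel * fps

# A target moving 5 px per frame at 0.2 m/px and 25 fps -> 25 m/s
speed = target_speed((100, 100), (103, 104), 0.2, 25)
```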
the mobile display terminal comprises a data acquisition unit, a display unit and an emergency alarm unit, wherein the data acquisition unit is used for acquiring scene target position information, distance alarm information and parking space real-time information sent by the information fusion decision module.
Further, the target data receiving module performs distortion correction on the received panoramic image of the airport, reads the current streaming media of the target to be tracked by using FFmpeg, decompresses the streaming media to obtain three-channel YUV AVFrame images, converts the AVFrame images into Mat images, processes them to obtain a digital image sequence, and outputs the digital image sequence to the target segmentation and tracking module.
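The decoding chain above (FFmpeg → YUV AVFrame → Mat) performs the actual conversion inside FFmpeg/OpenCV; as a hedged illustration of the colour-space step only, here is a NumPy sketch of a planar YUV-to-RGB conversion (the BT.601 full-range coefficients are an assumption, not stated in the disclosure):

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Convert same-size planar YUV channels (full-range, assumed BT.601)
    to an (H, W, 3) RGB uint8 array."""
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0  # chroma is stored offset by 128
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# A neutral frame (U = V = 128) stays grey after conversion.
grey = yuv_to_rgb(np.full((4, 4), 90, np.uint8),
                  np.full((4, 4), 128, np.uint8),
                  np.full((4, 4), 128, np.uint8))
```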
Further, the target segmentation detection and identification tracking specifically includes: performing background segmentation and updating on the image, calculating the median λ(x) and standard deviation σ(x) of each pixel intensity value in the video image within a certain time to construct the background, detecting moving targets in subsequent sequence images using the constructed background model, and periodically updating the background model; inputting an image frame and estimating the accurate position of the target by adopting a shadow detection and removal algorithm based on the normalized correlation coefficient, removing image shadows so as to prevent shadows from merging with target blobs; and finally performing multi-target tracking by adopting the MHT (multiple hypothesis tracking) algorithm.
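The disclosure names a "shadow detection and removal algorithm based on the normalized correlation coefficient" without giving its details. A common formulation, shown here purely as an assumed sketch, flags a foreground patch as shadow when it correlates strongly with the corresponding background patch (same texture) while being uniformly darker; all function names and thresholds below are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two equal-size patches."""
    a = a.astype(np.float64).ravel(); a -= a.mean()
    b = b.astype(np.float64).ravel(); b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return 0.0 if denom == 0 else float(a @ b / denom)

def is_shadow(patch, bg_patch, corr_thresh=0.9, ratio_lo=0.4, ratio_hi=0.9):
    """Shadow test: textured like the background but uniformly darker.

    Thresholds are assumptions; a real system would tune them per scene.
    """
    ratio = patch.mean() / max(bg_patch.mean(), 1e-9)
    return ncc(patch, bg_patch) > corr_thresh and ratio_lo < ratio < ratio_hi
```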
Still further, the background construction specifically comprises: the background model at pixel x adopts

[m(x), n(x), d(x)] = [ min_z V_z(x), max_z V_z(x), max_z |V_z(x) − V_(z−1)(x)| ]

wherein each pixel x has three indexes: a minimum intensity value m(x), a maximum intensity value n(x), and a maximum intensity difference d(x) between consecutive frames; V is an array containing N consecutive images, and V_i(x) represents the pixel value at position x of the i-th frame image. If |V_i(x) − λ(x)| < 2·σ(x), then V_i(x) belongs to the stationary background pixel set V_z(x); otherwise it is determined to be a moving-object pixel.
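As an illustrative aid (not part of the disclosure), the per-pixel background construction described above can be sketched in NumPy. The use of `<=` in the stationarity test (so constant pixels count as stationary; the text writes a strict inequality) and the exclusion of non-stationary frames via NaN are implementation assumptions:

```python
import numpy as np

def build_background(frames):
    """Build the [m, n, d] background model from N consecutive frames.

    frames: array of shape (N, H, W), grayscale. For each pixel, frames
    within 2*sigma of the per-pixel median are treated as stationary, and
    m (min intensity), n (max intensity) and d (max interframe difference)
    are computed over those stationary samples only.
    """
    V = np.asarray(frames, dtype=np.float64)
    lam = np.median(V, axis=0)                 # per-pixel median lambda(x)
    sigma = V.std(axis=0)                      # per-pixel std dev sigma(x)
    stationary = np.abs(V - lam) <= 2.0 * sigma
    masked = np.where(stationary, V, np.nan)   # drop moving-object samples
    m = np.nanmin(masked, axis=0)
    n = np.nanmax(masked, axis=0)
    d = np.nanmax(np.abs(np.diff(masked, axis=0)), axis=0)
    return m, n, d
```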
Further, the detecting of the moving object in the subsequent sequence of images specifically comprises: calculating the median d_t of the maximum intensity differences d(x) over all pixels in the background model, and performing threshold segmentation on the image I_n to be inspected according to d_t; if the pixel at position x in I_n satisfies |I_n(x) − m(x)| < k·d_t ∨ |I_n(x) − n(x)| < k·d_t, it is classified as background, otherwise it is classified as foreground, wherein I_n(x) represents the pixel value at position x at time n and k is the segmentation threshold parameter.
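The threshold-segmentation rule above can be sketched as follows (the value of the segmentation threshold parameter k is an assumption for illustration):

```python
import numpy as np

def classify_foreground(frame, m, n, d, k=2.0):
    """Threshold segmentation against the [m, n, d] background model.

    A pixel is background when its value is within k*d_t of either the
    per-pixel minimum m or maximum n, where d_t is the median over all
    pixels of the maximum interframe differences d. Returns a boolean
    foreground mask.
    """
    d_t = np.median(d)
    frame = frame.astype(np.float64)
    background = (np.abs(frame - m) < k * d_t) | (np.abs(frame - n) < k * d_t)
    return ~background
```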
Further, the periodic updating of the background model comprises: setting L as the background-model updating period; calculating the mean and variance of the first L/2 frames of data and establishing a model using them; filtering the data of the latter L/2 frames according to this model; calculating the minimum intensity value, the maximum intensity difference of consecutive frames, and the mean of the latter L/2 frames of images; and updating the background [m(x), n(x), d(x)] according to the number of times g(x, t) the pixel is classified as background in the latter L/2 frames of images, the number of times m(x, t) the pixel is classified as foreground, and the time h(x, t) at which the pixel was last classified as foreground.
Further, when g (x, t)>l L/2, background pixel background model [ m ] is adoptedb(x),nb(x),db(x)]When g (x, t)<l*L/2∧m(x,t)<r L/2, adopting foreground pixel background model mf(x),nf(x),df(x)]Otherwise, the current background model [ m ] is adoptedc(x),nc(x),dc(x)]Wherein the parameter l and the parameter r are fixed values。
Further, the parameter l is 0.8, and the parameter r is 0.1.
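Taken literally, the period-end model-selection rule of the preceding paragraphs can be sketched as follows (a sketch of the stated conditions only; the function and argument names are illustrative):

```python
def select_background_model(g, m_count, L, l=0.8, r=0.1):
    """Choose which per-pixel model to adopt at the end of an update period.

    g: times the pixel was classified as background in the latter L/2 frames;
    m_count: the text's m(x, t) counter. Follows the stated conditions
    literally: background-pixel model if g > l*L/2; foreground-pixel model
    if g < l*L/2 and m_count < r*L/2; otherwise keep the current model.
    """
    half = L / 2.0
    if g > l * half:
        return "background"
    if g < l * half and m_count < r * half:
        return "foreground"
    return "current"
```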
Furthermore, the target segmentation and tracking module wirelessly sends the tracking and recognition result of the target and the motion information of the target to the information fusion decision module located in a remote monitoring center, where they are matched with flight information, so as to realize information identification of surface surveillance targets. The remote monitoring center further comprises a surface moving-target monitoring platform and a cloud server, and the surface moving-target monitoring platform is connected with the cloud server.
The invention has the advantages that it can identify and track a plurality of moving targets in an airport scene in real time with high robustness, better suppresses the influence of external clutter factors such as shadows, achieves better precision, and adopts onboard and vehicle-mounted mobile display terminals to provide timely early warning and parking-space allocation for airport service vehicles and airplanes, thereby avoiding collision accidents.
The following description of the preferred embodiments for carrying out the present invention will be made in detail with reference to the accompanying drawings so that the features and advantages of the present invention can be easily understood.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings of the embodiments of the present invention will be briefly described below. Wherein the drawings are only for purposes of illustrating some embodiments of the invention and are not to be construed as limiting the invention to all embodiments thereof.
FIG. 1 is a schematic view of the structure of the present invention.
FIG. 2 is a schematic diagram of the control steps of the present invention.
Fig. 3 is a schematic diagram of specific steps of target segmentation detection and identification tracking in the present invention.
Fig. 4 is a schematic diagram of a moving object detection process in this embodiment.
Fig. 5 is a schematic diagram illustrating a background updating process in this embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of specific embodiments of the present invention. Like reference symbols in the various drawings indicate like elements. It should be noted that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, are within the scope of protection of the invention.
The invention provides a detection method and system for airport surface moving targets, which can identify and track a plurality of moving targets in an airport scene in real time, adopts onboard and vehicle-mounted mobile display terminals to provide timely early warning and parking-space allocation for airport service vehicles and airplanes, and avoids collision accidents. As shown in fig. 1 to 5, the target tracking and segmenting system for airport complex scenes comprises: an auxiliary monitoring module, a target data receiving module, a target segmentation and tracking module, a mobile display terminal and an information fusion decision module. The auxiliary monitoring module comprises a scene radar monitoring unit and a panoramic video monitoring unit, and is used for monitoring the scene activities of the airport in real time to obtain an airport panoramic image and scene target position information and sending them to the target data receiving module. The target data receiving module performs rectification conversion processing on the airport panoramic image and target position information, performs position calibration on a manually defined airport range, and sends the rectified, converted and calibrated image to the target segmentation and tracking module. As shown in fig. 2, the specific control steps of the present invention include:
a. The target segmentation and tracking module performs target segmentation detection, identification and tracking on the image rectified, converted and calibrated by the target data receiving module, and calculates the movement speed of the tracked target.
b. The mobile display terminal comprises a data acquisition unit, a display unit and an emergency alarm unit. The data acquisition unit acquires the scene target position information, distance alarm information and parking-space real-time information sent by the information fusion decision module; the display unit graphically displays the acquired information; the emergency alarm unit gives an alarm when the distance between a surrounding target and the local vehicle falls below the safe distance or the vehicle exceeds the speed limit; both the display unit and the emergency alarm unit are connected with the data acquisition unit.
c. The target segmentation and tracking module wirelessly sends the tracking and recognition result of the target and the motion information of the target to the information fusion decision module located in the remote monitoring center, where they are matched with flight information, realizing information identification of surface surveillance targets. The remote monitoring center further comprises a surface moving-target monitoring platform and a cloud server; the surface moving-target monitoring platform is connected with the cloud server and is used for integrated airport activity management and control, including a navigation guidance module, a flight information management system, a personnel post management system, etc.; the cloud server is used for storing relevant data.
The target data receiving module performs distortion correction on the received panoramic image of the airport, reads the current streaming media of the target to be tracked by using FFmpeg, decompresses the streaming media to obtain three-channel YUV AVFrame images, converts the AVFrame images into Mat images, processes them to obtain a digital image sequence, and outputs the sequence to the target segmentation and tracking module. As shown in fig. 3, the specific steps of target segmentation detection and identification tracking include: S1, performing background segmentation and updating on the image, calculating the median λ(x) and standard deviation σ(x) of each pixel intensity value in the video image within a certain time to construct the background; S2, detecting moving targets in subsequent sequence images by using the constructed background model and periodically updating the background model; S3, inputting image frames and estimating the accurate position of the target by adopting a shadow detection and removal algorithm based on the normalized correlation coefficient, removing image shadows to prevent shadows from merging with target blobs; S4, finally performing multi-target tracking by adopting the MHT algorithm.
The background construction specifically includes: the background model at pixel x adopts

[m(x), n(x), d(x)] = [ min_z V_z(x), max_z V_z(x), max_z |V_z(x) − V_(z−1)(x)| ]

wherein each pixel x has three indexes: a minimum intensity value m(x), a maximum intensity value n(x), and a maximum intensity difference d(x) between consecutive frames; V is an array containing N consecutive images, and V_i(x) represents the pixel value at position x of the i-th frame image. If |V_i(x) − λ(x)| < 2·σ(x), then V_i(x) belongs to the stationary background pixel set V_z(x); otherwise it is judged to be a moving-object pixel.
As shown in fig. 4, the moving-object detection process is as follows: calculate the median d_t of the maximum intensity differences d(x) over all pixels in the background model, and perform threshold segmentation on the image I_n to be inspected according to d_t; if the pixel at position x in I_n satisfies |I_n(x) − m(x)| < k·d_t ∨ |I_n(x) − n(x)| < k·d_t, it is classified as background, otherwise as foreground, where I_n(x) represents the pixel value at position x at time n and k is the segmentation threshold parameter.
As shown in fig. 5, the background updating process is as follows: set L as the background-model updating period; calculate the mean and variance of the first L/2 frames of data and establish a model using them; filter the data of the latter L/2 frames according to this model; calculate the minimum intensity value, the maximum intensity difference of consecutive frames, and the mean of the latter L/2 frames of images; then update the background [m(x), n(x), d(x)] according to the number of times g(x, t) the pixel is classified as background in the latter L/2 frames, the number of times m(x, t) the pixel is classified as foreground, and the time h(x, t) at which the pixel was last classified as foreground. When g(x, t) > l·L/2, the background-pixel background model [m_b(x), n_b(x), d_b(x)] is adopted; when g(x, t) < l·L/2 ∧ m(x, t) < r·L/2, the foreground-pixel background model [m_f(x), n_f(x), d_f(x)] is adopted; otherwise the current background model [m_c(x), n_c(x), d_c(x)] is adopted, wherein the parameter l and the parameter r are fixed values, with l = 0.8 and r = 0.1.
In this embodiment, the sensing and monitoring unit is used for monitoring the movement position and track of the scene activity target, and includes a scene monitoring radar, a CNSS global navigation satellite system, ADS-B, and the like.
It should be noted that the described embodiments are only preferred ways of implementing the invention; all obvious modifications within the scope of the invention are included in the present general inventive concept.

Claims (9)

1. A target tracking and segmenting system for complex airport scenes is characterized by comprising the following components: the system comprises an auxiliary monitoring module, a target data receiving module, a target segmentation and tracking module, a mobile display terminal and an information fusion decision module;
the auxiliary monitoring module comprises a scene radar monitoring unit and a panoramic video monitoring unit, and is used for monitoring the scene activities of the airport in real time to obtain an airport panoramic image and scene target position information and sending the airport panoramic image and the scene target position information to the target data receiving module;
the target data receiving module is used for carrying out rectification conversion processing on the airport panoramic image and target position information and carrying out position calibration on a manually defined airport range, and the target data receiving module sends the rectified, converted and calibrated image to the target segmentation and tracking module;
the target segmentation and tracking module carries out target segmentation detection, identification and tracking on the image which is corrected, converted and calibrated by the target data receiving module, calculates the movement speed of the tracked target, and sends the tracking and identification result of the target and the movement information of the target to the information fusion decision module;
the mobile display terminal comprises a data acquisition unit, a display unit and an emergency alarm unit, wherein the data acquisition unit is used for acquiring scene target position information, distance alarm information and parking space real-time information sent by the information fusion decision module.
2. The airport complex scene-oriented target tracking and segmenting system of claim 1, wherein the target data receiving module performs distortion correction on the received panoramic image of the airport, reads the current streaming media of the target to be tracked by using FFmpeg, decompresses the streaming media to obtain three-channel YUV AVFrame images, converts the AVFrame images into Mat images, processes them to obtain a digital image sequence, and outputs the digital image sequence to the target segmentation and tracking module.
3. The airport complex scene-oriented target tracking and segmentation system as claimed in claim 2, wherein the target segmentation detection and recognition tracking specifically comprises: performing background segmentation and updating on the image, calculating the median λ(x) and standard deviation σ(x) of each pixel intensity value in the video image within a certain time to construct the background, detecting moving targets in subsequent sequence images using the constructed background model, and periodically updating the background model; inputting an image frame and estimating the accurate position of the target by adopting a shadow detection and removal algorithm based on the normalized correlation coefficient, removing image shadows so as to prevent shadows from merging with target blobs; and finally performing multi-target tracking by adopting the MHT algorithm.
4. The airport complex scene-oriented target tracking and segmentation system of claim 3, wherein the background construction specifically comprises: the background model at pixel x adopts

[m(x), n(x), d(x)] = [ min_z V_z(x), max_z V_z(x), max_z |V_z(x) − V_(z−1)(x)| ]

wherein each pixel x has three indexes: a minimum intensity value m(x), a maximum intensity value n(x), and a maximum intensity difference d(x) between consecutive frames; V is an array containing N consecutive images, and V_i(x) represents the pixel value at position x of the i-th frame image; if |V_i(x) − λ(x)| < 2·σ(x), then V_i(x) belongs to the stationary background pixel set V_z(x), and otherwise it is determined to be a moving-object pixel.
5. The airport complex scene-oriented target tracking and segmentation system as claimed in claim 4, wherein the detecting of the moving object in subsequent sequence images specifically comprises: calculating the median d_t of the maximum intensity differences d(x) over all pixels in the background model, and performing threshold segmentation on the image I_n to be inspected according to d_t; if the pixel at position x in I_n satisfies |I_n(x) − m(x)| < k·d_t ∨ |I_n(x) − n(x)| < k·d_t, it is classified as background, otherwise it is classified as foreground, wherein I_n(x) represents the pixel value at position x at time n and k is the segmentation threshold parameter.
6. The airport complex scene-oriented object tracking and segmentation system of claim 5, wherein the periodic updating of the background model comprises: setting L as the background-model updating period; calculating the mean and variance of the first L/2 frames of data and establishing a model using them; filtering the data of the latter L/2 frames according to this model; calculating the minimum intensity value, the maximum intensity difference of consecutive frames, and the mean of the latter L/2 frames of images; and updating the background [m(x), n(x), d(x)] according to the number of times g(x, t) the pixel is classified as background in the latter L/2 frames of images, the number of times m(x, t) the pixel is classified as foreground, and the time h(x, t) at which the pixel was last classified as foreground.
7. The airport complex scene-oriented object tracking and segmentation system as claimed in claim 6, wherein when g(x, t) > l·L/2, a background-pixel background model [m_b(x), n_b(x), d_b(x)] is adopted; when g(x, t) < l·L/2 ∧ m(x, t) < r·L/2, a foreground-pixel background model [m_f(x), n_f(x), d_f(x)] is adopted; otherwise the current background model [m_c(x), n_c(x), d_c(x)] is adopted, wherein the parameter l and the parameter r are fixed values.
8. The airport complex scene-oriented object tracking and segmentation system as claimed in claim 7, wherein the parameter l is 0.8 and the parameter r is 0.1.
9. The airport complex scene-oriented target tracking and segmenting system of claim 1, wherein the target segmentation and tracking module wirelessly transmits the tracking and recognition result of the target and the motion information of the target to the information fusion decision module located in a remote monitoring center, where they are matched with flight information, so as to realize information identification of surface surveillance targets; the remote monitoring center further comprises a surface moving-target monitoring platform and a cloud server, and the surface moving-target monitoring platform is connected with the cloud server.
CN202010177894.7A 2020-03-13 2020-03-13 Target tracking and segmenting system for complex scene of airport Pending CN111460920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010177894.7A CN111460920A (en) 2020-03-13 2020-03-13 Target tracking and segmenting system for complex scene of airport

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010177894.7A CN111460920A (en) 2020-03-13 2020-03-13 Target tracking and segmenting system for complex scene of airport

Publications (1)

Publication Number Publication Date
CN111460920A true CN111460920A (en) 2020-07-28

Family

ID=71680787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010177894.7A Pending CN111460920A (en) 2020-03-13 2020-03-13 Target tracking and segmenting system for complex scene of airport

Country Status (1)

Country Link
CN (1) CN111460920A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201773466U (en) * 2009-09-09 2011-03-23 深圳辉锐天眼科技有限公司 Video monitoring and pre-warning device for detecting, tracking and identifying object detention/stealing event
CN102291574A (en) * 2011-08-31 2011-12-21 山东轻工业学院 Complicated scene target movement tracking system based on embedded technique and light transmission and monitoring method thereof
CN105678803A (en) * 2015-12-29 2016-06-15 南京理工大学 Video monitoring target detection method based on W4 algorithm and frame difference


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ismail Haritaoglu et al., "W4: Real-Time Surveillance of People and Their Activities" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200909A (en) * 2020-09-24 2021-01-08 上海麦图信息科技有限公司 Airport integrated monitoring system displaying integrated information of airport surface objects
CN112860946A (en) * 2021-01-18 2021-05-28 四川弘和通讯有限公司 Method and system for converting video image information into geographic information
CN112860946B (en) * 2021-01-18 2023-04-07 四川弘和通讯集团有限公司 Method and system for converting video image information into geographic information
CN113160250A (en) * 2021-04-23 2021-07-23 电子科技大学长三角研究院(衢州) Airport scene surveillance video target segmentation method based on ADS-B position prior
CN113138932A (en) * 2021-05-13 2021-07-20 北京字节跳动网络技术有限公司 Method, device and equipment for verifying gesture recognition result of algorithm library

Similar Documents

Publication Publication Date Title
CN111460920A (en) Target tracking and segmenting system for complex scene of airport
CN111382768B (en) Multi-sensor data fusion method and device
US11380105B2 (en) Identification and classification of traffic conflicts
Bas et al. Automatic vehicle counting from video for traffic flow analysis
CN113706737B (en) Road surface inspection system and method based on automatic driving vehicle
CN111383480B (en) Method, apparatus, device and medium for hazard warning of vehicles
US11010602B2 (en) Method of verifying a triggered alert and alert verification processing apparatus
CN112700470A (en) Target detection and track extraction method based on traffic video stream
CN112382131B (en) Airport scene safety collision avoidance early warning system and method
EP2709066A1 (en) Concept for detecting a motion of a moving object
CN113593250A (en) Illegal parking detection system based on visual identification
CN111460938B (en) Vehicle driving behavior real-time monitoring method and device
CN113139482A (en) Method and device for detecting traffic abnormity
CN111462534B (en) Airport moving target detection system and method based on intelligent perception analysis
CN112861902A (en) Method and apparatus for determining a trajectory of a moving element
EP2709065A1 (en) Concept for counting moving objects passing a plurality of different areas within a region of interest
CN115755094A (en) Obstacle detection method, apparatus, device and storage medium
CN114581863A (en) Vehicle dangerous state identification method and system
KR102492290B1 (en) Drone image analysis system based on deep learning for traffic measurement
CN113256014B (en) Intelligent detection system for 5G communication engineering
CN117897737A (en) Unmanned aerial vehicle monitoring method and device, unmanned aerial vehicle and monitoring equipment
CN113744304A (en) Target detection tracking method and device
CN113989731A (en) Information detection method, computing device and storage medium
US20220366586A1 (en) Autonomous agent operation using histogram images
CN117590863B (en) Unmanned aerial vehicle cloud edge end cooperative control system of 5G security rescue net allies oneself with

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination