CN111079525A - Image processing method, apparatus, system and storage medium - Google Patents


Info

Publication number
CN111079525A
Authority
CN
China
Prior art keywords
target object
image
historical
target
images
Prior art date
Legal status (an assumption, not a legal conclusion)
Granted
Application number
CN201911073091.0A
Other languages
Chinese (zh)
Other versions
CN111079525B (en)
Inventor
Meng Wei (孟伟)
Current Assignee (listed assignees may be inaccurate)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201911073091.0A
Publication of CN111079525A
Application granted
Publication of CN111079525B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06: Energy or water supply
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/48: Matching video sequences


Abstract

Embodiments of the present application provide an image processing method, device, system, and storage medium. In these embodiments, the historical movement track of a target object is determined from a plurality of historical images that were collected before an image to be identified and that contain the target object; whether the image to be identified contains the target object is then determined from that historical movement track. Images containing the target object can thus be screened out, identification of the target object is realized, and subsequent checking of the target object's condition is facilitated.

Description

Image processing method, apparatus, system and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing system, and a storage medium.
Background
In the transportation field, places where flights, vehicles, or ships arrive, depart, and stop (especially airports, railway stations, bus stations, ports, and docks) are often equipped with multiple cameras, and the videos shot by these cameras can serve as a reference for monitoring the operating condition of the arriving and departing aircraft, vehicles, or ships.
Taking an airport as an example, when the airport optimizes its airplane scheduling flow or investigates faults such as flight delays, the videos collected by cameras in the airport can be consulted. In practical applications, however, the airplane to be checked cannot be identified in the video shot by a camera.
Disclosure of Invention
Aspects of the present application provide an image processing method, device, system, and storage medium, which are used to identify a target object and thereby facilitate subsequent checking of the target object's condition.
An embodiment of the present application provides an image processing method, including:
acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area;
acquiring a plurality of historical images collected before the image to be identified; the plurality of historical images are images which are collected during the process that a target object moves in the designated area and contain the target object;
determining a historical movement track of the target object in a historical time period according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track.
An embodiment of the present application further provides an image processing method, including: acquiring an image to be identified, wherein the image to be identified is acquired by first image acquisition equipment arranged in a designated area;
acquiring a plurality of historical images which are acquired by at least one second image acquisition device when a target object moves in the designated area and contain the target object; the second image acquisition equipment is arranged in the designated area and is positioned in front of the first image acquisition equipment;
determining a historical movement track of the target object according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track.
An embodiment of the present application further provides a monitoring system, including: the system comprises a server device and a plurality of image acquisition devices arranged in a designated area;
the plurality of image acquisition devices are used for acquiring images in the designated area, and the images comprise moving objects appearing in the designated area;
the server device is configured to: acquire an image to be identified that is acquired by a target image acquisition device among the plurality of image acquisition devices; acquire, from the images acquired by the plurality of image acquisition devices, a plurality of historical images that were collected before the image to be identified and that contain a target object; determine the historical movement track of the target object in a historical time period according to the plurality of historical images; and determine whether the image to be identified contains the target object according to the historical movement track.
The embodiment of the present application further provides an airport monitoring system, which includes: the system comprises a server-side device and a plurality of cameras arranged in an airport;
the cameras are used for collecting images of all airplanes in the airport;
the server device is configured to: acquire a plurality of historical images containing a target airplane that were collected before an image to be identified acquired by a target camera; determine the historical movement track of the target airplane in a historical time period according to the plurality of historical images; and determine whether the image to be identified contains the target airplane according to the historical movement track; wherein the target camera is any one of the plurality of cameras other than a reference camera.
An embodiment of the present application further provides a computer device, including: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area;
acquiring a plurality of historical images collected before the image to be identified; the plurality of historical images are images which are collected during the process that a target object moves in the designated area and contain the target object;
determining a historical movement track of the target object in a historical time period according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track.
An embodiment of the present application further provides a computer device, including: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring an image to be identified, wherein the image to be identified is acquired by first image acquisition equipment arranged in a designated area;
acquiring a plurality of historical images which are acquired by at least one second image acquisition device when a target object moves in the designated area and contain the target object; the second image acquisition equipment is arranged in the designated area and is positioned in front of the first image acquisition equipment;
determining a historical movement track of the target object according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track of the target object.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method.
In the embodiments of the present application, the historical movement track of the target object is determined from a plurality of historical images that were collected before the image to be identified and that contain the target object; whether the image to be identified contains the target object is then determined from that historical movement track. Images containing the target object can thus be screened out, identification of the target object is realized, and subsequent checking of the target object's condition is facilitated.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic structural diagram of an airport monitoring system according to an embodiment of the present application;
fig. 1b is a schematic diagram of a method for determining a first position according to an embodiment of the present application;
fig. 1c is a schematic structural diagram of another airport monitoring system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3a is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 3b is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a monitoring system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another computer device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Aiming at the technical problem that, in the existing transportation field, a target object cannot be identified in the video shot by a camera, in some embodiments of the present application the historical movement track of the target object is determined from a plurality of historical images that were collected before the image to be identified and that contain the target object; whether the image to be identified contains the target object is then determined from that historical movement track. Images containing the target object can thus be screened out, identification of the target object is realized, and subsequent checking of the target object's condition is facilitated.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic structural diagram of an airport monitoring system according to an embodiment of the present application. As shown in fig. 1a, the system comprises a server device 10a and a plurality of cameras 10b disposed within the airport. The structure of the airport, the installation positions and number of the cameras in the airport, and the implementation form of the cameras are merely exemplary and are not limiting. In practice, as shown in fig. 1a, the airport includes a terminal building, ferry vehicles (not shown in fig. 1a), and the like.
In this embodiment, the server device 10a is a computer device capable of performing image processing, and typically has the capability of undertaking and guaranteeing services. The server device 10a may be a single server, a cloud server array, or a Virtual Machine (VM) running in a cloud server array. The server device 10a may also be another computing device with the corresponding service capability, such as a terminal device (running an image processing program) like a computer. The embodiment of the present application does not limit the relative position of the server device 10a and the airport; the server device 10a may be deployed inside or outside the airport.
In this embodiment, the server device 10a may process the image to be recognized online or offline. Optionally, there may be a wireless connection between the server device 10a and each camera 10b. The server device 10a may be communicatively connected to the cameras 10b through a mobile network; accordingly, the network format of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMAX, and the like. Alternatively, the server device 10a may be communicatively connected to each camera 10b through Bluetooth, WiFi, infrared, or the like.
The plurality of cameras 10b can acquire images within the airport. This embodiment mainly processes the images of airplanes captured by the cameras 10b, so the following description focuses on how the images of the airplanes in the airport captured by the plurality of cameras 10b are processed.
In this embodiment, the plurality of cameras 10b may capture images of each aircraft within the airport, including images of each aircraft as it moves within the airport. An aircraft that has landed moves within the airport until it parks on the apron; its route generally runs from the runway to a taxiway and from the taxiway to the corresponding stand on the apron, as shown by the dotted line in fig. 1a. Of course, an aircraft waiting to take off also moves within the airport; its route generally runs from the stand to a taxiway and from the taxiway to the runway for takeoff. The above is only an exemplary description of how an airplane moves within an airport and does not mean that the airplane passes only through the runway, taxiways, and apron.
In practical applications, the distance between a camera and an airplane moving in the airport is relatively long, and the airplane moves relatively fast, so the images of the moving airplane acquired by a camera 10b cannot display the airplane's identification information. In the prior art, the server device 10a therefore cannot identify the airplane of interest from the images acquired by the cameras 10b. In the embodiments of the present application, for convenience of description and distinction, the airplane of interest is defined as the target airplane.
In practical applications, if the target aircraft is in the landing and taxiing phase, the air traffic control department may notify the server device 10a of the identification information of the target aircraft, its landing time, and the runway on which it lands. The identification information of the target aircraft includes, but is not limited to, the aircraft's registration number and flight number. Based on this, the server device 10a obtains images containing the target aircraft, according to the landing time and the image characteristics of the target aircraft, from the images acquired by the camera deployed at the runway on which the target aircraft lands. In this process, the server device 10a may also obtain the identification information of the target aircraft.
In addition, before the target aircraft lands on the runway, the Automatic Dependent Surveillance-Broadcast (ADS-B) device on the target aircraft may also send the server device 10a the identification information of the target aircraft, the kinematic parameters of the target aircraft before landing, and the Geographic Information System (GIS) information of the target aircraft. The kinematic parameters of the target aircraft before landing include at least one of its moving speed, traveling direction, and acceleration. The server device 10a can calculate the time at which the target airplane reaches the runway threshold from these kinematic parameters and the GIS information. Further, the server device 10a may obtain images containing the target aircraft, according to that time and the image characteristics of the target aircraft, from the images captured by the camera deployed at the threshold of the runway on which the target aircraft lands. In this process, the server device 10a may also obtain the identification information of the target aircraft.
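The threshold-arrival calculation described above reduces to elementary kinematics. A minimal sketch follows, assuming straight-line motion at the reported speed with constant acceleration; the function name and argument layout are illustrative, not taken from the patent:

```python
import math

def estimate_threshold_arrival(distance_m: float, speed_mps: float,
                               accel_mps2: float, t0: float) -> float:
    """Estimate when an aircraft reaches the runway threshold.

    distance_m: remaining distance to the threshold (from GIS information)
    speed_mps:  ground speed reported before landing
    accel_mps2: acceleration (negative while decelerating)
    t0:         timestamp (seconds) of the ADS-B report

    Solves d = v*t + 0.5*a*t^2 for the smallest positive t.
    """
    if abs(accel_mps2) < 1e-9:
        return t0 + distance_m / speed_mps
    disc = speed_mps ** 2 + 2.0 * accel_mps2 * distance_m
    if disc < 0:
        raise ValueError("aircraft stops before reaching the threshold")
    t = (-speed_mps + math.sqrt(disc)) / accel_mps2
    if t < 0:
        t = (-speed_mps - math.sqrt(disc)) / accel_mps2
    return t0 + t
```

For example, an aircraft 1000 m from the threshold at a steady 50 m/s arrives 20 s after the report.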
If the airplane is in the takeoff and taxiing phase, the air traffic control department notifies the server device 10a of the identification information of the target airplane, the stand where the target airplane is parked on the apron, and the takeoff time of the target airplane. Based on this, the server device 10a may obtain images containing the target aircraft, according to the takeoff time and the position of the stand, from the images captured by the camera whose field of view covers that stand.
However, cameras other than those at these two specific positions cannot determine, in the above manner, whether the images they acquire contain the target airplane. In the embodiments of the present application, the camera arranged at the runway threshold and the camera whose field of view covers the stand where the target airplane is parked on the apron are collectively defined as reference cameras. If the airplane is in the landing and taxiing phase, the reference camera is the camera arranged at the runway threshold; if the airplane is in the takeoff and taxiing phase, the reference camera is the camera whose field of view covers the stand where the target airplane is parked on the apron.
The following describes, by way of example, how the server device 10a processes an image to be recognized collected by any camera other than a reference camera. For convenience of description and distinction, the camera that acquires the image to be recognized is defined as the target camera; that is, the target camera is any camera arranged in the airport other than a reference camera.
In this embodiment, the server device 10a may acquire a plurality of historical images containing the target object that were collected before the image to be recognized; here, a plurality means two or more. The historical images may be collected by the target camera, by other cameras arranged in front of the target camera, or by both. Preferably, the plurality of historical images are the M historical images whose acquisition times are closest to the acquisition time of the image to be recognized, where M >= 2 is an integer whose specific value can be set flexibly and is not limited herein. It should be noted that, in the embodiments of the present application, the cameras in front of the target camera are the cameras that the target aircraft passes before reaching the target camera this time. For example, if the target aircraft is in the landing and taxiing phase and moves as shown by the dotted line in fig. 1a, it passes the cameras numbered 1, 2, ..., 7 in turn, and for the camera numbered 3 the cameras in front of it are those numbered 1 and 2. If the target aircraft is in the takeoff and taxiing phase, it passes the cameras numbered 7, 6, ..., 1 in turn (not shown in fig. 1a), and for the camera numbered 3 the cameras in front of it are those numbered 4 to 7.
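The selection of the M most recent historical images can be sketched as follows; the `(timestamp, image_id)` record format is a hypothetical stand-in for whatever index the server device keeps:

```python
def select_recent_history(images, query_ts, m=2):
    """Pick the M historical images acquired closest before query_ts.

    images:   list of (timestamp, image_id) pairs from the target camera
              and the cameras in front of it (illustrative record format).
    query_ts: acquisition time of the image to be recognized.
    Returns the M records collected before query_ts, newest first.
    """
    earlier = [rec for rec in images if rec[0] < query_ts]
    earlier.sort(key=lambda rec: query_ts - rec[0])  # smallest time gap first
    return earlier[:m]
```

With images stamped 1, 3, 5, and 9 and a query at time 8, M=2 returns the images stamped 5 and 3.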
Further, the server device 10a may determine the historical movement track of the target aircraft in a historical time period according to the plurality of historical images. The historical time period is the period over which the plurality of historical images were collected. For example, if the plurality of historical images were collected between 13:00 and 13:05 on July 12, 2019, the historical time period is 13:00-13:05 on July 12, 2019. Further, the server device 10a may determine whether the image to be identified contains the target aircraft according to the historical movement track of the target aircraft.
The airport monitoring system provided by this embodiment can determine the historical movement track of the target airplane from a plurality of historical images that were acquired before the image to be identified and that contain the target airplane, and determine whether the image to be identified contains the target airplane from that historical movement track. Images containing the target airplane can thus be screened out, identification of the target airplane is realized, and subsequent checking of the target airplane's condition is facilitated. For example, the taxiing condition of the target airplane in the airport can be checked based on the images containing the target airplane collected by the multiple cameras in the airport.
On the other hand, if the plurality of historical images are acquired by other cameras arranged in front of the target camera, or by both the target camera and the cameras in front of it, the image processing method provided by the embodiments of the present application can realize cross-camera recognition and tracking of the target aircraft.
In addition, cameras are existing facilities in an airport, so the image processing approach provided by the embodiments of the present application requires no additional image acquisition equipment, that is, no additional investment in image acquisition cost.
It should be noted that, in the embodiments of the present application, if the camera in front of the target camera is a reference camera, the plurality of historical images may be obtained from the images acquired by that reference camera in the manner described above. If the camera in front of the target camera is not a reference camera, the method provided by this embodiment can first be used to determine whether the images acquired by that camera contain the target aircraft, thereby obtaining a plurality of historical images containing the target aircraft. For example, for the image to be recognized acquired by the first camera behind a reference camera, the historical images containing the target airplane acquired by the reference camera can be used to determine whether that image contains the target airplane; the images containing the target airplane acquired by this camera are in turn used when processing the image to be recognized acquired by the next camera, and so on. Alternatively, according to the timestamp of the image to be recognized, the plurality of historical images containing the target airplane whose acquisition times are closest to that of the image to be recognized may be acquired. Accordingly, if the camera in front of the target camera is not a reference camera, the image processing method provided by the embodiments of the present application can be used to determine whether the images acquired before the image to be identified contain the target airplane, and the historical images closest in acquisition time to the image to be identified can then be selected from the images determined to contain the target airplane.
In the embodiments of the present application, the server device 10a may predict, according to the historical movement track, the first position to which the target aircraft has moved at the moment the image to be identified is acquired, and determine whether the image to be identified contains the target aircraft according to the image features of the target aircraft and the first position. For example, the server device 10a may use a Kalman filter to predict the first position to which the target aircraft has moved when the image to be identified is acquired.
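The predict step of a constant-velocity Kalman filter, reduced to the state mean, gives the flavor of this position prediction. This is an illustrative sketch under a constant-velocity assumption, not the patented implementation; a full filter would also propagate the covariance (P = F P Fᵀ + Q) and correct against each new detection:

```python
def kalman_predict(state, dt):
    """One constant-velocity predict step (state mean only).

    state: (x, y, vx, vy) estimated from the historical track points,
           expressed in the preset coordinate system.
    dt:    time from the last track point to the capture of the image
           to be identified.
    Returns the predicted first position together with the carried-over
    velocity components.
    """
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt, vx, vy)
```

For example, a target at (0, 0) moving at (2, 3) units/s is predicted at (4, 6) two seconds later.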
Further, the server device 10a may obtain, from the historical movement track, the timestamp at which the target aircraft passed each historical track point and the position of each historical track point, and calculate the kinematic parameters of the target aircraft in the historical time period from those timestamps and the coordinates of the historical track points in a preset coordinate system. The kinematic parameters of the target aircraft in the historical time period include at least one of its moving speed, traveling direction, and acceleration over that period. The preset coordinate system is the coordinate system in which the coordinates representing the positions of the historical track points are expressed; it can be established from any reference point and reference plane. For example, it can be a coordinate system whose origin is the center of the airport, whose x and y axes are any two perpendicular lines on the ground, and whose z axis is perpendicular to the ground; alternatively, the preset coordinate system may be the world coordinate system, and the like, but is not limited thereto.
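These kinematic parameters can be recovered from consecutive track points by finite differences. A minimal sketch, assuming planar motion in the preset coordinate system; the `(timestamp, x, y)` record format is illustrative:

```python
import math

def kinematics_from_track(track):
    """Compute moving speed, traveling direction, and acceleration of the
    target over the historical time period from its track points.

    track: list of (timestamp, x, y) in the preset coordinate system,
           ordered by time; at least three points are needed to
           estimate acceleration.
    """
    (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) = track[-3:]
    v1 = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)   # speed on first segment
    v2 = math.hypot(x2 - x1, y2 - y1) / (t2 - t1)   # speed on last segment
    heading = math.atan2(y2 - y1, x2 - x1)          # direction, rad CCW from +x
    accel = (v2 - v1) / ((t2 - t0) / 2.0)           # mean acceleration
    return v2, heading, accel
```

For a target moving along the x axis through (0, 0), (1, 0), and (3, 0) at one-second intervals, this yields a speed of 2, a heading of 0, and an acceleration of 1.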
Further, the server device 10a may predict, according to the kinematic parameters of the target aircraft and the position of at least one of the historical track points, a first position to which the target aircraft moves when the image to be identified is acquired. Furthermore, the server device 10a may determine whether the image to be identified includes the target aircraft according to the image feature of the target aircraft and the coordinate of the first position in the preset coordinate system. Wherein the image features of the target aircraft include: color features, texture features, shape features, or spatial relationship features of the target aircraft, but are not limited thereto.
In some embodiments, multiple airplanes with similar image features may be included in the image to be identified. For example, as shown in fig. 1a, for a camera that captures a field of view covering an apron, the captured image may contain multiple airplanes, and the image characteristics of the airplanes may be very similar. Based on this, the server device 10a may identify at least one candidate airplane from the image to be identified according to the image feature of the target airplane; and converting the pixel coordinates of at least one candidate airplane in the image to be identified into the coordinates of the candidate airplanes under a preset coordinate system according to the homography matrix of the camera corresponding to the image to be identified. Further, the server device 10a may determine whether a target aircraft exists in the at least one candidate aircraft according to the coordinates of the first position in the preset coordinate system and the coordinates of the at least one candidate aircraft in the preset coordinate system.
Further, the server device 10a may calculate the distance between each candidate airplane and the first position according to the coordinates of the first position and of the at least one candidate airplane in the preset coordinate system. If any of these distances is less than or equal to the preset distance threshold, it is determined that the target airplane exists among the at least one candidate airplane; correspondingly, if all of the distances are greater than the preset distance threshold, it is determined that the target airplane does not exist among the at least one candidate airplane.
Further, in a case where the target aircraft exists among the at least one candidate aircraft, the server device 10a may select, as the target aircraft, the candidate aircraft having the smallest distance from the first position. For example, as shown in fig. 1b, assuming that the position indicated by the five-pointed star is the predicted first position to which the target aircraft moves when the image to be identified is acquired, the distance between the aircraft numbered B and the first position is smaller than the distances between the aircraft numbered A and C and the first position, so the aircraft numbered B is determined to be the target aircraft.
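The candidate-matching rule described above (keep only candidates within the preset distance threshold of the first position, then take the nearest one) can be sketched as follows; the function name and the dictionary layout of the candidates are illustrative assumptions.

```python
import math

def match_target(first_position, candidates, distance_threshold):
    """first_position: predicted (x, y); candidates: {id: (x, y)}, both
    in the preset coordinate system.  Returns the id of the candidate
    nearest to the first position if it lies within the threshold,
    otherwise None (the target is absent from the image)."""
    best_id, best_dist = None, float("inf")
    for cid, (x, y) in candidates.items():
        d = math.hypot(x - first_position[0], y - first_position[1])
        if d < best_dist:
            best_id, best_dist = cid, d
    return best_id if best_dist <= distance_threshold else None
```

With the fig. 1b layout this returns the nearest in-range candidate, e.g. aircraft B when B is the only candidate within the threshold of the star position.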
For the historical movement track of the target aircraft in the historical time period, in some embodiments, the server device 10a may convert the pixel coordinates of the target aircraft in the multiple historical images into the coordinates, in the preset coordinate system, of the historical track points that the target aircraft passes, according to the homography matrix of the camera corresponding to each historical image, and use these coordinates as the positions of the historical track points. Further, the server device 10a takes the timestamps of the multiple historical images as the timestamps at which the target aircraft passes the historical track points, and can then generate the historical movement track of the target aircraft from these timestamps and positions. The homography matrix of a camera is a conversion matrix between the pixel coordinates of images acquired by the camera and the preset coordinate system. Further, the server device 10a may calculate the homography matrix of a camera according to the pose and the intrinsic and extrinsic parameters of the camera, where the pose of the camera comprises its position coordinates and attitude angle in the preset coordinate system, which can be obtained from the installation position and installation height of the camera.
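The pixel-to-ground conversion and track assembly described above can be sketched as follows, assuming each camera's 3x3 homography matrix (mapping homogeneous pixel coordinates to the preset ground plane) is already known. The function names and the per-image dictionary layout are illustrative assumptions.

```python
import numpy as np

def pixel_to_world(H, pixel_pts):
    """Apply homography H (3x3, image plane -> preset ground plane) to
    pixel coordinates.  pixel_pts: sequence of (u, v) pairs."""
    pts = np.asarray(pixel_pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    world = (H @ homog.T).T
    return world[:, :2] / world[:, 2:3]                # normalize by w

def build_track(images, H_by_camera):
    """images: list of dicts with keys 'camera', 'timestamp' and 'pixel'
    (pixel position of the target in that image).  Returns the historical
    track as a time-ordered list of (timestamp, x, y) track points."""
    track = []
    for im in images:
        x, y = pixel_to_world(H_by_camera[im["camera"]], [im["pixel"]])[0]
        track.append((im["timestamp"], float(x), float(y)))
    return sorted(track)
```

Each camera contributes its own homography, so a track assembled from several cameras ends up in one common ground-plane coordinate system, which is what makes the later distance comparisons meaningful.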
In practical applications, because the monitoring fields of view of the cameras may not overlap, especially in the case where the plurality of historical images are collected by both the target camera and other cameras arranged in front of the target camera, the historical movement track may be discontinuous, and a large error may exist between the subsequently predicted first position and the position to which the target aircraft actually moves. Based on this, the server device 10a may obtain GIS track information of the target aircraft in the historical time period, and correct the historical movement track of the target aircraft in the historical time period by using the GIS track information. Then, the server device 10a determines whether the image to be identified includes the target aircraft according to the corrected historical movement track. Further, since the GIS track information is determined based on the world coordinate system, the preset coordinate system may be the world coordinate system in order to reduce the number of coordinate conversions.
Further, for some airports, as shown in fig. 1c, a scene monitoring radar 10c may be arranged above the ground. The scene monitoring radar 10c may monitor aircraft and vehicle activity within the airport and provide GIS track information for the aircraft and vehicles. Based on this, in some embodiments, the airport monitoring system also includes the scene monitoring radar 10c. The scene monitoring radar 10c sends the detected GIS track information of the target aircraft to the server device 10a. Correspondingly, the server device 10a may obtain the GIS track information of the target aircraft in the historical time period according to the timestamp of the GIS track information of the target aircraft sent by the scene monitoring radar 10c, and correct the historical movement track of the target aircraft in the historical time period by using this GIS track information.
For some aircraft, ADS-B equipment may be installed on board (not shown in figs. 1a-1c). The ADS-B equipment can automatically acquire information such as the GIS track information, altitude, speed, course, and identification information of the airplane, and broadcast this information to other airplanes or to ground stations. In this embodiment, the server device 10a may be a device in a ground station that receives the information broadcast by the ADS-B equipment on the airplane, or another computer device communicating with such a device. Whichever kind of computer device the server device 10a is, it can obtain the GIS track information of the airplane sent by the ADS-B equipment on the airplane. Based on this, in some embodiments, the server device 10a may obtain the GIS track information of the target aircraft in the historical time period according to the timestamp of the GIS track information sent by the ADS-B device on the target aircraft, and correct the historical movement track of the target aircraft in the historical time period by using this GIS track information.
In other embodiments, in order to improve the accuracy of the generated historical movement track of the target aircraft in the historical time period, the server device 10a may further combine the GIS track information of the target aircraft in the historical time period, which is sent by the ADS-B device on the target aircraft, with the GIS track information of the target aircraft in the historical time period, which is detected by the scene monitoring radar 10c, and correct the historical movement track information of the target aircraft in the historical time period.
In the embodiment of the present application, the scene monitoring radar 10c and the cameras 10b are existing facilities in an airport, and the ADS-B device is an existing device on an airplane, so the historical movement track of the target airplane in the historical time period can be corrected without deploying additional positioning devices, that is, without additional investment in positioning cost.
On the other hand, because the operating habits of pilots differ, the ADS-B equipment may be switched off during landing or take-off, so that GIS information of the aircraft on the ground cannot be acquired from the ADS-B equipment; and although the aircraft on the ground can also be positioned by the airport's scene monitoring radar, the radar signal is sometimes blocked, making its positioning inaccurate. Therefore, in this embodiment, when determining the historical movement track of the target aircraft in the historical time period, the GIS information of the target aircraft acquired by the ADS-B device and/or the scene monitoring radar is fused with the images acquired by the cameras, which is beneficial to improving the accuracy of the determined historical movement track of the target aircraft in the historical time period.
In this embodiment, for the images including the target aircraft acquired by the reference camera, the server device 10a may obtain the identification information of the target aircraft from the air traffic control department and/or the ADS-B device on the target aircraft. Then, the server device 10a sequentially determines, starting from the images including the target aircraft acquired by the reference camera, the images including the target aircraft among the images acquired by the cameras behind the reference camera, and thereby obtains the images including the target aircraft acquired by the plurality of cameras 10b in the airport. Further, the server device 10a may add the identification information of the target aircraft as an overlaid tag to the area where the target aircraft is located in these images, and generate a video summary of the target aircraft from the tagged images. In this way, when the video corresponding to the target aircraft needs to be checked, the video summary of the target aircraft can be retrieved simply by entering the identification information of the target aircraft. Further, airport management personnel can optimize the airplane scheduling process according to the video summary of the target aircraft, or visually check the condition of the target aircraft according to the video summary, thereby providing a basis for investigating delays or other faults of the target aircraft.
Optionally, when generating the video summary of the target aircraft, the images including the target aircraft acquired by the cameras may be arranged in chronological order according to their timestamps.
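A minimal sketch of this chronological ordering step; the dictionary layout of the tagged images is an illustrative assumption.

```python
def order_for_summary(tagged_images):
    """tagged_images: list of dicts with 'timestamp' and 'frame' keys,
    where each frame already carries the target's identification tag.
    Returns the frames in chronological order, ready to be encoded
    into the video summary."""
    ordered = sorted(tagged_images, key=lambda im: im["timestamp"])
    return [im["frame"] for im in ordered]
```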
Further, the server device 10a may convert the pixel coordinates of the target aircraft in the image to be identified into coordinates of the target aircraft in the preset coordinate system according to the homography matrix of the camera that acquired the image to be identified, and use these coordinates as the position of the track point corresponding to the target aircraft, with the timestamp of the image to be identified as the timestamp at which the target aircraft passes that position. In the same way, the server device 10a can acquire each track point of the target aircraft in the airport, and thus obtain a time-ordered movement track of the target aircraft in the airport. Further, the server device 10a may splice the movement track of the target aircraft and the background image into frames and combine the frames into a video, thereby obtaining a video summary of the target aircraft in the airport.
In this embodiment, the server device 10a may further obtain the movement tracks of other airplanes currently in the airport, plan a navigation path for the target airplane according to those movement tracks, and guide the target airplane to move along the planned navigation path. In this way, collisions and similar incidents between airplanes moving in the airport can be prevented. The movement tracks of the other airplanes in the airport can also be determined in the manner used for determining the movement track of the target airplane in the above embodiment.
In addition to the airport monitoring system embodiment described above, an embodiment of the present application also provides an image processing method, which is exemplarily described below from the perspective of a server device.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application. The method is applicable to a server device. As shown in fig. 2, the method includes:
201. Acquire an image to be identified that is collected by a target image acquisition device in the designated area.
202. Acquire a plurality of historical images collected before the image to be identified, where the plurality of historical images are images containing the target object collected while the target object moves within the designated area.
203. Determine the historical movement track of the target object in the historical time period according to the plurality of historical images.
204. Determine whether the image to be identified contains the target object according to the historical movement track.
In the present embodiment, the designated area may be any physical location where a plurality of image capturing devices are deployed to capture an image of a moving object. For example, the designated area may be a train station, a passenger station, a port, a dock, a parking lot, a warehouse, etc., wherein the image acquisition device may be a visual sensor such as a camera, a laser sensor, or an infrared sensor, but is not limited thereto.
In the present embodiment, a plurality of image capturing devices may capture images within the designated area. Since the processing in this embodiment mainly concerns images of moving objects appearing in the designated area, the following description focuses on how the images of moving objects captured by the plurality of image capturing devices are processed.
In the present embodiment, the plurality of image capturing devices may capture images of each moving object within the designated area, including images of each moving object as it moves within the designated area. In practical applications, because the distance between an image capturing device and a moving object in the designated area is relatively large and the moving object moves relatively fast, the images captured during movement often cannot show the identification information of the moving object; therefore, in the prior art, a moving object of interest cannot be identified from the images captured by the image capturing devices. In the embodiment of the present application, for convenience of description and distinction, the moving object of interest is defined as the target object. Different application scenarios have different target objects and different identification information. For example, in the airport application scenario above, the target object is the target airplane, and its identification information may be, but is not limited to, the number of the target airplane, the flight number, and the like. For the application scenarios of passenger stations, bus stations, and parking lots, the target object is a target vehicle, and its identification information is the license plate number of the vehicle, and the like. For the application scenarios of docks and ports, the target object is a target ship, and its identification information may be a ship identification number, but is not limited thereto.
In practical applications, when the target object enters the designated area, the management department of the designated area notifies the server device of the identification information of the target object, its entry time, and the entrance it uses. For example, the management of a passenger station may notify the server device of the license plate number of a target vehicle, its arrival time, and the entrance it enters through. Based on this, an image including the target object can be acquired, according to the entry time and the image characteristics of the target object, from the images captured by the image capturing device disposed at the entrance used by the target object. In this process, the identification information of the target object may also be acquired.
In addition, in some embodiments, before the target object enters the designated area, the ADS-B device on the target object may also send, to the server device, identification information of the target object and kinematic parameters of the target object before the target object enters the designated area, as well as GIS information of the target object. The kinematic parameters of the target object before entering the designated area comprise: at least one of a moving speed, a traveling direction, and an acceleration of the target object before entering the designated area. Based on the above, the time for the target object to enter the designated area can be calculated according to the kinematic parameters and the GIS information of the target object before the target object enters the designated area. Further, an image containing the target object may be acquired from an image acquired by an image acquisition device disposed at an entrance of the specified area, according to the time when the target object enters the specified area and the image characteristics of the target object. In this process, identification information of the target object may also be acquired.
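The entry-time estimate described above can be sketched as follows, under the simplifying assumptions that the target moves straight toward the entrance and that the distance to the entrance has already been derived from the GIS information; solving d = v·t + a·t²/2 for t gives the travel time. The function name is illustrative.

```python
import math

def estimated_entry_time(t_report, distance_to_entrance, speed, acceleration=0.0):
    """Estimate when the target object reaches the entrance of the
    designated area, given the time t_report of its last ADS-B report,
    the remaining distance to the entrance (from GIS information), and
    its kinematic parameters.  Solves d = v*t + a*t^2/2 for t."""
    if abs(acceleration) < 1e-9:
        travel = distance_to_entrance / speed
    else:
        # positive root of the quadratic a/2*t^2 + v*t - d = 0
        disc = speed * speed + 2.0 * acceleration * distance_to_entrance
        travel = (-speed + math.sqrt(disc)) / acceleration
    return t_report + travel
```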
In some application scenarios, an object recognition device is provided at the entrance of the designated area. When a moving object enters through the entrance, the object recognition device can acquire the entry time and the identification information of the moving object. Correspondingly, when the target object enters through the entrance of the designated area, the object recognition device can acquire the identification information and entry time of the target object and send them to the server device. Further, the server device may obtain an image including the target object, according to the entry time and the image characteristics of the target object, from the images acquired by the image acquisition device deployed at the entrance of the designated area. For example, a vehicle recognition device is provided at the entrance of a passenger station. When the target vehicle enters through the entrance, the vehicle recognition device can acquire the license plate number and arrival time of the target vehicle and send them to the server device.
Similarly, when the target object exits the designated area, the management department of the designated area may also provide the server device with the identification information of the target object, its parked position, and the start time at which it exits the designated area. Based on this, the server device can acquire an image including the target object, according to the start time of its exit and its parked position, from the images captured by the image acquisition device whose field of view covers the parked position of the target object.
However, for image capturing devices other than those at these two specific positions, whether the target object is included in the images they capture cannot be recognized in the manner described above. In the embodiment of the present application, the image capturing device disposed at the entrance of the designated area and the image capturing device whose field of view covers the parked position of the target object are collectively defined as reference image capturing devices. If the target object is in the stage of entering the designated area, the reference image capturing device is the image capturing device disposed at the entrance of the designated area; if the target object is in the stage of exiting the designated area, the reference image capturing device is the image capturing device whose field of view covers the parked position of the target object.
The following describes an exemplary method for processing an image to be recognized, taking an image to be recognized, which is captured by any one of other image capturing apparatuses except the reference image capturing apparatus, as an example. For convenience of description and distinction, an image capturing apparatus that captures an image to be recognized is defined as a target image capturing apparatus, that is, the target image capturing apparatus is any image capturing apparatus disposed in a specified area except for a reference image capturing apparatus.
In step 201, the image to be recognized acquired by the target image acquisition device is obtained first. Optionally, the target image acquisition device may send the image to be recognized to the server device online, or the server device may read the image to be recognized from a storage medium of the target image acquisition device. Next, in step 202, a plurality of historical images including the target object acquired before the image to be recognized are obtained. The plurality of historical images may be acquired by the target image acquisition device, by at least one other image acquisition device deployed in the designated area in front of the target image acquisition device, or by both. Preferably, the plurality of historical images are the M historical images whose acquisition times are closest to that of the image to be recognized, where M ≥ 2 is an integer whose specific value can be set flexibly and is not limited herein. The other image acquisition devices in front of the target image acquisition device are those that the target object passes before passing the target image acquisition device this time.
Further, in step 203, a historical movement trajectory of the target object in the historical time period may be determined according to the plurality of historical images. The historical time period is a time period for collecting a plurality of historical images. Further, in step 204, it may be determined whether the target object is included in the image to be recognized according to the historical movement track of the target object.
In this embodiment, the historical movement track of the target object is determined according to a plurality of historical images containing the target object collected before the image to be identified, and whether the image to be identified contains the target object is then determined according to this historical movement track. In this way, the images containing the target object can be screened out, realizing identification of the target object and facilitating subsequent checks on its condition. For example, the driving condition of the target object in the airport may be verified based on the images containing the target object acquired by the plurality of image acquisition devices in the airport, and the like.
On the other hand, if the plurality of historical images are acquired by other image acquisition devices arranged in front of the target image acquisition device, or by both the target image acquisition device and those other devices, the image processing method provided by the embodiment of the present application can realize cross-camera recognition and tracking of the target object.
In addition, the image acquisition devices are existing facilities in the designated area, so the image processing method provided by the embodiment of the present application requires no additional image acquisition devices, that is, no additional investment in image acquisition cost. It should be noted that, in the embodiment of the present application, if the image acquisition device in front of the target image acquisition device is the reference image acquisition device, the plurality of historical images can be obtained from the images captured by the reference image acquisition device in the manner described above. If the image acquisition device in front of the target image acquisition device is not the reference image acquisition device, the method provided by this embodiment can first be used to determine whether the images captured by that device contain the target object, and the plurality of historical images containing the target object can be determined from those images. For example, for an image to be recognized acquired by the first image acquisition device behind the reference image acquisition device, a plurality of historical images containing the target object acquired by the reference image acquisition device can be used to determine whether that image contains the target object; a plurality of images containing the target object are then obtained from the images acquired by this device and used for processing the image to be recognized acquired by the next image acquisition device, and so on.
Alternatively, a plurality of historical images containing the target object whose acquisition times are closest to that of the image to be recognized may be obtained according to the timestamp of the image to be recognized. Accordingly, if the image acquisition device in front of the target image acquisition device is not the reference image acquisition device, the image processing method provided by the embodiment of the present application may be used to determine whether the images captured before the image to be recognized contain the target object, and the historical images containing the target object that are closest in acquisition time to the image to be recognized may be obtained from them.
In the embodiment of the present application, an optional implementation manner of step 204 is: predicting a first position to which a target object moves when an image to be identified is collected according to a historical movement track; and determining whether the image to be identified contains the target object or not according to the image characteristics and the first position of the target object. For example, a first position to which the target object moves when acquiring the image to be recognized may be predicted using a Kalman algorithm.
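A minimal sketch of this prediction step. The text mentions a Kalman algorithm; the snippet below implements only the prediction half under a constant-acceleration motion model (dead reckoning from the most recent track point), with no measurement update, and the function name is illustrative.

```python
import math

def predict_first_position(last_point, speed, heading, acceleration, t_query):
    """Extrapolate the target's position at time t_query from the most
    recent track point (t, x, y), using the kinematic parameters
    estimated from the historical track.  This is the prediction step
    of a constant-acceleration motion model; a full Kalman filter would
    also maintain covariances and fuse new measurements."""
    t, x, y = last_point
    dt = t_query - t
    dist = speed * dt + 0.5 * acceleration * dt * dt  # distance covered in dt
    return (x + dist * math.cos(heading),
            y + dist * math.sin(heading))
```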
Further, the timestamp of the target object passing through each historical track point and the position of each historical track point can be obtained from the historical moving track; and calculating the kinematic parameters of the target object in the historical time period according to the timestamp of the target object passing through each historical track point and the coordinates of each historical track point in the preset coordinate system. Wherein the kinematic parameters of the target object in the historical time period comprise: at least one of moving speed, traveling direction and acceleration of the target object in the historical time period; for the description of the preset coordinate system, reference may be made to the related contents of the above embodiments, and details are not repeated herein.
Furthermore, the first position to which the target object moves when the image to be recognized is acquired can be predicted according to the kinematic parameters of the target object and the position of at least one historical track point in the historical track points. Furthermore, whether the target object is included in the image to be recognized or not can be determined according to the image characteristics of the target object and the coordinates of the first position in the preset coordinate system. Wherein the image characteristics of the target object include: color features, texture features, shape features, or spatial relationship features of the target object, etc., but are not limited thereto.
In some embodiments, other moving objects in the image to be identified may be similar to the image features of the target object. Based on the method, at least one candidate object can be identified from the image to be identified according to the image characteristics of the target object; and converting the pixel coordinates of at least one candidate object in the image to be identified into the coordinates of the candidate objects under a preset coordinate system according to the homography matrix of the image acquisition equipment corresponding to the image to be identified. Further, whether the target object exists in the at least one candidate object may be determined according to the coordinates of the first position in the preset coordinate system and the coordinates of the at least one candidate object in the preset coordinate system.
Further, the distance between each candidate object and the first position may be calculated according to the coordinates of the first position and of the at least one candidate object in the preset coordinate system. If any of these distances is less than or equal to the preset distance threshold, it is determined that the target object exists among the at least one candidate object. Correspondingly, if all of the distances are greater than the preset distance threshold, it is determined that the target object does not exist among the at least one candidate object.
Further, in a case where the target object exists in the at least one candidate object, a candidate object having a smallest distance from the first position may be selected as the target object from the at least one candidate object.
For the historical movement track of the target object in the historical time period, in some embodiments, the pixel coordinates of the target object in the multiple historical images can be converted into the coordinates, in the preset coordinate system, of the historical track points that the target object passes, according to the homography matrix of the image acquisition device corresponding to each historical image, and these coordinates can be used as the positions of the historical track points. Further, the timestamps of the multiple historical images are used as the timestamps at which the target object passes the historical track points, and the historical movement track of the target object can then be generated from these timestamps and positions. The homography matrix of an image acquisition device is a conversion matrix between the pixel coordinates of images acquired by the device and the preset coordinate system. Further, the homography matrix of an image acquisition device can be calculated according to the pose and the intrinsic and extrinsic parameters of the device, where the pose comprises the position coordinates and attitude angle of the device in the preset coordinate system, which can be obtained from the installation position and installation height of the device.
In practical applications, the monitoring fields of view of the image acquisition devices may not overlap. In particular, when the plurality of historical images are acquired by both the target image acquisition device and another image acquisition device arranged in front of it, the historical movement track may be discontinuous, which can cause a large error between the subsequently predicted first position and the position to which the target object actually moves. Based on this, GIS track information of the target object in the historical time period can be acquired, and the historical movement track of the target object in the historical time period can be corrected using the GIS track information. Whether the image to be recognized contains the target object is then determined according to the corrected historical movement track. Further, since the GIS track information is determined based on the world coordinate system, the preset coordinate system may be chosen as the world coordinate system in order to reduce the number of coordinate conversions.
Further, for some designated areas, a scene monitoring radar may be arranged above the ground. The scene monitoring radar can monitor the activity of moving objects in the designated area and provide their GIS track information. Based on this, in some embodiments, the GIS track information of the target object in the historical time period may be obtained according to the timestamps of the GIS track information of the target object sent by the scene monitoring radar, and this GIS track information may be used to correct the historical movement track information of the target object in the historical time period.
For some moving objects, ADS-B devices may be installed thereon. The ADS-B equipment can automatically acquire GIS track information, height, speed, course, identification information and other information of the object and broadcast the information to other objects or ground stations. Based on this, in some embodiments, the GIS trajectory information of the target object in the historical time period may be obtained according to the time stamp of the GIS trajectory information of the target object sent by the ADS-B device on the target object, and the GIS trajectory information of the target object in the historical time period may be used to correct the historical movement trajectory information of the target object in the historical time period.
In other embodiments, in order to improve the accuracy of the generated historical movement track of the target object in the historical time period, the GIS track information of the target object in the historical time period, which is sent by the ADS-B device on the target object, may be combined with the GIS track information of the target object in the historical time period, which is detected by the scene monitoring radar, so as to correct the historical movement track information of the target object in the historical time period.
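One simple correction policy for the fusion described above (illustrative only — the patent does not prescribe a specific fusion rule, and all names here are hypothetical): snap each image-derived track point to the nearest-in-time GIS fix, whether from the ADS-B device or the scene monitoring radar, whenever a fix with a sufficiently close timestamp exists.

```python
def correct_track(image_track, gis_fixes, max_dt=1.0):
    """image_track / gis_fixes: lists of (timestamp, (x, y)) in a shared
    world coordinate system.  For every image-derived point, substitute the
    GIS fix whose timestamp is closest, provided it lies within max_dt
    seconds; otherwise keep the image-derived point unchanged."""
    corrected = []
    for ts, pt in image_track:
        best = min(gis_fixes, key=lambda f: abs(f[0] - ts), default=None)
        if best is not None and abs(best[0] - ts) <= max_dt:
            corrected.append((ts, best[1]))  # trust the GIS fix
        else:
            corrected.append((ts, pt))      # no nearby fix: keep image estimate
    return corrected
```

A production system would more likely interpolate or run a Kalman-style filter over both sources; the snap-to-fix rule above only illustrates where the correction plugs in.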
In the embodiment of the application, the scene monitoring radar and the camera are existing facilities in a specified area, and the ADS-B device is an existing device on the target object, so that the historical movement track information of the target object in a historical time period is corrected without additionally arranging positioning equipment, namely, without additionally investing positioning cost.
On the other hand, because operators of the target object differ in their operating habits, the ADS-B device may be switched off within the designated area, in which case it cannot provide GIS information of the target object in the designated area. Although the scene monitoring radar in the designated area can also position the target object, its signal is sometimes blocked, making its positioning inaccurate. Therefore, in this embodiment, when determining the historical movement track information of the target object in the historical time period, the GIS information of the target object acquired by the ADS-B device and/or the scene monitoring radar is fused with the images acquired by the image acquisition devices, so as to improve the accuracy of the determined historical movement track of the target object in the historical time period. In the embodiment of the application, for an image containing the target object acquired by a reference image acquisition device, the server device may acquire the identification information of the target object from the designated area management department and/or the ADS-B device on the target object. Then, starting from the image containing the target object acquired by the reference image acquisition device, images containing the target object are determined in sequence from the images acquired by the image acquisition devices behind the reference image acquisition device, thereby obtaining the images containing the target object acquired by the plurality of image acquisition devices in the designated area.
Furthermore, the identification information of the target object can be added, as an overlay tag, to the area where the target object is located in each image containing the target object acquired by the plurality of image acquisition devices, and a video abstract of the target object can be generated from the tagged images. In this way, when the video corresponding to the target object needs to be viewed, the video abstract of the target object can be retrieved simply by entering the identification information of the target object. For example, in an airport application scenario, airport management personnel can optimize the airplane scheduling process according to the video abstract of a target airplane, or visually check the condition of the target airplane according to its video abstract, thereby providing a basis for investigating delays or other faults of the target airplane.
Optionally, when the video abstract of the target object is generated, the video abstract of the target object may be generated according to the time stamps of the images including the target object, which are acquired by the image acquisition devices, and according to the chronological order.
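The timestamp-ordered assembly of the video abstract can be sketched as follows (a minimal illustration; field names are assumptions, and a real implementation would concatenate encoded video segments rather than in-memory frames):

```python
def make_video_abstract(tagged_frames):
    """tagged_frames: list of dicts with 'timestamp' and 'frame' (any payload),
    each already known to contain the target object, possibly from several
    image acquisition devices.  Ordering them chronologically yields the
    per-object video abstract."""
    return [f["frame"] for f in sorted(tagged_frames, key=lambda f: f["timestamp"])]
```

Indexing the resulting abstract by the target object's identification information then allows retrieval by entering that identifier alone.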
In the embodiment of the present application, the designated area may be an airport. Accordingly, the target object is then the target aircraft. Optionally, the movement tracks of other airplanes in the current airport can be obtained, the navigation path of the target airplane is planned according to the movement tracks of other airplanes in the current airport, and the target airplane is guided to move along the planned navigation path. Thus, the situation that the airplane in the airport collides and the like in the moving process can be prevented. The movement tracks of other airplanes in the current airport can also be determined by adopting the manner of determining the movement track of the target airplane in the above embodiment.
Accordingly, embodiments of the present application also provide a computer readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to execute the steps in the image processing method.
Fig. 3a is a schematic flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 3a, the method includes:
301. acquiring an image to be recognized, wherein the image to be recognized is acquired by a first image acquisition device arranged in a specified area.
302. Acquiring a plurality of historical images containing the target object, acquired by at least one second image acquisition device while the target object moves in the designated area; the second image acquisition device is arranged in the designated area and located in front of the first image acquisition device.
303. And determining the historical movement track of the target object according to the plurality of historical images.
304. And determining whether the image to be recognized contains the target object or not according to the historical movement track.
In the present embodiment, the designated area may be any physical location where a plurality of image acquisition devices are deployed to acquire images of moving objects. For example, the designated area may be a train station, a passenger station, a port, a dock, a parking lot, a warehouse, etc., and the image acquisition device may be a visual sensor such as a camera, a laser sensor, or an infrared sensor, but is not limited thereto. For how an image containing the target object is obtained from the images acquired by the reference image acquisition device in the designated area, and for the description of the reference image acquisition device itself, reference may be made to the relevant contents of the above embodiments.
The present embodiment focuses on the description of the processing method of the image to be recognized acquired by the image acquisition devices other than the reference image acquisition device in the designated area. For convenience of description and distinction, the image capturing apparatus that captures the image to be recognized is defined as the first image capturing apparatus, i.e., the first image capturing apparatus is any image capturing apparatus disposed within the specified area except for the reference image capturing apparatus.
In step 301, the image to be recognized acquired by the first image acquisition device is first acquired. Optionally, the first image acquisition device may send the image to be recognized to the server device online, or the server device may read the image to be recognized from a storage medium of the first image acquisition device. Next, in step 302, a plurality of historical images containing the target object, acquired before the image to be recognized, are obtained from the images acquired by the at least one second image acquisition device. The second image acquisition devices are the other image acquisition devices arranged in the designated area in front of the first image acquisition device, and their number may be 1 or more. In the present embodiment, a plurality means 2 or more. Preferably, the second image acquisition device is an image acquisition device located in front of, and adjacent to, the first image acquisition device, e.g., the image acquisition device immediately preceding the first image acquisition device, and so on.
Further, the plurality of historical images may be the M historical images, among the images acquired by the image acquisition device located in front of and adjacent to the first image acquisition device, whose acquisition times are closest to the acquisition time of the image to be recognized, where M is an integer and M ≥ 2. For the explanation of the image acquisition devices in front of the first image acquisition device, reference may be made to the relevant contents of the above embodiments, which are not repeated here.
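Selecting the M historical images closest in time to the image to be recognized can be sketched as below (illustrative names; the patent does not specify a data representation):

```python
def select_history(images, t_query, m=2):
    """images: list of (timestamp, image_id) pairs from the image acquisition
    device(s) in front of the first image acquisition device.  Return the M
    images acquired before t_query whose timestamps are closest to it,
    in chronological order for subsequent trajectory building."""
    prior = [img for img in images if img[0] < t_query]  # only earlier images
    prior.sort(key=lambda img: t_query - img[0])          # smallest time gap first
    return sorted(prior[:m])                              # chronological order
```

For a query at t = 5 over images at t = 1, 3, 4, 9 with M = 2, this selects the images at t = 3 and t = 4.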
Further, in step 303, a historical movement trajectory of the target object in the historical time period may be determined according to the plurality of historical images. The historical time period is a time period for collecting a plurality of historical images. Further, in step 304, it may be determined whether the target object is included in the image to be recognized according to the historical movement track of the target object.
In this embodiment, the historical movement track of the target object is determined from a plurality of historical images containing the target object provided by other image acquisition devices arranged in front of the image acquisition device corresponding to the image to be recognized, and whether the image to be recognized contains the target object is then determined from that historical movement track. Images containing the target object can thereby be screened out, realizing cross-camera recognition and tracking of the target object and facilitating subsequent checking of its condition. For example, the driving condition of the target object in an airport may be verified based on the images containing the target object acquired by the plurality of image acquisition devices in the airport, and so on.
If the second image acquisition device is the reference image acquisition device disposed at the entrance of the designated area, an optional implementation of step 302 is: calculating the time at which the target object enters the entrance of the designated area according to the kinematic parameters of the target object before entering the entrance, sent by the ADS-B device on the target object; and acquiring the plurality of historical images containing the target object from the images acquired by the second image acquisition device according to that entry time. For a description of a specific implementation of step 302, reference may be made to the relevant contents of the foregoing embodiments, which are not repeated here.
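The entry-time calculation can be illustrated with a simplified one-dimensional constant-acceleration model (an assumption — the patent only says the time is computed from the ADS-B kinematic parameters, without giving the motion model):

```python
def estimate_entry_time(t_report, distance_to_entrance, speed, accel=0.0):
    """Estimate when the target object reaches the designated-area entrance,
    given the ADS-B-reported speed (and optional acceleration) at time
    t_report and the remaining distance to the entrance.
    Solves d = v*t + a*t^2/2 for the travel time t."""
    if abs(accel) < 1e-9:
        return t_report + distance_to_entrance / speed
    # positive root of (a/2)*t^2 + v*t - d = 0
    disc = speed * speed + 2.0 * accel * distance_to_entrance
    return t_report + (-speed + disc ** 0.5) / accel
```

The estimated entry time then indexes into the reference camera's image stream to pull the frames around the target object's arrival.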
It should be noted that, for the specific implementation of steps 303 and 304, reference may be made to the relevant contents of the foregoing embodiments, and details are not described herein again.
In the embodiment of the application, for an image containing the target object acquired by a reference image acquisition device, the server device may acquire the identification information of the target object from the designated area management department and/or the ADS-B device on the target object. Then, starting from the image containing the target object acquired by the reference image acquisition device, images containing the target object are determined in sequence from the images acquired by the image acquisition devices behind the reference image acquisition device, thereby obtaining the images containing the target object acquired by the plurality of image acquisition devices in the designated area. Furthermore, the identification information of the target object can be added, as an overlay tag, to the area where the target object is located in each of these images, and a video abstract of the target object can be generated from the tagged images. In this way, when the video corresponding to the target object needs to be viewed, the video abstract of the target object can be retrieved simply by entering the identification information of the target object. For example, in an airport application scenario, airport management personnel can optimize the airplane scheduling process according to the video abstract of a target airplane, or visually check the condition of the target airplane according to its video abstract, thereby providing a basis for investigating delays or other faults of the target airplane.
Optionally, when the video abstract of the target object is generated, the video abstract of the target object may be generated according to the time stamps of the images including the target object, which are acquired by the image acquisition devices, and according to the chronological order.
Accordingly, embodiments of the present application also provide a computer readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to execute the steps in the image processing method.
Fig. 3b is a schematic flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 3b, the method includes:
s301, acquiring an image to be identified, which is acquired by target image acquisition equipment in the designated area.
S302, obtaining the historical movement track of the target object before the image to be recognized.
S303, determining whether the image to be recognized contains the target object or not according to the historical movement track.
S304, under the condition that the image to be recognized contains the target object, generating a video abstract of the target object in the designated area according to the image to be recognized and other images containing the target object.
In this embodiment, the descriptions of steps S301 to S303 can refer to the relevant contents of the above embodiments, and are not repeated herein.
Further, in the present embodiment, a video summary of the target object in the designated area may be generated using the image containing the target object. If the image to be recognized contains the target object, the video abstract of the target object in the designated area can be generated according to the image to be recognized and other images containing the target object. Correspondingly, if the image to be identified does not contain the target object, the video abstract of the target object in the designated area is generated according to other images containing the target object. The other image containing the target object refers to an image containing the target object, which is acquired by the image acquisition device in the designated area. Thus, subsequent actions can be performed according to the video abstract. For example, the manager in the designated area can optimize the scheduling process of the mobile object in the designated area according to the video abstract of the target object and the video abstract of other objects in the designated area. For another example, the status of the target object in the designated area can be visually checked according to the video abstract of the target object, so that a basis is provided for fault check and the like of the target object.
Optionally, the identification information of the target object may be added to the target image as an overlay tag, where a target image is an image containing the target object acquired by any of the image acquisition devices in the designated area. Optionally, the overlay tag may be added to the area of the target object in the target image.
Further, a video abstract of the target object in the designated area can be generated from the tagged target images. In this way, when the video corresponding to the target object needs to be viewed, the video abstract of the target object can be retrieved simply by entering the identification information of the target object. For example, in an airport application scenario, airport management personnel can optimize the airplane scheduling process according to the video abstract of a target airplane, or visually check the condition of the target airplane according to its video abstract, thereby providing a basis for investigating delays or other faults of the target airplane. If the image to be recognized contains the target object, the target images include the image to be recognized; if the image to be recognized does not contain the target object, the target images do not include the image to be recognized.
Optionally, when the video abstract of the target object is generated, the video abstract of the target object may be generated according to the time stamps of the images including the target object, which are acquired by the image acquisition devices, and according to the chronological order. For a specific implementation of generating the video summary of the target object, reference may be made to the relevant contents in the foregoing embodiments, or the video summary may be generated by using the prior art in the field, and details are not described here.
Accordingly, embodiments of the present application also provide a computer readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to execute the steps in the image processing method.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 201 and 202 may be device a; for another example, the execution subject of step 201 may be device a, and the execution subject of step 202 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 201, 202, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
The image processing method provided by the embodiment of the application is not only suitable for the airport scene embodiment, but also suitable for any other scenes for acquiring images of moving objects in the moving process, such as railway stations, passenger stations, ports, docks and the like, and the image acquisition equipment in the sites is utilized to monitor the vehicles or ships. Based on this, the embodiment of the present application further provides a monitoring system, which is used for monitoring the mobile objects appearing in the designated area. Wherein the designated area may be any physical location where a plurality of image capturing devices are deployed to capture an image of a moving object. For example, the designated area may be a train station, a passenger station, a port, a dock, a parking lot, a warehouse, etc., wherein the image acquisition device may be a visual sensor such as a camera, a laser sensor, or an infrared sensor, but is not limited thereto. The following provides an exemplary illustration of a monitoring system suitable for any physical location where multiple image capture devices are deployed to capture images of a moving object.
Fig. 4 is a schematic structural diagram of a monitoring system according to an embodiment of the present application. As shown in fig. 4, the system includes: a server device 40a and a plurality of image acquisition devices 40b disposed within the designated area. The layout of the designated area, and the positions, number and implementation forms of the image acquisition devices within it, are merely exemplary and not limiting. For the implementation forms of the server device 40a and the image acquisition devices 40b, and the communication mode between them, reference may be made to the relevant contents of the airport monitoring system described above, which are not repeated here.
In the present embodiment, a plurality of image pickup devices 40b can pick up images within a specified area. In the present embodiment, the processing is mainly performed on the images of the moving objects appearing in the designated area acquired by the image acquisition devices 40b, and therefore, the following description will focus on the processing procedure of the images of the moving objects in the designated area acquired by the plurality of image acquisition devices 40 b.
In the present embodiment, the plurality of image acquisition devices 40b may acquire images of each moving object within the designated area, including images of each moving object as it moves within the designated area. In practical applications, because the distance between an image acquisition device and a moving object moving in the designated area is relatively large and the moving speed of the object is relatively high, the images of a moving object acquired by the image acquisition devices 40b during its movement cannot display the identification information of the moving object; in the prior art, the server device 40a therefore cannot identify a moving object of interest from the images acquired by the image acquisition devices 40b. In the embodiment of the present application, for convenience of description and distinction, a moving object of interest is defined as a target object. Different application scenes have different target objects with different identification information. For example, in the airport application scenario above, the target object is a target airplane, and its identification information may be, but is not limited to, the number of the target airplane, the flight number, and the like. For the application scenes of passenger stations, bus stations and parking lots, the target object is a target vehicle, and its identification information is the license plate number of the vehicle, and the like. For the application scenarios of docks and ports, the target object is a target ship, and its identification information may be a ship identification number, but is not limited thereto.
In practical applications, if the target object enters the designated area, the designated area management department notifies the server device 40a of the identification information of the target object, the entry time, and the entry point. For example, the management of the passenger station may notify the server 40a of the license plate number, the time of arrival, and the entrance of the destination vehicle. Based on this, the server device 40a acquires an image including the target object from the image acquired by the image acquisition device disposed at the entrance of the target object, according to the entrance time of the target object and the image characteristics of the target object. In this process, the server device 40a may also obtain identification information of the target object.
In addition, in some embodiments, before the target object enters the designated area, the ADS-B device on the target object may also send the identification information of the target object and the kinematic parameters of the target object and the GIS information of the target object before the target object enters the designated area to the server device 40 a. The kinematic parameters of the target object before entering the designated area comprise: at least one of a moving speed, a traveling direction, and an acceleration of the target object before entering the designated area. The server device 40a may calculate the time when the target object enters the designated area according to the kinematic parameters and the GIS information of the target object before entering the designated area. Further, the server device 40a may obtain an image including the target object from the image captured by the image capturing device disposed at the entrance of the specified area according to the time when the target object enters the specified area and the image characteristics of the target object. In this process, the server device 40a may also obtain identification information of the target object.
In some application scenarios, the designated area portal is provided with an object recognition device. When the mobile object enters the entrance of the designated area, the object recognition means may acquire the entry time of the mobile object and the identification information of the mobile object. Accordingly, when the target object enters the entrance of the designated area, the object recognition device may acquire the identification information of the target object and the entry time of the target object, and transmit the identification information of the target object and the entry time of the target object to the server device 40 a. Further, the server device 40a may acquire an image including the target object from the image acquired by the image acquisition device deployed at the entrance of the designated area according to the entry time of the target object and the image feature of the target object. For example, an entrance of a passenger station is provided with a vehicle recognition device. When the target vehicle enters the entrance of the designated area, the vehicle recognition device may acquire the license plate number of the target vehicle and the arrival time of the target vehicle, and transmit the license plate number of the target object and the arrival time of the target object to the server device 40 a.
Similarly, when the target object exits the designated area, the designated area management department may also provide the identification information of the target object, the position where it was parked, and the start time at which the target object exits the designated area to the server device 40a. Based on this, the server device 40a may acquire an image containing the target object, according to the start time at which the target object exits the designated area and the parked position of the target object, from the images acquired by the image acquisition device whose acquisition field of view covers the parked position of the target object.
However, for image acquisition devices other than those at the two specific positions above, whether the target object is contained in the images they acquire cannot be recognized in the above manner. In the embodiment of the present application, an image acquisition device disposed at the entrance of the designated area and an image acquisition device whose acquisition field of view covers the parked position of the target object are collectively defined as reference image acquisition devices. If the target object is in the stage of entering the designated area, the reference image acquisition device is the image acquisition device arranged at the entrance of the designated area; if the target object is in the stage of exiting the designated area, the reference image acquisition device is the image acquisition device whose acquisition field of view covers the parked position of the target object.
The following describes an exemplary method for the server device 40a to process the image to be recognized, taking as an example an image to be recognized acquired by any one of the image acquisition devices other than the reference image acquisition devices. For convenience of description and distinction, the image acquisition device that acquires the image to be recognized is defined as the target image acquisition device, i.e., the target image acquisition device is any image acquisition device disposed in the designated area other than the reference image acquisition devices.
In this embodiment, the server device 40a may acquire a plurality of history images including the target object acquired before the image to be recognized. The plurality of historical images can be acquired by the target image acquisition device, or acquired by other image acquisition devices arranged in front of the target image acquisition device, or acquired by both the target image acquisition device and other image acquisition devices arranged in front of the target image acquisition device. Preferably, the plurality of history images are M history images whose acquisition time is closest to the acquisition time interval of the image to be recognized. Wherein, M is not less than 2 and is an integer, and the specific value can be flexibly set and is not limited herein. It is to be noted that, in the embodiments of the present application, the other image capturing apparatuses in front of the target image capturing apparatus refer to the other image capturing apparatuses through which the target object passes before passing through the target image capturing apparatus this time.
Further, the server device 40a may determine the historical movement trajectory of the target object in a historical time period according to the plurality of historical images. The historical time period is the time period during which the plurality of historical images were captured. For example, if the plurality of historical images were captured between 13:00 and 13:05 on July 12, 2019, the historical time period is 13:00-13:05 on July 12, 2019. Further, the server device 40a may determine whether the image to be recognized contains the target object according to the historical movement trajectory of the target object.
It should be noted that, for specific embodiments in which the server device 40a determines the historical movement trajectory of the target object in the historical time period according to the multiple historical images, and determines whether the image to be identified includes the target object according to the historical movement trajectory, reference may be made to relevant contents of the foregoing embodiments, and details are not repeated here.
The monitoring system provided by this embodiment can determine the historical movement trajectory of the target object according to a plurality of historical images containing the target object that were captured before the image to be recognized, and determine whether the image to be recognized contains the target object according to that trajectory. In this way, images containing the target object can be screened out, recognition of the target object is realized, and subsequent verification of the condition of the target object is facilitated. For example, the travel of the target object in the airport may be verified based on the images containing the target object captured by the plurality of image acquisition devices in the airport.
On the other hand, if the plurality of historical images are captured by other image acquisition devices arranged in front of the target image acquisition device, or by both the target image acquisition device and those other devices, cross-camera recognition and tracking of the target object can be realized by using the image processing method provided by the embodiments of the present application.
In this embodiment, for an image containing the target object captured by a reference image acquisition device, the server device 40a may acquire the identification information of the target object from the management department of the designated area and/or the ADS-B device on the target object. The server device 40a then determines, in sequence, the images containing the target object from the images captured by the image acquisition devices behind the reference image acquisition device, based on the image containing the target object captured by the reference image acquisition device, thereby obtaining the images containing the target object captured by the plurality of image acquisition devices in the designated area. Further, the server device 40a may add the identification information of the target object as an overlay tag to the area where the target object is located in each of these images, and generate a video summary of the target object according to the tagged images. Thus, when the video corresponding to the target object needs to be viewed, the video summary of the target object can be retrieved simply by entering the identification information of the target object. For example, in an airport application scenario, airport management personnel can optimize the airplane scheduling process according to the video summary of a target airplane, or visually check the condition of the target airplane according to its video summary, thereby providing a basis for investigating delays or other faults of the target airplane.
Optionally, when generating the video summary of the target object, the images containing the target object captured by the image acquisition devices may be ordered chronologically according to their timestamps.
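The chronological ordering step above can be sketched as follows; this is a minimal illustration in Python, and the `TaggedFrame` structure, field names, and `build_video_summary` function are all hypothetical, since the application does not prescribe a data format:

```python
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    """A frame known to contain the target object (illustrative fields)."""
    camera_id: str
    timestamp: float   # acquisition time, e.g. seconds since epoch
    image_path: str

def build_video_summary(frames):
    """Order tagged frames chronologically to form the summary sequence."""
    return sorted(frames, key=lambda f: f.timestamp)

frames = [
    TaggedFrame("cam3", 30.0, "c.jpg"),
    TaggedFrame("cam1", 10.0, "a.jpg"),
    TaggedFrame("cam2", 20.0, "b.jpg"),
]
summary = build_video_summary(frames)
print([f.camera_id for f in summary])  # ['cam1', 'cam2', 'cam3']
```

In practice the sorted frame sequence would then be encoded into a video clip; the sort key is the per-frame acquisition timestamp described above.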
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 5, the computer apparatus includes: a memory 50a and a processor 50 b. The memory 50a is used for storing computer programs.
The processor 50b is coupled to the memory 50a for executing a computer program for: acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area; acquiring a plurality of historical images collected before an image to be identified; the plurality of historical images are images which are collected in the process that the target object moves in the designated area and contain the target object; determining the historical movement track of the target object in the historical time period according to the plurality of historical images; and determining whether the image to be identified contains the target object or not according to the historical movement track.
Optionally, the plurality of historical images are the M historical images whose acquisition times are closest to the acquisition time of the image to be recognized, where M is an integer not less than 2.
Further, the plurality of historical images are acquired by at least one other image acquisition device deployed in the designated area and located in front of the target image acquisition device.
In some embodiments, when determining whether the image to be recognized includes the target object, the processor 50b is specifically configured to: predicting a first position to which a target object moves when an image to be identified is acquired according to a historical movement track; and determining whether the image to be recognized contains the target object or not according to the image characteristics and the first position of the target object.
Further, the processor 50b, when predicting the first position to which the target object moves when the image to be recognized is acquired, is specifically configured to: acquiring timestamps of the target object passing through each historical track point and the position of each historical track point from the historical moving track; calculating the kinematic parameters of the target object in the historical time period according to the timestamp of the target object passing through each historical track point and the position of each historical track point; and predicting a first position to which the target object moves when the image to be recognized is acquired according to the kinematic parameters of the target object in the historical time period and the position of at least one historical track point in the historical track points.
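The prediction step above can be sketched as a constant-velocity extrapolation; this is a minimal illustration in Python, and the function names, the `(timestamp, (x, y))` track-point format, and the uniform-speed assumption are all illustrative, since the application leaves the kinematic model open:

```python
def estimate_velocity(track):
    """track: time-ordered list of (timestamp, (x, y)) historical track points.
    Returns the average velocity (vx, vy) over the historical time period."""
    (t0, (x0, y0)), (tn, (xn, yn)) = track[0], track[-1]
    dt = tn - t0
    return ((xn - x0) / dt, (yn - y0) / dt)

def predict_first_position(track, t_image):
    """Extrapolate from the latest track point to the acquisition time
    t_image of the image to be recognized."""
    vx, vy = estimate_velocity(track)
    t_last, (x, y) = track[-1]
    dt = t_image - t_last
    return (x + vx * dt, y + vy * dt)

# Target moving at 2 units/s along x; predict its position at t = 3.0
track = [(0.0, (0.0, 0.0)), (1.0, (2.0, 0.0)), (2.0, (4.0, 0.0))]
print(predict_first_position(track, 3.0))  # (6.0, 0.0)
```

A real implementation might fit acceleration as well, or use a Kalman filter over the track points; the constant-velocity form is only the simplest instance of "kinematic parameters plus a known track point".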
Correspondingly, when determining whether the image to be recognized contains the target object, the processor 50b is specifically configured to: identifying at least one candidate object from the image to be identified according to the image characteristics of the target object; converting the pixel coordinates of at least one candidate object in the image to be identified into the coordinates of at least one candidate object in a preset coordinate system according to the homography matrix of the image acquisition equipment corresponding to the image to be identified; and determining whether a target object exists in the at least one candidate object according to the coordinates of the first position in the preset coordinate system and the coordinates of the at least one candidate object in the preset coordinate system.
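The homography-based coordinate conversion above can be sketched as follows; this is a minimal illustration using numpy, where the toy matrix `H` stands in for a camera's calibrated homography into the preset coordinate system (the patent does not specify how the homography is obtained):

```python
import numpy as np

def pixel_to_world(H, pixel):
    """Map a pixel coordinate (u, v) to the preset (world) coordinate system
    using the camera's 3x3 homography matrix H, via homogeneous coordinates."""
    u, v = pixel
    p = H @ np.array([u, v, 1.0])
    return (float(p[0] / p[2]), float(p[1] / p[2]))  # de-homogenize

# Toy homography: scale pixel units by 0.5 into world units
H = np.array([[0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_world(H, (100.0, 40.0)))  # (50.0, 20.0)
```

The division by the third homogeneous component is what makes this a projective (homography) mapping rather than a plain affine transform.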
Further, when determining whether the at least one candidate object includes the target object, the processor 50b is specifically configured to: calculating the distance between the at least one candidate object and the first position according to the coordinates of the first position under the preset coordinate system and the coordinates of the at least one candidate object under the preset coordinate system; and if the distance between the at least one candidate object and the first position is less than or equal to the preset distance threshold, determining that the target object exists in the at least one candidate object. Correspondingly, if the distance between the at least one candidate object and the first position is larger than the preset distance threshold, it is determined that the target object does not exist in the at least one candidate object.
Further, in the case that the target object exists in the at least one candidate object, the processor 50b is further configured to: selecting the candidate object with the smallest distance from the first position as the target object from the at least one candidate object.
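The threshold-and-nearest-candidate logic of the two paragraphs above can be sketched as follows; a minimal illustration in Python, with the candidate format and function name invented for the example:

```python
import math

def match_target(candidates, first_pos, threshold):
    """candidates: list of (candidate_id, (x, y)) in the preset coordinate
    system. Returns the id of the candidate closest to the predicted first
    position, provided its distance is within threshold; otherwise None."""
    best_id, best_d = None, float("inf")
    for cid, (x, y) in candidates:
        d = math.hypot(x - first_pos[0], y - first_pos[1])
        if d <= threshold and d < best_d:
            best_id, best_d = cid, d
    return best_id

candidates = [("a", (10.0, 0.0)), ("b", (2.0, 1.0))]
print(match_target(candidates, (0.0, 0.0), 5.0))  # b
print(match_target(candidates, (0.0, 0.0), 1.0))  # None
```

Returning the nearest in-threshold candidate implements both rules at once: the threshold decides whether the target object exists among the candidates, and the minimum distance selects which candidate it is.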
In other embodiments, the processor 50b is specifically configured to, when determining the historical movement trajectory of the target object: converting pixel coordinates of the target object in the plurality of historical images into coordinates of each historical track point, through which the target object passes, in a preset coordinate system according to a homography matrix of the image acquisition device corresponding to each of the plurality of historical images, and using the coordinates as the position of each historical track point; taking the timestamps of the plurality of historical images as the timestamps of the target object passing through each historical track point; and generating a historical moving track of the target object according to the timestamp of the target object passing through each historical track point and the position of each historical track point.
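The trajectory-construction procedure above, pixel detections converted per camera through that camera's homography and paired with the image timestamps, can be sketched as follows; the detection format and dictionary of homographies are illustrative assumptions:

```python
import numpy as np

def build_trajectory(detections, homographies):
    """detections: time-ordered list of (camera_id, timestamp, (u, v)) pixel
    positions of the target object in the historical images.
    homographies: dict camera_id -> 3x3 homography into the preset coordinate
    system. Returns [(timestamp, (x, y))] historical track points."""
    track = []
    for cam, t, (u, v) in detections:
        p = homographies[cam] @ np.array([u, v, 1.0])
        track.append((t, (float(p[0] / p[2]), float(p[1] / p[2]))))
    return track

H = np.eye(3)  # identity homography for the toy example
dets = [("cam1", 0.0, (1.0, 2.0)), ("cam2", 1.0, (3.0, 4.0))]
print(build_trajectory(dets, {"cam1": H, "cam2": H}))
# [(0.0, (1.0, 2.0)), (1.0, (3.0, 4.0))]
```

Because each camera has its own homography, track points from different cameras land in one shared coordinate system, which is what makes the later cross-camera prediction possible.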
Further, before determining whether the target object is contained in the image to be recognized, the processor 50b is further configured to: acquiring GIS track information of a target object in a historical time period; and correcting the historical movement track according to the GIS track information.
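One simple way to correct the camera-derived track with GIS data is to fuse positions at matching timestamps; the weighted-average scheme below is purely an illustrative assumption, since the application does not prescribe a correction algorithm:

```python
def correct_track(camera_track, gis_track, alpha=0.5):
    """Fuse camera-derived track points with GIS positions at the same
    timestamps by weighted averaging (alpha = weight given to the GIS
    source). Both tracks: dict timestamp -> (x, y)."""
    corrected = {}
    for t, (x, y) in camera_track.items():
        if t in gis_track:
            gx, gy = gis_track[t]
            corrected[t] = (x + alpha * (gx - x), y + alpha * (gy - y))
        else:
            corrected[t] = (x, y)  # no GIS fix at this time: keep as-is
    return corrected

cam = {0.0: (0.0, 0.0), 1.0: (2.0, 0.0)}
gis = {1.0: (4.0, 0.0)}
print(correct_track(cam, gis))  # {0.0: (0.0, 0.0), 1.0: (3.0, 0.0)}
```

A production system would more likely interpolate the GIS track in time and weight by per-source accuracy, but the principle, pulling each track point toward the independent GIS measurement, is the same.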
When acquiring the GIS trajectory information of the target object in the historical time period, the processor 50b is specifically configured to perform at least one of the following operations: acquiring the GIS trajectory information of the target object in the historical time period from the ADS-B device on the target object; and acquiring the GIS trajectory information of the target object in the historical time period from the scene surveillance radar deployed in the designated area.
Further, if the target image acquisition device is the first image acquisition device disposed after the reference image acquisition device at the entrance of the designated area, the processor 50b, when acquiring the plurality of historical images captured before the image to be recognized, is specifically configured to: calculate the time at which the target object enters the entrance of the designated area according to the kinematic parameters of the target object before entering the entrance, which are sent by the ADS-B device on the target object; and acquire the plurality of historical images from the images captured by the reference image acquisition device according to the time at which the target object enters the entrance of the designated area.
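The entry-time calculation above can be sketched as follows; a minimal uniform-speed estimate in Python, where the parameter names and the assumption that ADS-B reports distance and ground speed directly are illustrative (real ADS-B messages carry position and velocity, from which these would be derived):

```python
def estimate_entry_time(t_report, distance_to_entrance, speed):
    """Estimate when the target object reaches the designated-area entrance,
    given kinematic parameters reported via ADS-B at time t_report.
    Assumes uniform speed; the application leaves the exact model open."""
    return t_report + distance_to_entrance / speed

# Reported 120 m from the entrance, moving at 10 m/s, at t = 100 s
print(estimate_entry_time(100.0, 120.0, 10.0))  # 112.0
```

The server would then pull, from the reference image acquisition device's stream, the frames recorded around this estimated entry time as the historical images.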
In still other embodiments, the processor 50b is further configured to: add the identification information of the target object as an overlay tag to the area where the target object is located in each image containing the target object captured by the image acquisition devices in the designated area; and generate a video summary of the target object in the designated area according to the plurality of tagged historical images and the image to be recognized.
In the embodiment of the present application, the designated area may be an airport, and accordingly, the target object is a target airplane.
Further, the processor 50b is further configured to: and acquiring the movement tracks of other airplanes in the current airport, planning the navigation path of the target airplane according to the movement tracks of the other airplanes in the current airport, and guiding the target airplane to move along the planned navigation path.
In some optional embodiments, as shown in fig. 5, the computer device may further include: a communication component 50c and a power component 50d. In some embodiments, the computer device is a computer, a workstation, or the like, and may also include optional components such as a display 50e and an audio component 50f. Fig. 5 only schematically shows some of the components; this does not mean that the computer device must include all the components shown in fig. 5, nor that it can only include the components shown in fig. 5.
The computer device provided by this embodiment can determine the historical movement trajectory of the target airplane according to a plurality of historical images containing the target airplane that were captured before the image to be recognized, and determine whether the image to be recognized contains the target airplane according to that trajectory. In this way, images containing the target airplane can be screened out, recognition of the target airplane is realized, and subsequent verification of the condition of the target airplane is facilitated. For example, the travel of the target airplane in the airport can be verified based on the images containing the target airplane captured by a plurality of cameras in the airport.
On the other hand, if the plurality of historical images are captured by other cameras arranged in front of the target camera, or by both the target camera and those other cameras, cross-camera recognition and tracking of the target airplane can be realized by using the image processing method provided by the embodiments of the present application.
In addition, the image acquisition devices are existing facilities in the designated area, and the image processing method provided by the embodiments of the present application does not require deploying additional image acquisition devices; that is, no additional image acquisition cost is incurred.
Fig. 6 is a schematic structural diagram of another computer device according to an embodiment of the present application. As shown in fig. 6, the computer apparatus includes: a memory 60a and a processor 60 b. The memory 60a is used for storing computer programs.
The processor 60b is coupled to the memory 60a for executing computer programs for: acquiring an image to be identified, wherein the image to be identified is acquired by first image acquisition equipment arranged in a designated area; acquiring a plurality of historical images which are acquired by at least one second image acquisition device when a target object moves in the designated area and contain the target object; the second image acquisition equipment is arranged in the designated area and is positioned in front of the first image acquisition equipment; determining a historical movement track of the target object according to the plurality of historical images; and determining whether the target object is contained in the image to be recognized or not according to the historical movement track.
Optionally, the at least one second image capturing device is an image capturing device located before and adjacent to the first image capturing device.
Further, if the second image acquisition device is the reference image acquisition device disposed at the entrance of the designated area, the processor 60b, when acquiring a plurality of historical images containing the target object captured by at least one second image acquisition device while the target object moves in the designated area, is specifically configured to: calculate the time at which the target object enters the entrance of the designated area according to the kinematic parameters of the target object before entering the entrance, which are sent by the ADS-B device on the target object; and acquire the plurality of historical images from the images captured by the second image acquisition device according to the time at which the target object enters the entrance of the designated area.
In some embodiments, the processor 60b is further configured to: add the identification information of the target object as an overlay tag to the area where the target object is located in each image containing the target object captured by the image acquisition devices in the designated area; and generate a video summary of the target object in the designated area according to the tagged images.
It should be noted that, the specific implementation of determining, by the processor 60b, the historical movement trajectory of the target object in the historical time period according to the multiple historical images, and determining whether the image to be recognized includes the target object according to the historical movement trajectory may refer to relevant contents of the foregoing embodiments, and details are not repeated here.
In some optional embodiments, as shown in fig. 6, the computer device may further include: a communication component 60c and a power component 60d. In some embodiments, the computer device is a computer, a workstation, or the like, and may also include optional components such as a display 60e and an audio component 60f. Fig. 6 only schematically shows some of the components; this does not mean that the computer device must include all the components shown in fig. 6, nor that it can only include the components shown in fig. 6.
The computer device provided by this embodiment can determine the historical movement trajectory of the target object according to a plurality of historical images containing the target object provided by other image acquisition devices arranged in front of the image acquisition device corresponding to the image to be recognized, and determine whether the image to be recognized contains the target object according to that trajectory. In this way, images containing the target object can be screened out, cross-camera recognition and tracking of the target object are realized, and subsequent verification of the condition of the target object is facilitated. For example, the travel of the target object in the airport may be verified based on the images containing the target object captured by the plurality of image acquisition devices in the airport.
Fig. 7 is a schematic structural diagram of yet another computer device provided in an embodiment of the present application. As shown in fig. 7, the computer device includes: a memory 70a and a processor 70b. The memory 70a is used for storing computer programs.
The processor 70b is coupled to the memory 70a for executing a computer program for: acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area; acquiring a historical movement track of a target object before an image to be identified; determining whether the image to be identified contains the target object or not according to the historical movement track; and under the condition that the image to be recognized contains the target object, generating a video abstract of the target object in the designated area according to the image to be recognized and other images containing the target object.
Optionally, when generating the video summary of the target object in the designated area, the processor 70b is specifically configured to: add the identification information of the target object to each target image as an overlay tag, where a target image is an image containing the target object captured by an image acquisition device in the designated area; and generate the video summary of the target object in the designated area according to the tagged target images.
Optionally, the processor 70b may add the overlay tag to the area where the target object is located in the target image.
In some optional embodiments, as shown in fig. 7, the computer device may further include: a communication component 70c and a power component 70d. In some embodiments, the computer device is a computer, a workstation, or the like, and may also include optional components such as a display 70e and an audio component 70f. Fig. 7 only schematically shows some of the components; this does not mean that the computer device must include all the components shown in fig. 7, nor that it can only include the components shown in fig. 7.
The computer device provided by this embodiment can determine the historical movement trajectory of the target object according to a plurality of historical images containing the target object provided by other image acquisition devices arranged in front of the image acquisition device corresponding to the image to be recognized, and determine whether the image to be recognized contains the target object according to that trajectory. In this way, images containing the target object can be screened out, and a video summary of the target object can be generated from those images, which not only realizes cross-camera recognition and tracking of the target object but also enables subsequent verification of the condition of the target object by means of the video summary. For example, the travel of the target object in the airport may be verified based on the images containing the target object captured by the plurality of image acquisition devices in the airport.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store other various data to support operations on the computer device. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the computer device and other devices. The computer device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In the embodiment of the present application, the display may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, a power component is configured to provide power to various components of a computer device. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals. For example, for a smart mirror with language interaction functionality, voice interaction with a user may be enabled through an audio component, and so forth.
It should be noted that the descriptions of "first", "second", and the like herein are used to distinguish different messages, devices, modules, and so on; they neither represent a sequential order nor require that the "first" and "second" items be of different types.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (32)

1. An image processing method, comprising:
acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area;
acquiring a plurality of historical images collected before the image to be identified; the plurality of historical images are images which are collected during the process that a target object moves in the designated area and contain the target object;
determining a historical movement track of the target object in a historical time period according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track.
2. The method according to claim 1, wherein the determining whether the target object is included in the image to be recognized according to the historical movement track comprises:
predicting a first position to which the target object moves when the image to be identified is acquired according to the historical movement track;
and determining whether the target object is contained in the image to be recognized or not according to the image characteristics of the target object and the first position.
3. The method of claim 2, wherein predicting a first position to which the target object moves when the image to be recognized is acquired according to the historical movement trajectory comprises:
acquiring a timestamp of the target object passing through each historical track point and a position of each historical track point from the historical moving track;
calculating the kinematic parameters of the target object in the historical time period according to the timestamp of the target object passing through each historical track point and the position of each historical track point;
and predicting a first position to which the target object moves when the image to be identified is acquired according to the kinematic parameters of the target object in the historical time period and the position of at least one historical track point in the historical track points.
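As a rough illustration of the prediction step in claim 3, the sketch below estimates a constant velocity from the two most recent historical track points and extrapolates to the capture time of the image to be identified. All names and values here are illustrative and not from the patent; a real implementation could fit richer kinematic models over all track points.

```python
# Illustrative sketch of claim 3 (names are ours, not the patent's):
# estimate kinematic parameters (a constant velocity) from the last two
# historical track points, then extrapolate the target's position to the
# capture time of the image to be identified.

def predict_first_position(track, t_query):
    """track: list of (timestamp, x, y) in a common ground coordinate system."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt        # kinematic parameters
    # Extrapolate from the most recent historical track point.
    return (x1 + vx * (t_query - t1), y1 + vy * (t_query - t1))

track = [(0.0, 0.0, 0.0), (2.0, 10.0, 0.0)]        # 5 units/s along x
print(predict_first_position(track, 3.0))          # -> (15.0, 0.0)
```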
4. The method according to claim 2, wherein the determining whether the target object is included in the image to be recognized according to the image feature of the target object and the first position comprises:
identifying at least one candidate object from the image to be identified according to the image characteristics of the target object;
converting the pixel coordinates of the at least one candidate object in the image to be recognized into coordinates of the at least one candidate object in a preset coordinate system according to a homography matrix of the image acquisition equipment corresponding to the image to be recognized;
and determining whether the target object exists in the at least one candidate object according to the coordinates of the first position in the preset coordinate system and the coordinates of the at least one candidate object in the preset coordinate system.
5. The method according to claim 4, wherein the determining whether the at least one candidate object includes the target object according to the coordinates of the first position in the preset coordinate system and the coordinates of the at least one candidate object in the preset coordinate system comprises:
calculating the distance between the at least one candidate object and the first position according to the coordinates of the first position in a preset coordinate system and the coordinates of the at least one candidate object in the preset coordinate system;
determining that the target object exists in the at least one candidate object if a distance smaller than or equal to a preset distance threshold exists in the distance between the at least one candidate object and the first position;
if the distances between the at least one candidate object and the first position are all larger than a preset distance threshold value, determining that the target object does not exist in the at least one candidate object.
6. The method according to claim 5, wherein, when the target object exists in the at least one candidate object, the method further comprises:
selecting, from the at least one candidate object, the candidate object having the smallest distance from the first position as the target object.
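The matching logic of claims 4-6 can be sketched as follows, under assumed illustrative names: each candidate's pixel coordinates are projected into the shared ground coordinate system through the camera's 3x3 homography, and the candidate nearest the predicted first position is accepted only if its distance is within the preset threshold.

```python
import math

# Sketch of claims 4-6 (illustrative; names and values are ours, not the
# patent's): project each candidate's pixel coordinates into the ground
# coordinate system via the camera homography, then keep the candidate
# nearest the predicted first position if it is within a distance threshold.

def to_ground(H, px, py):
    """Apply homography H (3x3 nested list) to pixel (px, py)."""
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return ((H[0][0] * px + H[0][1] * py + H[0][2]) / w,
            (H[1][0] * px + H[1][1] * py + H[1][2]) / w)

def match_target(H, candidates_px, first_pos, max_dist):
    """Return the index of the matched candidate, or None if none is close enough."""
    dists = [math.dist(to_ground(H, px, py), first_pos) for px, py in candidates_px]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best if dists[best] <= max_dist else None    # None: target absent

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(match_target(IDENTITY, [(3.0, 4.0), (30.0, 40.0)], (0.0, 0.0), 10.0))  # -> 0
```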
7. The method according to claim 3, wherein the determining the historical movement trajectory of the target object from the plurality of historical images comprises:
converting pixel coordinates of the target object in the plurality of historical images into coordinates of each historical track point passed by the target object in a preset coordinate system according to a homography matrix of the image acquisition equipment corresponding to each of the plurality of historical images, and taking the coordinates as the position of each historical track point;
taking the timestamps of the plurality of historical images as the timestamps of the target object passing through each historical track point;
and generating a historical moving track of the target object according to the timestamp of the target object passing through each historical track point and the position of each historical track point.
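The track-building step of claim 7 can be sketched as below (names are illustrative, not the patent's; per-camera homographies would come from offline calibration): the target's pixel position in each historical image is projected through that image's camera homography, and each resulting ground position is paired with the image timestamp.

```python
# Sketch of claim 7 (illustrative names): build the historical movement
# track by projecting the target's pixel position in each historical image
# through that image's camera homography, pairing each ground position with
# the image's timestamp.

def apply_homography(H, px, py):
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return ((H[0][0] * px + H[0][1] * py + H[0][2]) / w,
            (H[1][0] * px + H[1][1] * py + H[1][2]) / w)

def build_track(detections):
    """detections: list of (timestamp, H, (px, py)), one per historical image."""
    track = []
    for t, H, (px, py) in detections:
        x, y = apply_homography(H, px, py)
        track.append((t, x, y))        # one track point per historical image
    track.sort(key=lambda p: p[0])     # order track points by timestamp
    return track

I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(build_track([(1.0, I, (2.0, 3.0)), (0.0, I, (1.0, 1.0))]))
# -> [(0.0, 1.0, 1.0), (1.0, 2.0, 3.0)]
```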
8. The method according to claim 7, wherein, before determining whether the target object is contained in the image to be recognized according to the historical movement trajectory, the method further comprises:
acquiring GIS track information of the target object in the historical time period;
and correcting the historical movement track according to the GIS track information.
9. The method according to claim 8, wherein the obtaining GIS trajectory information of the target object in the historical time period comprises at least one of:
acquiring GIS track information of the target object in the historical time period from ADS-B equipment on the target object;
and acquiring GIS track information of the target object in the historical time period from a scene monitoring radar deployed in the specified area.
10. The method according to claim 9, wherein if the target image capturing device is a first image capturing device disposed after a reference image capturing device at the entrance of the designated area, the acquiring a plurality of history images captured before the image to be recognized comprises:
calculating the time at which the target object enters the entrance of the designated area according to the kinematic parameters of the target object before entering the entrance of the designated area, which are sent by the ADS-B equipment on the target object;
and acquiring the plurality of historical images from the images acquired by the reference image acquisition equipment according to the time when the target object enters the entrance of the designated area.
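Claim 10's selection of historical images from the reference camera can be sketched as follows, with assumed names: the entry time is estimated from ADS-B kinematics (remaining distance to the entrance divided by ground speed), and only frames captured from that moment on are kept.

```python
# Sketch of claim 10 (illustrative; names are ours, not the patent's):
# estimate when the target reaches the entrance of the designated area from
# ADS-B kinematics, then keep the reference camera's frames captured from
# that moment on as the historical images.

def entry_time(t_report, dist_to_entrance, speed):
    """Constant-speed estimate of when the target reaches the entrance."""
    return t_report + dist_to_entrance / speed

def frames_after_entry(frames, t_entry):
    """frames: list of (timestamp, frame_id) from the reference camera."""
    return [f for f in frames if f[0] >= t_entry]

t = entry_time(100.0, 500.0, 50.0)          # reaches the entrance at t = 110.0
frames = [(105.0, "a"), (110.0, "b"), (120.0, "c")]
print(frames_after_entry(frames, t))        # -> [(110.0, 'b'), (120.0, 'c')]
```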
11. The method according to any one of claims 1-10, further comprising:
adding the identification information of the target object as an external label to the area where the target object is located in each image which is acquired by the image acquisition devices in the designated area and contains the target object;
and generating a video abstract of the target object in the designated area according to the plurality of historical images with the external labels and the image to be identified.
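The labelling-and-summary step of claim 11 is sketched below with simplified stand-ins for real frames (the record layout and names are ours, not the patent's): frames containing the target get an external label over the target's region and are strung together, in capture order, as a per-object video abstract.

```python
# Sketch of claim 11 (illustrative record layout): keep only the frames
# that contain the target object, attach its identifier as an external
# label over the target's bounding box, and string the labelled frames
# together as the video abstract for that object.

def build_video_abstract(frames, target_id):
    """frames: list of dicts {"time": t, "objects": {object_id: bbox}}."""
    summary = []
    for f in sorted(frames, key=lambda f: f["time"]):   # capture order
        if target_id in f["objects"]:
            summary.append({
                "time": f["time"],
                "label": {"id": target_id, "bbox": f["objects"][target_id]},
            })
    return summary

frames = [
    {"time": 0, "objects": {"B-1234": (10, 10, 50, 30)}},
    {"time": 1, "objects": {}},
    {"time": 2, "objects": {"B-1234": (40, 12, 50, 30)}},
]
print(len(build_video_abstract(frames, "B-1234")))      # -> 2
```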
12. The method according to any one of claims 1-10, wherein the plurality of historical images are the M historical images whose acquisition times are closest to the acquisition time of the image to be identified, wherein M is greater than or equal to 2 and M is an integer.
13. The method of claim 12, wherein the plurality of historical images are acquired by at least one other image acquisition device deployed within the designated area and located in front of the target image acquisition device.
14. The method of any one of claims 1-10, wherein the designated area is an airport; the target object is a target aircraft.
15. The method of claim 14, further comprising:
acquiring the movement tracks of other airplanes in the current airport;
planning a navigation path of the target aircraft according to the movement tracks of other aircraft in the current airport, and guiding the target aircraft to move along the navigation path.
16. An image processing method, comprising:
acquiring an image to be identified, wherein the image to be identified is acquired by first image acquisition equipment arranged in a designated area;
acquiring a plurality of historical images which are acquired by at least one second image acquisition device when a target object moves in the designated area and contain the target object; the second image acquisition equipment is arranged in the designated area and is positioned in front of the first image acquisition equipment;
determining a historical movement track of the target object according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track.
17. The method of claim 16, wherein the at least one second image acquisition device is an image acquisition device located before and adjacent to the first image acquisition device.
18. The method according to claim 17, wherein if the second image capturing device is a reference image capturing device disposed at an entrance of the designated area, the acquiring a plurality of history images including the target object captured by at least one second image capturing device while the target object moves within the designated area comprises:
calculating the time at which the target object enters the entrance of the designated area according to the kinematic parameters of the target object before entering the entrance of the designated area, which are sent by the ADS-B equipment on the target object;
and acquiring the plurality of historical images from the images acquired by the second image acquisition equipment according to the time when the target object enters the entrance of the designated area.
19. The method of claim 16, further comprising:
adding the identification information of the target object as an external label to the area where the target object is located in each image which is acquired by each image acquisition device in the designated area and contains the target object;
and generating a video abstract of the target object in the designated area according to the image with the external label.
20. An image processing method, comprising:
acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area;
acquiring a historical movement track of a target object before the image to be identified is acquired;
determining whether the image to be identified contains the target object or not according to the historical movement track;
and under the condition that the image to be recognized contains the target object, generating a video abstract of the target object in the specified area according to the image to be recognized and other images containing the target object.
21. The method according to claim 20, wherein the generating a video summary of the target object in the designated area according to the image to be recognized and other images containing the target object comprises:
adding the identification information of the target object to the target image as an external label, wherein the target image is an image which is acquired by each image acquisition device in the designated area and contains the target object; and generating a video abstract of the target object in the designated area according to the target image with the external label.
22. The method according to claim 21, wherein the adding the identification information of the target object to the target image as an external label comprises:
adding the external label to the area where the target object is located in the target image.
23. A monitoring system, comprising: the system comprises a server device and a plurality of image acquisition devices arranged in a designated area;
the plurality of image acquisition devices are used for acquiring images in the designated area, and the images comprise moving objects appearing in the designated area;
the server device is configured to: acquire an image to be identified, which is acquired by a target image acquisition device among the plurality of image acquisition devices; acquire, from the images acquired by the plurality of image acquisition devices, a plurality of historical images which are acquired before the image to be identified and contain a target object; determine a historical movement track of the target object in a historical time period according to the plurality of historical images; and determine whether the target object is contained in the image to be recognized according to the historical movement track.
24. An airport monitoring system, comprising: the system comprises a server-side device and a plurality of cameras arranged in an airport;
the cameras are used for collecting images of all airplanes in the airport;
the server device is configured to: acquire a plurality of historical images which contain a target airplane and are acquired before an image to be identified captured by a target camera; determine the historical movement track of the target airplane in a historical time period according to the plurality of historical images; and determine whether the image to be identified contains the target airplane according to the historical movement track; wherein the target camera is any one of the plurality of cameras other than the reference camera.
25. The system of claim 24, wherein the plurality of historical images are acquired by other cameras disposed in front of the target camera.
26. The system of claim 24, wherein the server device, prior to determining whether the image captured by the target camera contains the target aircraft, is further configured to:
determining a historical time period corresponding to the historical movement track; acquiring GIS track information of the target aircraft in the historical time period; and correcting the historical movement track according to the GIS track information.
27. The system of claim 26, further comprising: ADS-B equipment on the target aircraft and/or a scene monitoring radar arranged in the airport;
the ADS-B equipment on the target aircraft and/or a scene monitoring radar arranged in the airport are used for: sending the detected GIS track information of the target aircraft to the server-side equipment;
when the server device obtains the GIS track information of the target object in the historical time period, the server device is specifically configured to perform at least one of the following operations:
acquiring GIS track information of the target object in the historical time period from the ADS-B equipment;
and acquiring GIS track information of the target object in the historical time period from the scene monitoring radar.
28. The system according to any one of claims 24-27, wherein the server device is further configured to:
adding the identification information of the target aircraft serving as an external label to the area where the target aircraft is located in the image which is acquired by the plurality of cameras and contains the target aircraft;
and generating a video abstract of the target airplane according to the image with the external label.
29. A computer device, comprising: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area;
acquiring a plurality of historical images collected before the image to be identified; the plurality of historical images are images which are collected during the process that a target object moves in the designated area and contain the target object;
determining a historical movement track of the target object in a historical time period according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track.
30. A computer device, comprising: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring an image to be identified, wherein the image to be identified is acquired by first image acquisition equipment arranged in a designated area;
acquiring a plurality of historical images which are acquired by at least one second image acquisition device when a target object moves in the designated area and contain the target object; the second image acquisition equipment is arranged in the designated area and is positioned in front of the first image acquisition equipment;
determining a historical movement track of the target object according to the plurality of historical images;
and determining whether the image to be identified contains the target object or not according to the historical movement track of the target object.
31. A computer device, comprising: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area;
acquiring a historical movement track of a target object before the image to be identified is acquired;
determining whether the image to be identified contains the target object or not according to the historical movement track;
and under the condition that the image to be recognized contains the target object, generating a video abstract of the target object in the specified area according to the image to be recognized and other images containing the target object.
32. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-22.
CN201911073091.0A 2019-11-05 2019-11-05 Image processing method, device, system and storage medium Active CN111079525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911073091.0A CN111079525B (en) 2019-11-05 2019-11-05 Image processing method, device, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911073091.0A CN111079525B (en) 2019-11-05 2019-11-05 Image processing method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN111079525A true CN111079525A (en) 2020-04-28
CN111079525B CN111079525B (en) 2023-05-30

Family

ID=70310696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911073091.0A Active CN111079525B (en) 2019-11-05 2019-11-05 Image processing method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN111079525B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043964A (en) * 2010-12-30 2011-05-04 复旦大学 Tracking algorithm and tracking system for taking-off and landing of aircraft based on tripod head and camera head
CN103714553A (en) * 2012-10-09 2014-04-09 杭州海康威视数字技术股份有限公司 Multi-target tracking method and apparatus
CN103927508A (en) * 2013-01-11 2014-07-16 浙江大华技术股份有限公司 Target vehicle tracking method and device
CN104424648A (en) * 2013-08-20 2015-03-18 株式会社理光 Object tracking method and device
CN105975633A (en) * 2016-06-21 2016-09-28 北京小米移动软件有限公司 Motion track obtaining method and device
CN106951871A (en) * 2017-03-24 2017-07-14 北京地平线机器人技术研发有限公司 Movement locus recognition methods, device and the electronic equipment of operating body
CN107045805A (en) * 2017-03-07 2017-08-15 安徽工程大学 A kind of monitoring method and system for small-sized aerial craft and thing drifted by wind
CN107516303A (en) * 2017-09-01 2017-12-26 成都通甲优博科技有限责任公司 Multi-object tracking method and system
CN107967298A (en) * 2017-11-03 2018-04-27 深圳辉锐天眼科技有限公司 Method for managing and monitoring based on video analysis
CN109087335A (en) * 2018-07-16 2018-12-25 腾讯科技(深圳)有限公司 A kind of face tracking method, device and storage medium
CN109635657A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Method for tracking target, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Shixuan et al., "Key Technologies for Video Detection and Tracking of Small Low-Altitude Flying Objects" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612675A (en) * 2020-05-18 2020-09-01 浙江宇视科技有限公司 Method, device and equipment for determining peer objects and storage medium
CN111612675B (en) * 2020-05-18 2023-08-04 浙江宇视科技有限公司 Method, device, equipment and storage medium for determining peer objects
CN113282782A (en) * 2021-05-21 2021-08-20 三亚海兰寰宇海洋信息科技有限公司 Track acquisition method and device based on multi-point phase camera array
CN113095447A (en) * 2021-06-10 2021-07-09 深圳联合安防科技有限公司 Detection method and system based on image recognition
CN113095447B (en) * 2021-06-10 2021-09-07 深圳联合安防科技有限公司 Detection method and system based on image recognition

Also Published As

Publication number Publication date
CN111079525B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
US11836858B2 (en) Incident site investigation and management support system based on unmanned aerial vehicles
CN111079525B (en) Image processing method, device, system and storage medium
KR101555450B1 (en) Method for providing arrival information, and server and display for the same
US20170161961A1 (en) Parking space control method and system with unmanned paired aerial vehicle (uav)
US20020109625A1 (en) Automatic method of tracking and organizing vehicle movement on the ground and of identifying foreign bodies on runways in an airport zone
WO2019186591A1 (en) Method and system for automating flow of operations on airports
US10867522B1 (en) Systems and methods for vehicle pushback collision notification and avoidance
CN113866758B (en) Scene monitoring method, system, device and readable storage medium
CN113286081B (en) Target identification method, device, equipment and medium for airport panoramic video
US20240028030A1 (en) System and apparatus for resource management
CN107393347A (en) Method and apparatus for detecting airport building area jamming
US20190362637A1 (en) Automated vehicle control
CN112085953A (en) Traffic command method, device and equipment
CN111047231A (en) Inventory method and system, computer system and computer readable storage medium
CN111951328A (en) Object position detection method, device, equipment and storage medium
WO2022143181A1 (en) Information processing method and apparatus, and information processing system
Saifutdinov et al. An emulation oriented method and tool for test of ground traffic control systems at airports
CN111857187B (en) T-beam construction tracking system and method based on unmanned aerial vehicle
CN114648572A (en) Virtual positioning method and device and virtual positioning system
Mund et al. Can LiDAR point clouds effectively contribute to safer apron operations?
CN117523500B (en) Monitoring system, method and storage medium of flight guarantee node
Lu et al. Field testing of vision-based surveillance system for ramp area operations
CN113573007A (en) Image processing method, device, apparatus, system and storage medium
CN113535863B (en) Moving track rendering method and device and storage medium
US20230366683A1 (en) Information processing device and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant