CN111079525B - Image processing method, device, system and storage medium - Google Patents


Info

Publication number
CN111079525B
CN111079525B (application CN201911073091.0A)
Authority
CN
China
Prior art keywords
target object
image
historical
target
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911073091.0A
Other languages
Chinese (zh)
Other versions
CN111079525A (en)
Inventor
Meng Wei (孟伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201911073091.0A priority Critical patent/CN111079525B/en
Publication of CN111079525A publication Critical patent/CN111079525A/en
Application granted granted Critical
Publication of CN111079525B publication Critical patent/CN111079525B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Electricity, gas or water supply
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/48 Matching video sequences

Abstract

The embodiments of the present application provide an image processing method, device, system, and storage medium. In these embodiments, a historical movement track of a target object is determined according to a plurality of historical images that were acquired before an image to be identified and that contain the target object; whether the image to be identified contains the target object is then determined according to the historical movement track. Images containing the target object can thus be screened out, realizing identification of the target object and facilitating subsequent verification of the target object's condition.

Description

Image processing method, device, system and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, system, and storage medium.
Background
In the existing transportation field, many cameras are installed in places such as airports, railway stations, passenger stations, ports, and wharfs where flights, vehicles, or ships arrive, depart, and stop. The videos shot by these cameras can provide references for monitoring the conditions of the flights, vehicles, or ships.
Taking an airport as an example, when optimizing the aircraft scheduling process or verifying faults such as flight delays, the video collected by cameras in the airport can be consulted. However, in practical applications, the aircraft to be verified cannot be identified from the video captured by a camera.
Disclosure of Invention
Aspects of the present application provide an image processing method, apparatus, system, and storage medium, so as to implement identification of a target object, thereby facilitating subsequent verification of a situation of the target object.
The embodiment of the application provides an image processing method, which comprises the following steps:
acquiring an image to be identified acquired by target image acquisition equipment in a designated area;
acquiring a plurality of historical images acquired before the image to be identified; the plurality of historical images are images containing a target object, acquired while the target object moves in the designated area;
according to the plurality of historical images, determining a historical movement track of the target object in a historical time period;
and determining whether the image to be identified contains the target object according to the historical movement track.
The embodiment of the application also provides an image processing method, which comprises the following steps: acquiring an image to be identified, wherein the image to be identified is acquired by a first image acquisition device arranged in a designated area;
acquiring a plurality of historical images containing a target object, wherein the historical images are acquired by at least one second image acquisition device while the target object moves in the designated area; the second image acquisition device is arranged in the designated area and is located in front of the first image acquisition device;
According to the plurality of historical images, determining a historical movement track of the target object;
and determining whether the image to be identified contains the target object according to the historical movement track.
The embodiment of the application also provides a monitoring system, which comprises: a server device and a plurality of image acquisition devices arranged in a designated area;
the plurality of image acquisition devices are used for acquiring images in a specified area, and the images comprise moving objects appearing in the specified area;
the server device is configured to: acquiring images to be identified acquired by target image acquisition equipment in a plurality of image acquisition equipment; acquiring a plurality of historical images containing the target object, which are acquired before the image to be identified, from the images acquired by the plurality of image acquisition devices; according to the plurality of historical images, determining a historical movement track of the target object in a historical time period; and determining whether the image to be identified contains the target object according to the historical movement track.
The embodiment of the application also provides an airport monitoring system, comprising: a server device and a plurality of cameras arranged in the airport;
The cameras are used for collecting images of all airplanes in the airport;
the server device is configured to: acquire a plurality of historical images containing a target aircraft that were acquired before an image to be identified collected by a target camera; determine, according to the plurality of historical images, a historical movement track of the target aircraft in a historical time period; and determine whether the image to be identified contains the target aircraft according to the historical movement track; the target camera is any one of the plurality of cameras except a reference camera.
The embodiment of the application also provides a computer device, which comprises: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring an image to be identified acquired by target image acquisition equipment in a designated area;
acquiring a plurality of historical images acquired before the image to be identified; the plurality of historical images are images containing a target object, acquired while the target object moves in the designated area;
according to the plurality of historical images, determining a historical movement track of the target object in a historical time period;
And determining whether the image to be identified contains the target object according to the historical movement track.
Embodiments of the present application also provide a computer device, comprising: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring an image to be identified, wherein the image to be identified is acquired by a first image acquisition device arranged in a designated area;
acquiring a plurality of historical images containing a target object, wherein the historical images are acquired by at least one second image acquisition device while the target object moves in the designated area; the second image acquisition device is arranged in the designated area and is located in front of the first image acquisition device;
according to the plurality of historical images, determining a historical movement track of the target object;
and determining whether the target object is contained in the image to be identified according to the historical movement track of the target object.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the above-described image processing methods.
In the embodiments of the present application, a historical movement track of a target object is determined according to a plurality of historical images that were acquired before an image to be identified and that contain the target object; whether the image to be identified contains the target object is then determined according to the historical movement track, so that images containing the target object can be screened out, identification of the target object is realized, and subsequent verification of the target object's condition is facilitated.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1a is a schematic diagram of an airport monitoring system according to an embodiment of the present application;
FIG. 1b is a schematic diagram of a method for determining a first position according to an embodiment of the present application;
FIG. 1c is a schematic diagram of another airport monitoring system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 3a is a flowchart illustrating another image processing method according to an embodiment of the present disclosure;
FIG. 3b is a flowchart illustrating another image processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a monitoring system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another computer device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of still another computer device according to an embodiment of the present application.
Detailed Description
To clarify the purposes, technical solutions, and advantages of the present application, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments and the corresponding drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art without undue burden from the present disclosure fall within the scope of the present disclosure.
To address the technical problem that, in the existing transportation field, a target object cannot be identified from video shot by a camera, some embodiments of the present application determine the historical movement track of the target object according to a plurality of historical images that were acquired before the image to be identified and that contain the target object; whether the image to be identified contains the target object is then determined according to the historical movement track, so that images containing the target object can be screened out, identification of the target object is realized, and subsequent verification of the target object's condition is facilitated.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1a is a schematic structural diagram of an airport monitoring system according to an embodiment of the present application. As shown in fig. 1a, the system comprises: a server device 10a and a plurality of cameras 10b arranged in an airport. The structure of the airport, the positions and number of cameras in the airport, and the implementation form of the cameras shown in fig. 1a are only exemplary and not limiting. In practical applications, as shown in fig. 1a, an airport includes a terminal building, ferry vehicles (not shown in fig. 1a), and the like.
In this embodiment, the server device 10a is a computer device capable of performing image processing and generally has the capability of undertaking and guaranteeing services. The server device 10a may be a single server device, a cloud server array, or a Virtual Machine (VM) running in a cloud server array. The server device 10a may also be another computing device with corresponding service capabilities, for example a terminal device such as a computer running an image processing program. The relative position of the server device 10a and the airport is not limited in the embodiments of the present application; the server device 10a may be arranged inside or outside the airport.
In this embodiment, the server device 10a may process the image to be identified online or offline. Optionally, a wireless connection may exist between the server device 10a and each camera 10b. Optionally, the server device 10a may be communicatively connected to the cameras 10b through a mobile network; accordingly, the network system of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMAX, and the like. Alternatively, the server device 10a may be communicatively connected to each camera 10b by Bluetooth, WiFi, infrared, or the like.
The plurality of cameras 10b may capture images within the airport. Since this embodiment mainly processes images of aircraft collected by the cameras 10b, the processing of the images of the aircraft in the airport collected by the plurality of cameras 10b is described with emphasis.
In this embodiment, the plurality of cameras 10b may capture images of each aircraft within the airport, including images of each aircraft as it moves within the airport. An aircraft must move within the airport from landing until it parks on the apron; the movement route is shown by the dotted line in fig. 1a: generally, the aircraft moves from the landing runway to a taxiway and then from the taxiway to its parking place on the apron. Likewise, the aircraft must move within the airport from the apron in order to take off; the route generally runs from the apron to a taxiway and from the taxiway to the runway for take-off. The above are only exemplary movement routes of an aircraft within an airport; this does not imply that an aircraft moves only through the runway, taxiway, and apron areas while moving within the airport.
In practical applications, the distance between a camera in the airport and an aircraft moving in the airport is relatively long and the aircraft moves relatively fast, so the images of a moving aircraft collected by a camera 10b cannot display the aircraft's identification information. In the prior art, therefore, the server device 10a cannot identify an aircraft of interest from the images collected by the cameras 10b. In the embodiments of the present application, for convenience of description and distinction, the aircraft of interest is defined as the target aircraft.
In practical applications, if the target aircraft is in the landing and taxiing stage, the air traffic control department notifies the server device 10a of the identification information of the target aircraft, its landing time, and its landing runway. The identification information of the target aircraft includes, but is not limited to: the registration number of the target aircraft, the flight number, and the like. Based on this, the server device 10a acquires images containing the target aircraft from the images acquired by cameras arranged along the runway on which the target aircraft lands, according to the landing time and the image characteristics of the target aircraft. In this process, the server device 10a may also acquire the identification information of the target aircraft.
In addition, the Automatic Dependent Surveillance-Broadcast (ADS-B) device on the target aircraft may, before the aircraft lands on the runway, send the target aircraft's identification information, its kinematic parameters before landing, and its Geographic Information System (GIS) information to the server device 10a. The kinematic parameters of the target aircraft before landing include at least one of its movement speed, traveling direction, and acceleration. The server device 10a can calculate the time at which the target aircraft reaches the runway threshold from these kinematic parameters and the GIS information. Further, the server device 10a may acquire images containing the target aircraft from the images acquired by cameras arranged at the runway threshold, according to the time at which the target aircraft reaches the threshold and the image characteristics of the target aircraft. In this process, the server device 10a may also acquire the identification information of the target aircraft.
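The arrival-time calculation described above can be sketched as follows. This is a minimal illustrative example and not the claimed implementation: the function name is hypothetical, and a constant-acceleration model is assumed, solving d = v·t + 0.5·a·t² for the positive root t given the ADS-B speed and acceleration and a distance derived from the GIS information.

```python
import math

def time_to_threshold(distance_m, speed_mps, accel_mps2=0.0):
    """Estimate when the target aircraft reaches the runway threshold,
    assuming constant acceleration: solve d = v*t + 0.5*a*t^2 for t > 0."""
    if abs(accel_mps2) < 1e-9:
        # Constant-speed case: t = d / v.
        return distance_m / speed_mps
    # Positive root of the quadratic 0.5*a*t^2 + v*t - d = 0.
    disc = speed_mps ** 2 + 2.0 * accel_mps2 * distance_m
    return (-speed_mps + math.sqrt(disc)) / accel_mps2

# 1000 m from the threshold at a constant 50 m/s takes 20 s.
print(time_to_threshold(1000.0, 50.0))
```

Adding the camera's clock offset to this estimate would give the frame timestamps to search for images containing the target aircraft.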
If the target aircraft is in the take-off and taxiing stage, the air traffic control department notifies the server device 10a of the identification information of the target aircraft, its berth on the apron, and its take-off time. Based on this, the server device 10a can acquire images containing the target aircraft from the images acquired by cameras whose fields of view cover the target aircraft's berth on the apron, according to the take-off time and the berth position.
However, for cameras other than those at the two specific positions above, it cannot be determined by the above methods whether their acquired images contain the target aircraft. In the embodiments of the present application, the camera arranged at the runway threshold and the camera whose field of view covers the target aircraft's berth on the apron are uniformly defined as reference cameras. If the aircraft is in the landing and taxiing stage, the reference camera is the camera arranged at the threshold of the airport runway; if the aircraft is in the take-off and taxiing stage, the reference camera is the camera whose field of view covers the target aircraft's berth on the apron.
The following exemplarily describes how the server device 10a processes an image to be identified, taking an image acquired by any camera other than the reference cameras as an example. For convenience of description and distinction, the camera that collects the image to be identified is defined as the target camera; that is, the target camera is any camera arranged in the airport other than the reference cameras.
In this embodiment, the server device 10a may acquire a plurality of historical images containing the target aircraft that were acquired before the image to be identified. Here, a plurality means 2 or more. The plurality of historical images may be collected by the target camera, by other cameras arranged in front of the target camera, or by both. Preferably, the plurality of historical images are the M historical images whose acquisition times are closest to that of the image to be identified, where M ≥ 2 and M is an integer whose specific value can be set flexibly without limitation. It should be noted that, in the embodiments of the present application, the cameras in front of the target camera are the cameras that the target aircraft passes before passing the target camera. For example, if the target aircraft is in the landing and taxiing stage, as shown by the dashed line in fig. 1a, the target aircraft passes the cameras numbered 1, 2, ..., 7 in sequence; for the camera numbered 3, the front cameras are those numbered 1 and 2. If the target aircraft is in the take-off and taxiing stage, it passes the cameras numbered 7, 6, ..., 1 in sequence (not shown in fig. 1a); for the camera numbered 3, the front cameras are those numbered 4-7.
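Selecting the M historical images closest in acquisition time to the image to be identified can be sketched as follows. This is an illustrative sketch only; the `Frame` record and function name are assumptions, standing in for whatever image metadata the server device actually stores.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int    # hypothetical identifier of the collecting camera
    timestamp: float  # acquisition time, in seconds

def nearest_history(frames, t_query, m=2):
    """Return the m frames acquired before t_query whose acquisition
    times are closest to the image to be identified, in time order."""
    earlier = [f for f in frames if f.timestamp < t_query]
    earlier.sort(key=lambda f: t_query - f.timestamp)  # nearest first
    return sorted(earlier[:m], key=lambda f: f.timestamp)

frames = [Frame(1, 10.0), Frame(2, 12.0), Frame(2, 14.0), Frame(3, 15.5)]
hist = nearest_history(frames, 16.0, m=2)
print([f.timestamp for f in hist])  # [14.0, 15.5]
```

In a real system the candidate `frames` would be restricted to images already known to contain the target aircraft.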
Further, the server device 10a may determine the historical movement track of the target aircraft in a historical time period according to the plurality of historical images. The historical time period is the time period over which the plurality of historical images were collected. For example, if the plurality of historical images were collected during 13:00-13:05 on July 12, 2019, the historical time period is 13:00-13:05 on July 12, 2019. Further, the server device 10a may determine whether the image to be identified contains the target aircraft according to the historical movement track of the target aircraft.
The airport monitoring system provided by the embodiment can determine the historical movement track of the target aircraft according to a plurality of historical images which are acquired before the image to be identified and contain the target aircraft; and determining whether the image to be identified contains the target aircraft according to the historical movement track of the target aircraft, so that the image containing the target aircraft can be screened out, the identification of the target aircraft is realized, and further, the follow-up verification of the condition of the target aircraft is facilitated. For example, the travel condition of the target aircraft in the airport may be verified based on images acquired by a plurality of cameras in the airport, including the target aircraft, and so on.
On the other hand, if the plurality of historical images are collected by other cameras arranged in front of the target camera, or by both the target camera and those cameras, the image processing method provided by the embodiments of the present application enables cross-camera recognition and tracking of the target aircraft.
In addition, cameras are existing facilities in an airport, so the image processing approach provided by the embodiments of the present application requires no additional image acquisition equipment, that is, no additional investment in image acquisition.
It should be noted that, in the embodiments of the present application, if the camera in front of the target camera is a reference camera, the plurality of historical images can be obtained from the images collected by the reference camera using the methods described above. If a camera in front of the target camera is not a reference camera, the method provided by this embodiment can first be used to determine whether the images acquired by that camera contain the target aircraft, and the plurality of historical images containing the target aircraft can be determined from them. For example, for the image to be identified collected by the first camera behind a reference camera, the historical images containing the target aircraft collected by the reference camera can be used to determine whether that image contains the target aircraft; the images containing the target aircraft obtained from that camera then serve as the historical images for processing the image to be identified collected by the next camera, and so on. Alternatively, the plurality of historical images containing the target aircraft whose acquisition times are closest to that of the image to be identified can be obtained according to the timestamp of the image to be identified: if the camera in front of the target camera is not a reference camera, the image processing method provided by the embodiments of the present application is used to determine which images acquired before the image to be identified contain the target aircraft, and the historical images closest in acquisition time are selected from those that do.
In the embodiments of the present application, the server device 10a may predict, according to the historical movement track, a first position to which the target aircraft moves by the time the image to be identified is acquired, and determine whether the image to be identified contains the target aircraft according to the image characteristics of the target aircraft and the first position. For example, the server device 10a may use a Kalman filter to predict the first position to which the target aircraft moves when the image to be identified is acquired.
Further, the server device 10a may obtain, from the historical movement track, the timestamp at which the target aircraft passes each historical track point and the position of each historical track point, and calculate the kinematic parameters of the target aircraft in the historical time period from those timestamps and the coordinates of the track points under a preset coordinate system. The kinematic parameters of the target aircraft over the historical time period include at least one of its moving speed, traveling direction, and acceleration. The preset coordinate system is the coordinate system in which the coordinates representing the positions of the historical track points lie; it may be established from any reference point and reference plane. For example, the preset coordinate system may take the center of the airport as the origin, any two perpendicular lines on the ground as the x-axis and y-axis, and the direction perpendicular to the ground as the z-axis; alternatively, it may be the world coordinate system, and so on, without limitation.
Further, the server device 10a may predict the first position to which the target aircraft moves when the image to be recognized is acquired, based on the kinematic parameters of the target aircraft and the position of at least one of the historical track points. Further, the server device 10a may determine whether the image to be identified includes the target aircraft according to the image feature of the target aircraft and the coordinates of the first position in the preset coordinate system. Wherein the image features of the target aircraft include: color features, texture features, shape features, or spatial relationship features of the target aircraft, etc., but are not limited thereto.
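The kinematic-parameter calculation and position prediction above can be sketched as follows. This is an illustrative example under stated assumptions: the function names are hypothetical, and a constant-velocity extrapolation is used as a simple stand-in for the Kalman prediction mentioned in the text.

```python
import math

def kinematics(track):
    """track: time-ordered list of (t, x, y) historical track points in
    the preset coordinate system. Returns (speed, heading, acceleration)
    estimated from the last segments of the track."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed, heading = math.hypot(vx, vy), math.atan2(vy, vx)
    accel = 0.0
    if len(track) >= 3:
        tp, xp, yp = track[-3]
        v_prev = math.hypot((x0 - xp) / (t0 - tp), (y0 - yp) / (t0 - tp))
        accel = (speed - v_prev) / dt
    return speed, heading, accel

def predict_first_position(track, t_query):
    """Extrapolate the position at t_query from the last track point
    (constant-velocity stand-in for the Kalman prediction)."""
    speed, heading, _ = kinematics(track)
    t1, x1, y1 = track[-1]
    dt = t_query - t1
    return (x1 + speed * math.cos(heading) * dt,
            y1 + speed * math.sin(heading) * dt)

# An aircraft taxiing along the x-axis at 10 m/s.
track = [(0.0, 0.0, 0.0), (1.0, 10.0, 0.0), (2.0, 20.0, 0.0)]
print(predict_first_position(track, 3.0))  # (30.0, 0.0)
```

A Kalman filter would additionally weight these estimates by measurement noise; the simple extrapolation here only illustrates the data flow from track points to the predicted first position.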
In some embodiments, the image to be identified may contain multiple aircraft with similar image features. For example, as shown in fig. 1a, for a camera whose field of view covers the apron, a captured image may contain multiple aircraft whose image features are very similar. Based on this, the server device 10a may identify at least one candidate aircraft from the image to be identified according to the image features of the target aircraft, and convert the pixel coordinates of each candidate aircraft in the image to be identified into coordinates under the preset coordinate system according to the homography matrix of the camera corresponding to the image to be identified. Further, the server device 10a may determine whether the target aircraft exists among the candidate aircraft according to the coordinates of the first position and of the candidate aircraft under the preset coordinate system.
Further, the server device 10a may calculate the distance between each candidate aircraft and the first position according to their coordinates under the preset coordinate system. If the distance between at least one candidate aircraft and the first position is less than or equal to a preset distance threshold, it is determined that the target aircraft exists among the candidate aircraft. Conversely, if the distances between all candidate aircraft and the first position are greater than the preset distance threshold, it is determined that the target aircraft does not exist among them.
Further, in the case where the target aircraft exists among the at least one candidate aircraft, the server device 10a may select, as the target aircraft, the candidate aircraft with the smallest distance from the first position. For example, as shown in fig. 1b, assuming that the position marked by the five-pointed star is the predicted first position to which the target aircraft moves when the image to be identified is acquired, and the distance between the aircraft numbered B and the first position is smaller than the distances between the aircraft numbered A and C and the first position, the aircraft numbered B is determined to be the target aircraft.
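The distance-threshold matching and nearest-candidate selection described above can be sketched as follows. The function name and candidate labels are illustrative assumptions; the logic is the thresholded nearest-neighbour rule from the text.

```python
import math

def match_target(first_pos, candidates, dist_threshold):
    """candidates maps an aircraft label to its (x, y) coordinates in
    the preset coordinate system. Returns the label of the candidate
    nearest the predicted first position if it lies within the distance
    threshold, otherwise None (target aircraft absent)."""
    best_id, best_d = None, float("inf")
    for cid, (x, y) in candidates.items():
        d = math.hypot(x - first_pos[0], y - first_pos[1])
        if d < best_d:
            best_id, best_d = cid, d
    return best_id if best_d <= dist_threshold else None

# Three visually similar aircraft; the predicted first position is near B.
candidates = {"A": (0.0, 0.0), "B": (48.0, 2.0), "C": (100.0, 0.0)}
print(match_target((50.0, 0.0), candidates, dist_threshold=10.0))  # B
```

If no candidate falls within the threshold, the function reports that the target aircraft is absent from the image to be identified.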
For the above-mentioned historical movement track of the target aircraft in the historical time period, in some embodiments, the server device 10a may convert, according to the homography matrices of the cameras corresponding to the plurality of historical images, the pixel coordinates of the target aircraft in the plurality of historical images into coordinates of the historical track points passed by the target aircraft under the preset coordinate system, and take these coordinates as the positions of the historical track points. Further, the server device 10a takes the time stamps of the plurality of historical images as the time stamps at which the target aircraft passes through the respective historical track points, and generates the historical movement track of the target aircraft from these time stamps and positions. The homography matrix of a camera is a conversion matrix between the pixel coordinates of an image acquired by the camera and the preset coordinate system, and may be calculated by the server device 10a from the pose and the internal and external parameters of the camera. The pose of the camera comprises its position coordinates and attitude angle under the preset coordinate system, which are determined by the installation position and installation height of the camera.
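The pixel-to-world conversion described above can be sketched as follows (an illustrative example only; the function name, the sample homography matrix and the sample pixel coordinates are hypothetical and not part of the disclosure):

```python
import numpy as np

def pixel_to_world(H, u, v):
    """Map pixel coordinates (u, v) to the preset (world) coordinate
    system using the camera's 3x3 homography matrix H.

    The homography acts on homogeneous coordinates, so the result is
    de-homogenized by dividing by the third component."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical homography: scale pixels by 0.1 and shift the origin.
H = np.array([[0.1, 0.0, -10.0],
              [0.0, 0.1, -5.0],
              [0.0, 0.0, 1.0]])
x, y = pixel_to_world(H, 320.0, 240.0)   # -> (22.0, 19.0)
```

Applying the same conversion to each historical image yields the timestamped track points described above.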
In practical application, because the monitoring fields of view of the cameras may not overlap, especially when the plurality of historical images are acquired both by the target camera and by other cameras arranged in front of the target camera, the historical movement track may be discontinuous, and a large error may exist between the subsequently predicted first position and the position to which the target aircraft has actually moved. Based on this, the server device 10a may acquire GIS track information of the target aircraft in the historical time period and correct the historical movement track of the target aircraft in the historical time period by using the GIS track information. Then, the server device 10a determines whether the image to be recognized contains the target aircraft according to the corrected historical movement track. Further, since the GIS track information is determined based on the world coordinate system, the preset coordinate system may be the world coordinate system in order to reduce the number of coordinate conversions.
Further, for some airports, as shown in fig. 1c, a scene monitoring radar 10c may be provided above the ground. The scene monitoring radar 10c can monitor the movement of aircraft and vehicles in the airport and provide their GIS track information. Based on this, in some embodiments, the airport monitoring system further comprises a scene monitoring radar 10c. The scene monitoring radar 10c transmits the GIS track information of the target aircraft it detects to the server device 10a. Accordingly, the server device 10a may obtain the GIS track information of the target aircraft in the historical time period from the GIS track information sent by the scene monitoring radar 10c according to its time stamps, and correct the historical movement track of the target aircraft in the historical time period by using this GIS track information.
For some aircraft, an ADS-B device (not shown in figs. 1a-1c) may be mounted thereon. The ADS-B device can automatically acquire information such as GIS track information, altitude, speed, heading and identification information of the aircraft, and broadcast the information to other aircraft or ground stations. In the embodiment of the present application, the server device 10a may be a device in a ground station that receives information broadcast by an ADS-B device on an aircraft, or another computer device in communication with such a device. In either case, the server device 10a may obtain the GIS track information of the aircraft sent by the ADS-B device on the aircraft. Based on this, in some embodiments, the server device 10a may obtain the GIS track information of the target aircraft in the historical time period according to the time stamps of the GIS track information sent by the ADS-B device on the target aircraft, and correct the historical movement track of the target aircraft in the historical time period by using this GIS track information.
In other embodiments, to improve the accuracy of the generated historical movement track of the target aircraft in the historical time period, the server device 10a may further combine the GIS track information of the target aircraft in the historical time period sent by the ADS-B device on the target aircraft with the GIS track information of the target aircraft in the historical time period detected by the scene monitoring radar 10c, and correct the historical movement track information of the target aircraft in the historical time period.
In the embodiment of the application, the scene monitoring radar 10c and the cameras 10b are existing facilities in the airport, and the ADS-B device is an existing device on the aircraft, so the historical movement track of the target aircraft in the historical time period can be corrected without any additional positioning device, that is, without incurring additional positioning cost.
On the other hand, due to differing operating habits of flight crews, the ADS-B device may be turned off during landing or take-off, in which case GIS information of the aircraft on the ground cannot be acquired from it. Although the airport scene monitoring radar can also locate aircraft on the ground, its signal is sometimes blocked, making its positioning inaccurate. Therefore, in this embodiment, when determining the historical movement track of the target aircraft in the historical time period, the GIS information of the target aircraft acquired by the ADS-B device and/or the scene monitoring radar is fused with the above-mentioned images acquired by the cameras, which improves the accuracy of the determined historical movement track of the target aircraft in the historical time period.
In the embodiment of the present application, for the images including the target aircraft acquired by the reference camera, the server device 10a may acquire the identification information of the target aircraft from the air traffic management department and/or the ADS-B device on the target aircraft. Then, the server device 10a sequentially determines the images including the target aircraft from the images acquired by the cameras behind the reference camera, starting from the images including the target aircraft acquired by the reference camera, so as to obtain the images including the target aircraft acquired by the plurality of cameras 10b in the airport. Further, the server device 10a may add the identification information of the target aircraft as an external tag to the area where the target aircraft is located in these images, and generate a video summary of the target aircraft from the tagged images. Therefore, when the video corresponding to the target aircraft needs to be checked, the video summary of the target aircraft can be retrieved simply by inputting the identification information of the target aircraft. Further, airport management personnel can optimize the aircraft scheduling process according to the video summary of the target aircraft, or visually check the condition of the target aircraft according to the video summary, thereby providing a basis for investigating delays or other faults of the target aircraft.
Optionally, the video summary of the target aircraft may be generated from the images containing the target object collected by the cameras, ordered chronologically by their time stamps.
Further, the server device 10a may convert the pixel coordinates of the target aircraft in the image to be identified into coordinates of the target aircraft under the preset coordinate system according to the homography matrix of the camera that collected the image to be identified, and use these coordinates as the position of the corresponding track point; the time stamp of the image to be identified is taken as the time stamp at which the target aircraft passes through that position. By the same method, the server device 10a can acquire each track point of the target aircraft in the airport, and thereby obtain a time-ordered moving track of the target aircraft in the airport. Further, the server device 10a may splice the moving track of the target aircraft with the background image and combine the frames into a video, thereby obtaining the video summary of the target aircraft in the airport.
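Assembling the per-camera images into a chronological frame sequence, as in the video summary generation above, can be sketched as follows (illustrative only; the data layout and names are hypothetical):

```python
def video_summary_frames(tagged_images):
    """Order the tagged images containing the target object by time
    stamp across all cameras and return the resulting frame sequence.

    `tagged_images` maps a camera id to a list of (timestamp, frame)
    pairs; the frames here are placeholders for tagged image data."""
    frames = [pair for per_cam in tagged_images.values() for pair in per_cam]
    frames.sort(key=lambda pair: pair[0])      # chronological order
    return [frame for _t, frame in frames]

summary = video_summary_frames({
    "cam1": [(0.0, "frame0"), (2.0, "frame2")],
    "cam2": [(1.0, "frame1")],
})  # -> ["frame0", "frame1", "frame2"]
```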
In this embodiment, the server device 10a may further obtain the movement tracks of other aircraft currently in the airport, plan a navigation path for the target aircraft according to those movement tracks, and instruct the target aircraft to move along the planned navigation path. This helps prevent collisions between aircraft moving in the airport. The movement tracks of the other aircraft may be determined in the same way as the movement track of the target aircraft in the above embodiment.
In addition to the airport monitoring system embodiment provided in the embodiment of the present application, the embodiment of the present application further provides an image processing method, and in the following, from the perspective of the server device, the image processing method provided in the embodiment of the present application is exemplarily described.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application. The method is applicable to the server device. As shown in fig. 2, the method includes:
201. Acquiring an image to be identified collected by a target image acquisition device in the designated area.
202. Acquiring a plurality of historical images acquired before an image to be identified; the plurality of historical images are images containing the target object, which are acquired in the process of moving the target object in the designated area.
203. Determining the historical movement track of the target object in the historical time period according to the plurality of historical images.
204. Determining whether the image to be identified contains the target object according to the historical movement track.
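Steps 201-204 can be wired together as in the following sketch (illustrative only; the helper callables and the one-dimensional toy data are hypothetical, and concrete realizations of prediction and matching are described later in this embodiment):

```python
def identify(image, history_images, build_track, predict, match):
    """Skeleton of steps 203-204, assuming steps 201/202 already
    delivered the image to be identified and the historical images."""
    track = build_track(history_images)           # step 203
    first_position = predict(track, image["t"])   # step 204: predicted position
    return match(image, first_position)           # step 204: contains target?

# Toy 1-D instantiation: the target moved from 0 to 10 between t=0 and
# t=1, so at t=2 it is predicted near 20; a detection at 19.6 matches.
result = identify(
    {"t": 2.0, "detections": [19.6]},
    [{"t": 1.0, "pos": 10.0}, {"t": 0.0, "pos": 0.0}],
    build_track=lambda hs: sorted((h["t"], h["pos"]) for h in hs),
    predict=lambda tr, t: tr[-1][1]
    + (tr[-1][1] - tr[-2][1]) / (tr[-1][0] - tr[-2][0]) * (t - tr[-1][0]),
    match=lambda img, p: any(abs(d - p) <= 2.0 for d in img["detections"]),
)
```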
In this embodiment, the designated area may be any physical location where a plurality of image capturing devices are deployed to capture images of a moving object. For example, the designated area may be a railway station, a passenger station, a port, a dock, a parking lot, a warehouse, etc., wherein the image pickup device may be a visual sensor such as a camera, a laser sensor, or an infrared sensor, but is not limited thereto.
In the present embodiment, a plurality of image capturing apparatuses may capture images within the specified area. Since this embodiment mainly processes images of moving objects appearing in the specified area, the following description focuses on how the images of moving objects acquired by the plurality of image acquisition devices are processed.
In this embodiment, the plurality of image capturing apparatuses may capture images of each moving object within the specified area, including images of each moving object while it moves within the specified area. In practical application, because the image capturing devices are relatively far from the moving objects and the moving objects move relatively fast, the captured images often cannot show the identification information of a moving object, so in the prior art the moving object of interest cannot be recognized from the captured images. In the embodiment of the present application, for convenience of description and distinction, the moving object of interest is defined as the target object. Different application scenes have different target objects and different identification information. For example, in the airport application scenario described above, the target object is a target aircraft, and its identification information may be, but is not limited to, the aircraft number or flight number of the target aircraft. For application scenes such as passenger stations, bus stations and parking lots, the target object is a target vehicle, and its identification information is the license plate number of the vehicle, and so on. For application scenes such as wharfs and ports, the target object is a target ship, and its identification information may be, but is not limited to, a ship identification number.
In practical application, if the target object enters the designated area, the designated area management department will notify the server device of the identification information, the entering time and the entering entrance of the target object. For example, the management of the passenger station may inform the server device of the license plate number, time of arrival, and entrance of the target vehicle. Based on this, an image including the target object can be acquired from the image acquired by the image acquisition apparatus disposed at the entrance of the target object, based on the entrance time of the target object and the image characteristics of the target object. In this process, identification information of the target object may also be acquired.
In addition, in some embodiments, before the target object enters the specified area, the ADS-B device on the target object may send the identification information of the target object together with its kinematic parameters and GIS information to the server device. The kinematic parameters of the target object before entering the specified area comprise at least one of its movement speed, traveling direction and acceleration. Based on this, the time at which the target object enters the specified area can be calculated from the GIS information and the kinematic parameters of the target object before entering the specified area. Further, an image including the target object may be acquired, according to the calculated entry time and the image characteristics of the target object, from the images acquired by the image acquisition devices disposed at the entrance of the specified area. In this process, the identification information of the target object may also be acquired.
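The entry-time calculation mentioned above reduces to elementary kinematics; a sketch (illustrative; the names, the sample coordinates and the straight-line-distance simplification are assumptions):

```python
import math

def estimated_entry_time(t_report, pos, entrance, speed, accel=0.0):
    """Estimate when the target object reaches the entrance from its GIS
    position, speed and (optionally) acceleration at report time t_report.

    Solves d = v*t + a*t^2/2 for t; reduces to t = d/v when a == 0."""
    d = math.dist(pos, entrance)
    if accel == 0.0:
        return t_report + d / speed
    # positive root of (a/2)*t^2 + v*t - d = 0
    t = (-speed + math.sqrt(speed * speed + 2.0 * accel * d)) / accel
    return t_report + t

# 600 m from the entrance at 20 m/s: entry predicted 30 s after the report.
t_in = estimated_entry_time(100.0, (0.0, 0.0), (600.0, 0.0), 20.0)  # -> 130.0
```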
In some application scenarios, the designated area entry is provided with object recognition means. When the mobile object enters the specified area entrance, the object recognition apparatus may acquire the entry time of the mobile object and the identification information of the mobile object. Accordingly, when the target object enters the designated area entrance, the object recognition device may acquire the identification information of the target object and the target object entering time, and send the identification information of the target object and the target object entering time to the server device. Further, the server device may acquire an image including the target object from the images acquired by the image acquisition device disposed at the entrance of the designated area according to the driving-in time of the target object and the image characteristics of the target object. For example, an entrance of a passenger station is provided with a vehicle identification device. When the target vehicle enters the entrance of the designated area, the vehicle identification device can acquire the license plate number of the target vehicle and the target vehicle arrival time, and send the license plate number of the target object and the arrival time of the target object to the server-side equipment.
Similarly, in the stage in which the target object leaves the designated area, the designated area management department also provides the identification information of the target object, its stop position and the start time at which it exits the designated area to the server device. Based on this, the server device may acquire an image including the target object, according to that start time and the stop position of the target object, from the images acquired by the image acquisition device whose field of view covers the stop position of the target object.
However, image capturing devices other than those at the two specific positions described above cannot determine in the above manner whether the images they capture contain the target object. In the embodiment of the application, the image acquisition device arranged at the entrance of the designated area and the image acquisition device whose field of view covers the stop position of the target object are collectively defined as reference image acquisition devices. If the target object is in the stage of entering the designated area, the reference image acquisition device is the image acquisition device arranged at the entrance of the designated area; if the target object is in the stage of exiting the designated area, the reference image acquisition device is the image acquisition device whose field of view covers the stop position of the target object.
The method of processing the image to be recognized will be described below by taking an image to be recognized acquired by any one of the image acquisition apparatuses other than the reference image acquisition apparatus as an example. For convenience of description and distinction, an image capturing apparatus that captures an image to be recognized is defined as a target image capturing apparatus, that is, the target image capturing apparatus is any image capturing apparatus provided in a specified area except for a reference image capturing apparatus.
In step 201, an image to be recognized acquired by the target image acquisition device is first acquired. Optionally, the target image acquisition device may send the image to be identified to the server device online, or the server device may read the image to be identified from the storage medium of the target image acquisition device. Next, in step 202, a plurality of history images including the target object acquired before the image to be recognized are acquired. The plurality of history images may be acquired by the target image acquisition device, by at least one other image acquisition device arranged in the designated area in front of the target image acquisition device, or by both. Preferably, the plurality of history images are the M history images whose acquisition times are closest to that of the image to be identified, where M is an integer, M ≥ 2, and its specific value can be set flexibly and is not limited here. The other image capturing devices in front of the target image capturing device are the other image capturing devices through which the target object passes before reaching the target image capturing device this time.
Further, in step 203, a historical movement track of the target object within a historical time period may be determined according to the plurality of historical images. The historical time period is a time period for collecting a plurality of historical images. Further, in step 204, it may be determined whether the target object is included in the image to be identified according to the historical movement track of the target object.
In the embodiment, a historical movement track of a target object is determined according to a plurality of historical images which are acquired before an image to be identified and contain the target object; and determining whether the image to be identified contains the target object according to the historical movement track of the target object, so that the image containing the target object can be screened out, the identification of the target object is realized, and further, the subsequent verification of the condition of the target object is facilitated. For example, the traveling condition of the target object in the airport may be verified based on images containing the target object acquired by a plurality of image acquisition devices in the airport, and so on.
On the other hand, if the plurality of history images are collected by other image collecting devices arranged in front of the target image collecting device, or by the target image collecting device together with such other devices, cross-camera recognition and tracking of the target object can be realized by using the image processing method provided by the embodiment of the application.
In addition, the image acquisition devices are existing facilities in the designated area, so the image processing method provided by the embodiment of the application requires no additional image acquisition devices, that is, no additional image acquisition cost. It should be noted that, in the embodiment of the present application, if the image capturing device in front of the target image capturing device is the reference image capturing device, the plurality of historical images may be obtained from the images captured by the reference image capturing device in the manner described above. If the image capturing devices in front of the target image capturing device are not the reference image capturing device, then for the images captured by those devices, the manner provided in this embodiment may itself be used to determine whether they include the target object, and the plurality of history images including the target object may be determined from them. For example, for the image to be identified collected by the first image collecting device after the reference image collecting device, a plurality of historical images including the target object collected by the reference image collecting device can be used to determine whether that image to be identified includes the target object; a plurality of target images including the target object are then obtained from the images collected by that image collecting device and serve as the historical images for processing the image to be identified collected by the next image collecting device; and so on.
Alternatively, a plurality of historical images including the target object whose acquisition times are closest to that of the image to be identified can be acquired according to the time stamp of the image to be identified. Accordingly, if the image capturing devices in front of the target image capturing device are not the reference image capturing device, the image processing method provided by the embodiment of the present application may be used to determine whether the images captured before the image to be identified include the target object, and to obtain, from the target images including the target object, the plurality of history images whose acquisition times are closest to that of the image to be identified.
In the embodiment of the present application, an alternative implementation manner of step 204 is: predicting a first position to which a target object moves when an image to be identified is acquired according to the historical movement track; and determining whether the image to be identified contains the target object according to the image characteristics and the first position of the target object. For example, a Kalman (Kalman) algorithm may be utilized to predict a first position to which the target object moves when acquiring the image to be identified.
Further, a time stamp of the target object passing through each history track point and the position of each history track point can be obtained from the history movement track; and calculating the kinematic parameters of the target object in the historical time period according to the time stamp of the target object passing through each historical track point and the coordinates of each historical track point under a preset coordinate system. Wherein the kinematic parameters of the target object over the historical period of time include: at least one of a moving speed, a traveling direction and an acceleration of the target object in the history period; the description of the preset coordinate system may be referred to the relevant content of the above embodiment, and will not be repeated here.
Further, a first position to which the target object moves when the image to be recognized is acquired can be predicted according to the kinematic parameter of the target object and the position of at least one of the history track points. Furthermore, whether the image to be identified contains the target object can be determined according to the image characteristics of the target object and the coordinates of the first position under a preset coordinate system. Wherein the image features of the target object include: color features, texture features, shape features, or spatial relationship features of the target object, etc., but are not limited thereto.
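The kinematic parameters and the predicted first position described in the two paragraphs above can be sketched as follows (illustrative; a constant-velocity extrapolation stands in for the Kalman prediction mentioned earlier, and all names and sample values are hypothetical):

```python
import math

def kinematics(track):
    """Derive moving speed, traveling direction and acceleration from
    the last three track points, given as (timestamp, x, y) tuples in
    the preset coordinate system."""
    (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) = track[-3:]
    v1 = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
    v2 = math.hypot(x2 - x1, y2 - y1) / (t2 - t1)
    heading = math.atan2(y2 - y1, x2 - x1)   # radians, CCW from +x axis
    accel = (v2 - v1) / (t2 - t1)
    return v2, heading, accel

def predict_first_position(track, t_query):
    """Extrapolate the position at t_query from the last track point and
    the current speed/heading (constant-velocity assumption)."""
    speed, heading, _accel = kinematics(track)
    t_last, x_last, y_last = track[-1]
    dt = t_query - t_last
    return (x_last + speed * math.cos(heading) * dt,
            y_last + speed * math.sin(heading) * dt)

# Uniform motion along +x at 5 m/s; at t=4 the predicted position is (20, 0).
px, py = predict_first_position([(0, 0, 0), (1, 5, 0), (2, 10, 0)], 4.0)
```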
In some embodiments, the image to be identified may contain other moving objects whose image features are similar to those of the target object. Based on this, at least one candidate object may be identified from the image to be identified according to the image characteristics of the target object, and the pixel coordinates of the at least one candidate object in the image to be identified may be converted into coordinates under the preset coordinate system according to the homography matrix of the image acquisition device corresponding to the image to be identified. Further, whether the target object exists in the at least one candidate object may be determined according to the coordinates of the first position under the preset coordinate system and the coordinates of the at least one candidate object under the preset coordinate system.
Further, the distance between each of the at least one candidate object and the first position may be calculated based on the coordinates of the first position under the preset coordinate system and the coordinates of the at least one candidate object under the preset coordinate system. If the distance between any candidate object and the first position is less than or equal to the preset distance threshold, it is determined that the target object exists in the at least one candidate object. Correspondingly, if the distances between all of the at least one candidate object and the first position are larger than the preset distance threshold, it is determined that the target object does not exist in the at least one candidate object.
Further, in the case where a target object exists among the at least one candidate object, a candidate object having the smallest distance from the first position may be selected as the target object from among the at least one candidate object.
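The threshold check and nearest-candidate selection described above can be sketched as (illustrative; the names and sample coordinates are hypothetical, and at least one candidate is assumed):

```python
import math

def select_target(candidates, first_position, threshold):
    """Return the candidate-object coordinate nearest the predicted
    first position if it lies within the distance threshold; otherwise
    return None, meaning the target object is absent. All coordinates
    are in the preset coordinate system; assumes candidates is non-empty."""
    best = min(candidates, key=lambda c: math.dist(c, first_position))
    return best if math.dist(best, first_position) <= threshold else None

candidates = [(102.0, 49.0), (180.0, 60.0), (95.0, 120.0)]
target = select_target(candidates, (100.0, 50.0), 10.0)  # -> (102.0, 49.0)
```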
For the above-mentioned historical movement track of the target object in the historical time period, in some embodiments, according to the homography matrices of the image acquisition devices corresponding to the plurality of historical images, the pixel coordinates of the target object in the plurality of historical images are converted into coordinates of the historical track points passed by the target object under the preset coordinate system, and these coordinates are taken as the positions of the historical track points. Further, the time stamps of the plurality of historical images are taken as the time stamps at which the target object passes through the respective historical track points, and the historical movement track of the target object is generated from these time stamps and positions. The homography matrix of an image acquisition device is a conversion matrix between the pixel coordinates of an image acquired by that device and the preset coordinate system, and may be calculated from the pose and the internal and external parameters of the image acquisition device. The pose of the image acquisition device comprises its position coordinates and attitude angle under the preset coordinate system, which are determined by the installation position and installation height of the device.
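The homography derived from the pose and the internal and external parameters can be sketched for the common ground-plane case (illustrative; the intrinsic matrix and pose below are hypothetical sample values): for world points on the plane z = 0, K [R | t] [x, y, 0, 1]^T = K [r1 r2 t] [x, y, 1]^T, so H = K [r1 r2 t] maps ground coordinates to pixels, and its inverse performs the pixel-to-ground conversion used for the track points.

```python
import numpy as np

def ground_plane_homography(K, R, t):
    """Homography mapping ground-plane points (z = 0, homogeneous
    [x, y, 1]) to pixel coordinates, built from the intrinsic matrix K
    and the extrinsic rotation R and translation t."""
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return H / H[2, 2]                 # normalize so H[2, 2] == 1

K = np.array([[800.0, 0.0, 320.0],     # hypothetical focal length and
              [0.0, 800.0, 240.0],     # principal point
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # camera axes aligned with world axes
t = np.array([0.0, 0.0, 10.0])         # camera 10 units from the plane
H = ground_plane_homography(K, R, t)
# np.linalg.inv(H) then maps pixels back to the ground plane.
```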
In practical application, because the monitoring fields of view of the image capturing devices may not overlap, especially when the plurality of historical images are captured both by the target image capturing device and by other image capturing devices arranged in front of it, the historical movement track may be discontinuous, and a large error may exist between the subsequently predicted first position and the position to which the target object has actually moved. Based on this, GIS track information of the target object in the historical time period can be obtained, and the historical movement track of the target object in the historical time period can be corrected by using the GIS track information. Then, whether the image to be identified contains the target object is determined according to the corrected historical movement track. Further, since the GIS track information is determined based on the world coordinate system, the preset coordinate system may be the world coordinate system in order to reduce the number of coordinate conversions.
Further, some designated areas may be provided with a scene monitoring radar above the ground. The scene monitoring radar can monitor the activity of moving objects in the designated area and provide their GIS track information. Based on this, in some embodiments, the GIS track information of the target object in the historical time period may be extracted, according to its time stamps, from the GIS track information of the target object sent by the scene monitoring radar, and used to correct the historical movement track of the target object in the historical time period.
For some moving objects, an ADS-B device may be installed on the object. The ADS-B device can automatically acquire information such as the GIS track information, altitude, speed, heading and identification information of the object, and broadcast this information to other objects or to ground stations. Based on this, in some embodiments, the GIS track information of the target object in the historical time period may be extracted, according to its time stamps, from the GIS track information sent by the ADS-B device on the target object, and used to correct the historical movement track of the target object in the historical time period.
In other embodiments, to improve the accuracy of the generated historical movement track of the target object in the historical time period, the GIS track information of the target object in the historical time period sent by the ADS-B device on the target object may be combined with that detected by the scene monitoring radar, and the combination used to correct the historical movement track of the target object in the historical time period.
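The correction step can be sketched as matching each image-derived track point to the GIS track point nearest in time and blending the two positions. The time window and the blending weight below are illustrative assumptions; the patent does not prescribe a particular fusion rule:

```python
def correct_track(image_track, gis_points, max_dt=1.0, w_gis=0.7):
    """Correct an image-derived historical track with GIS track points
    (from the scene monitoring radar and/or ADS-B device) whose time
    stamps fall within max_dt seconds of a track point. Both inputs are
    lists of (timestamp, x, y) tuples in the same coordinate system;
    w_gis is a hypothetical weight given to the GIS position."""
    corrected = []
    for t, x, y in image_track:
        # GIS point nearest in time to this track point
        near = min(gis_points, key=lambda g: abs(g[0] - t), default=None)
        if near is not None and abs(near[0] - t) <= max_dt:
            x = (1 - w_gis) * x + w_gis * near[1]
            y = (1 - w_gis) * y + w_gis * near[2]
        corrected.append((t, x, y))
    return corrected
```

Track points with no temporally close GIS observation are left unchanged, which also covers the case where the ADS-B device is switched off or the radar signal is blocked.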
In the embodiment of the application, the scene monitoring radar and the cameras are existing facilities in the designated area, and the ADS-B device is existing equipment on the target object, so correcting the historical movement track of the target object in the historical time period requires no additional positioning equipment, that is, no additional positioning cost.
On the other hand, owing to the differing operating habits of the operators of the target object, the ADS-B device may be switched off in the designated area, in which case it cannot acquire GIS information of the target object there. The scene monitoring radar in the designated area can also position the target object, but its signals are sometimes blocked, making its positioning inaccurate. Therefore, in this embodiment, when determining the historical movement track of the target object in the historical time period, the GIS information of the target object acquired by the ADS-B device and/or the scene monitoring radar is fused with the historical movement track determined from the images acquired by the image acquisition devices, improving the accuracy of the determined historical movement track. In the embodiment of the application, for the images containing the target object acquired by the reference image acquisition device, the server device may acquire the identification information of the target object from the designated-area management department and/or the ADS-B device on the target object. Then, starting from the images containing the target object acquired by the reference image acquisition device, the images containing the target object are determined in turn from the images acquired by the image acquisition devices after the reference image acquisition device, so as to obtain the images containing the target object acquired by the plurality of image acquisition devices in the designated area.
Further, the identification information of the target object can be added as an add-on tag to the area where the target object is located in the images containing the target object acquired by the plurality of image acquisition devices, and a video abstract of the target object can be generated from the images carrying the add-on tag. In this way, when the video corresponding to the target object needs to be checked, the video abstract of the target object can be retrieved simply by entering the identification information of the target object. For example, in an airport application scenario, airport management personnel may optimize the aircraft scheduling process based on the video abstract of the target aircraft, or visually check the condition of the target aircraft from its video abstract, providing a basis for investigating delays or other faults of the target aircraft.
Optionally, when generating the video abstract of the target object, the video abstract may be generated in chronological order according to the time stamps of the images containing the target object acquired by each image acquisition device.
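This retrieval-by-identifier scheme can be sketched minimally, assuming each tagged frame is a record with hypothetical keys "timestamp", "tag" (the add-on tag) and "image":

```python
def build_video_summary(tagged_frames, target_id):
    """Assemble a video abstract for one target object: keep only the
    frames whose add-on tag matches the target's identification
    information, then order them chronologically across all image
    acquisition devices so the abstract can later be retrieved simply
    by entering that identifier."""
    frames = [f for f in tagged_frames if f["tag"] == target_id]
    frames.sort(key=lambda f: f["timestamp"])
    return {target_id: frames}
```

Encoding the ordered frames into an actual video stream is left to standard tooling; the sketch only shows the tag-filtering and chronological ordering described above.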
In the embodiment of the present application, the designated area may be an airport, and correspondingly the target object is a target aircraft. Optionally, the movement tracks of the other aircraft currently in the airport can be obtained, a navigation path for the target aircraft can be planned according to those tracks, and the target aircraft can be guided to move along the planned navigation path. In this way, collisions between aircraft moving in the airport can be prevented. The movement tracks of the other aircraft in the airport may be determined in the same manner as the movement track of the target aircraft in the above embodiments.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the image processing method described above.
Fig. 3a is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 3a, the method includes:
301. Acquiring an image to be identified, wherein the image to be identified is acquired by a first image acquisition device arranged in a designated area.
302. Acquiring a plurality of historical images containing the target object, acquired by at least one second image acquisition device while the target object moves in the designated area; the second image acquisition device is arranged in the designated area and located in front of the first image acquisition device.
303. Determining the historical movement track of the target object according to the plurality of historical images.
304. Determining whether the image to be identified contains the target object according to the historical movement track.
In this embodiment, the designated area may be any physical location where a plurality of image capturing devices are deployed to capture images of a moving object. For example, the designated area may be a railway station, a passenger station, a port, a dock, a parking lot, a warehouse, etc., and the image capturing device may be a visual sensor such as a camera, a laser sensor or an infrared sensor, but is not limited thereto. For the manner of acquiring the images containing the target object from the images acquired by the reference image acquisition device in the designated area, and for the description of the reference image acquisition device, reference may be made to the related content of the above embodiments.
This embodiment focuses on the processing of an image to be recognized acquired by an image acquisition device other than the reference image acquisition device in the designated area. For convenience of description and distinction, the image capturing device that captures the image to be recognized is defined as the first image capturing device; that is, the first image capturing device is any image capturing device arranged in the designated area other than the reference image capturing device.
In step 301, the image to be identified acquired by the first image acquisition device is first acquired. Optionally, the first image acquisition device may send the image to be identified to the server device online, or the server device may read the image to be identified from a storage medium of the first image acquisition device. Next, in step 302, a plurality of historical images containing the target object, acquired before the image to be identified, are obtained from the images acquired by at least one second image acquisition device. A second image acquisition device is any other image acquisition device arranged in the designated area and located in front of the first image acquisition device, and there may be one or more second image acquisition devices. In this embodiment, "a plurality" means two or more. Preferably, the second image capturing device is the image capturing device located immediately before, and adjacent to, the first image capturing device.
Further, the plurality of historical images may be the M historical images, among the images captured by the image capturing device located before and adjacent to the first image capturing device, whose capture times are closest to the capture time of the image to be recognized, where M ≥ 2 and M is an integer. For the explanation of an image capturing device being in front of the first image capturing device, reference may be made to the relevant content of the above embodiments, which is not repeated here.
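The selection of the M nearest historical images can be sketched as follows, assuming the candidate frames from the adjacent preceding device are available as (timestamp, image) pairs (an illustrative data layout):

```python
def select_history_images(candidates, t_query, m=2):
    """Select the M historical images, from the image capturing device
    located before and adjacent to the first device, whose capture
    times are closest to (and earlier than) the capture time t_query
    of the image to be recognized."""
    earlier = [c for c in candidates if c[0] < t_query]
    earlier.sort(key=lambda c: t_query - c[0])  # nearest in time first
    return earlier[:m]
```

The returned frames are the plurality of historical images used in steps 302 and 303.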
Further, in step 303, the historical movement track of the target object over a historical time period may be determined from the plurality of historical images, the historical time period being the time period over which the plurality of historical images were collected. Further, in step 304, whether the image to be identified contains the target object may be determined according to the historical movement track of the target object.
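The prediction underlying step 304 can be sketched as a constant-velocity extrapolation from the last two track points; this is a simplifying assumption for illustration, as the patent does not fix a particular motion model:

```python
def predict_position(track, t_query):
    """Extrapolate the target object's position at time t_query from
    the last two historical track points (timestamp, x, y), assuming
    constant velocity. The predicted position can then be compared
    against detections in the image to be identified (step 304)."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * (t_query - t1), y1 + vy * (t_query - t1))
```

A detection near the predicted position (within some tolerance) would indicate that the image to be identified contains the target object.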
In this embodiment, the historical movement track of the target object is determined from a plurality of historical images containing the target object provided by other image acquisition devices arranged in front of the image acquisition device corresponding to the image to be identified, and whether the image to be identified contains the target object is determined according to that track. Images containing the target object can thus be screened out, realizing cross-camera identification and tracking of the target object and facilitating later verification of the target object's condition. For example, the traveling condition of the target object in an airport may be verified based on images containing the target object acquired by a plurality of image acquisition devices in the airport, and so on.
If the second image capturing device is the reference image capturing device disposed at the entrance of the designated area, an alternative embodiment of step 302 is: calculating the time at which the target object enters the designated-area entrance according to the kinematic parameters of the target object before entering the entrance, sent by the ADS-B device on the target object; and acquiring the plurality of historical images from the images acquired by the second image acquisition device according to that entry time. Where the second image capturing device is not the reference image capturing device, the specific implementation of step 302 may refer to the relevant content of the above embodiments, which is not repeated here.
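The entry-time calculation in this alternative embodiment can be sketched under a uniform-acceleration assumption; the distance to the entrance would come from the broadcast GIS information, and all names below are illustrative:

```python
def estimate_entry_time(t_report, distance_m, speed_mps, accel_mps2=0.0):
    """Estimate when the target object reaches the designated-area
    entrance from ADS-B kinematic parameters reported at t_report:
    remaining distance to the entrance, current speed, and optional
    acceleration. Solves d = v*t + (a/2)*t^2 for t."""
    if abs(accel_mps2) < 1e-9:
        dt = distance_m / speed_mps           # constant-speed case
    else:
        # positive root of (a/2)*t^2 + v*t - d = 0
        disc = speed_mps ** 2 + 2 * accel_mps2 * distance_m
        dt = (-speed_mps + disc ** 0.5) / accel_mps2
    return t_report + dt
```

The estimated time is then used to window the images acquired by the entrance device when searching for frames containing the target object.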
It should be noted that, for the specific implementation of steps 303 and 304, reference may be made to the relevant content of the above embodiments, which are not described herein again.
In the embodiment of the application, for the images containing the target object acquired by the reference image acquisition device, the server device may acquire the identification information of the target object from the designated-area management department and/or the ADS-B device on the target object. Then, starting from the images containing the target object acquired by the reference image acquisition device, the images containing the target object are determined in turn from the images acquired by the image acquisition devices after the reference image acquisition device, so as to obtain the images containing the target object acquired by the plurality of image acquisition devices in the designated area. Further, the identification information of the target object can be added as an add-on tag to the area where the target object is located in those images, and a video abstract of the target object can be generated from the images carrying the add-on tag. In this way, when the video corresponding to the target object needs to be checked, the video abstract can be retrieved simply by entering the identification information of the target object. For example, in an airport application scenario, airport management personnel may optimize the aircraft scheduling process based on the video abstract of the target aircraft, or visually check the condition of the target aircraft from its video abstract, providing a basis for investigating delays or other faults of the target aircraft.
Optionally, when generating the video abstract of the target object, the video abstract may be generated in chronological order according to the time stamps of the images containing the target object acquired by each image acquisition device.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the image processing method described above.
Fig. 3b is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 3b, the method includes:
s301, acquiring an image to be identified, which is acquired by target image acquisition equipment in a designated area.
S302, acquiring a historical movement track of the target object before the image to be identified.
S303, determining whether the image to be identified contains the target object according to the historical movement track.
S304, in the case that the image to be identified contains the target object, generating a video abstract of the target object in the designated area according to the image to be identified and other images containing the target object.
In this embodiment, the descriptions of steps S301 to S303 can be referred to the relevant content of the above embodiments, and will not be repeated here.
Further, in this embodiment, a video abstract of the target object within the designated area may be generated using the images containing the target object. If the image to be identified contains the target object, the video abstract of the target object in the designated area can be generated from the image to be identified together with the other images containing the target object; correspondingly, if the image to be identified does not contain the target object, the video abstract is generated from the other images containing the target object alone. The other images containing the target object are the images containing the target object acquired by the image acquisition devices in the designated area. Subsequent actions can then be taken according to the video abstract. For example, a manager of the designated area may optimize the scheduling of moving objects in the designated area according to the video abstracts of the target object and of other objects in the area, or the condition of the target object in the designated area can be visually checked according to its video abstract, providing a basis for fault verification of the target object, and so on.
Optionally, the identification information of the target object may be added as an add-on tag to the target images, where a target image is an image containing the target object acquired by any image acquisition device in the designated area. Alternatively, the add-on tag may be added to the region of the target image where the target object is located.
Further, a video abstract of the target object in the designated area can be generated from the target images carrying the add-on tag. In this way, when the video corresponding to the target object needs to be checked, the video abstract can be retrieved simply by entering the identification information of the target object. For example, in an airport application scenario, airport management personnel may optimize the aircraft scheduling process based on the video abstract of the target aircraft, or visually check the condition of the target aircraft from its video abstract, providing a basis for investigating delays or other faults of the target aircraft. If the image to be identified contains the target object, the target images include the image to be identified; if the image to be identified does not contain the target object, the target images do not include the image to be identified.
Optionally, when generating the video abstract of the target object, the video abstract may be generated in chronological order according to the time stamps of the images containing the target object acquired by each image acquisition device. For the specific implementation of generating the video abstract, reference may be made to the related content of the foregoing embodiments, or it may be generated using prior art in the field, which is not described here again.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the image processing method described above.
It should be noted that, the execution subjects of each step of the method provided in the above embodiment may be the same device, or the method may also be executed by different devices. For example, the execution subject of steps 201 and 202 may be device a; for another example, the execution body of step 201 may be device a, and the execution body of step 202 may be device B; etc.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations appearing in a specific order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel. Sequence numbers such as 201 and 202 are merely used to distinguish the operations and do not by themselves represent any order of execution. The flows may also include more or fewer operations, and the operations may be performed sequentially or in parallel.
The image processing method provided by the embodiments of the application is applicable not only to the airport scene embodiment but also to any scene in which images of a moving object are acquired during its movement, such as a railway station, a passenger station, a port or a wharf, where the image acquisition devices are used to monitor vehicles or ships. Based on this, the embodiments of the application also provide a monitoring system for monitoring moving objects in a designated area, where the designated area may be any physical location in which a plurality of image capturing devices are deployed to capture images of a moving object. For example, the designated area may be a railway station, a passenger station, a port, a dock, a parking lot, a warehouse, etc., and the image capturing device may be a visual sensor such as a camera, a laser sensor or an infrared sensor, but is not limited thereto. The following illustrates the monitoring system provided in an embodiment of the present application, which is applicable to any such physical location.
Fig. 4 is a schematic structural diagram of a monitoring system according to an embodiment of the present application. As shown in fig. 4, the system includes: a server device 40a and a plurality of image capture devices 40b disposed within the designated area. The structure of the designated area, the positions and number of the image capturing devices in it, and the implementation form of the image capturing devices are merely exemplary and not limiting. For the implementation of the server device 40a and the image capturing devices 40b and the communication between them, reference may be made to the relevant content of the airport monitoring system, which is not described here.
In this embodiment, the plurality of image capturing devices 40b may capture images within the designated area. The processing here mainly concerns images of moving objects appearing in the designated area acquired by the image acquisition devices 40b, so the following description focuses on how the images of the moving objects acquired by the plurality of image acquisition devices 40b are processed.
In this embodiment, the plurality of image capturing devices 40b may capture images of each moving object within the designated area, including images of the object while it moves within the area. In practical applications, because the distance between an image capturing device in the designated area and a moving object is relatively large and the object moves relatively fast, the images of the moving object captured by the image capturing devices 40b do not show the object's identification information; in the prior art, therefore, the server device 40a cannot identify a moving object of interest from the images captured by the image capturing devices 40b. In the embodiments of the application, for convenience of description and distinction, the moving object of interest is defined as the target object. Different application scenes have different target objects and different identification information. For example, in the airport application scenario described above, the target object is a target aircraft, and its identification information may be the aircraft number, flight number, etc. of the target aircraft, but is not limited thereto. For application scenes of passenger stations, bus stations and parking lots, the target object is a target vehicle, and its identification information is the vehicle's license plate number or the like. For application scenes of wharfs and ports, the target object is a target ship, and its identification information may be a ship identification number or the like, but is not limited thereto.
In practical applications, if the target object enters the designated area, the designated-area management department notifies the server device 40a of the identification information, entry time and entry point of the target object. For example, the management of a passenger station may inform the server device 40a of the license plate number, entry time and entrance of the target vehicle. Based on this, the server device 40a acquires images containing the target object from the images acquired by the image acquisition device disposed at the target object's entry point, according to the entry time and the image characteristics of the target object. In this process, the server device 40a may also acquire the identification information of the target object.
In addition, in some embodiments, the ADS-B device on the target object may send the identification information of the target object together with its kinematic parameters and GIS information to the server device 40a before the target object enters the designated area. The kinematic parameters of the target object before entering the designated area include at least one of its movement speed, traveling direction and acceleration. The server device 40a may calculate the time at which the target object enters the designated area from these kinematic parameters and the GIS information. Further, the server device 40a may acquire images containing the target object from the images acquired by the image acquisition devices disposed at the entrance of the designated area, according to this entry time and the image characteristics of the target object. In this process, the server device 40a may also acquire the identification information of the target object.
In some application scenarios, the designated-area entrance is provided with an object recognition apparatus. When a moving object enters the entrance, the object recognition apparatus can acquire the entry time and identification information of the moving object. Accordingly, when the target object enters the entrance, the object recognition apparatus acquires the identification information and entry time of the target object and sends them to the server device 40a. Further, the server device 40a may acquire images containing the target object from the images acquired by the image acquisition devices disposed at the entrance, according to the entry time and the image characteristics of the target object. For example, the entrance of a passenger station may be provided with a vehicle identification device; when the target vehicle enters, the vehicle identification device acquires the license plate number and entry time of the target vehicle and sends them to the server device 40a.
Similarly, in the stage of the target object exiting the designated area, the designated-area management department supplies the identification information of the target object, its stop position, and the start time at which it exits the area to the server device 40a. Based on this, the server device 40a can acquire images containing the target object from the images acquired by the image acquisition device whose acquisition field of view covers the stop position of the target object, according to the exit start time and the stop position.
However, image capturing devices other than those at the two specific positions above cannot recognize in this manner whether their captured images contain the target object. In the embodiments of the application, the image acquisition device arranged at the entrance of the designated area and the image acquisition device whose acquisition field of view covers the stop position of the target object are collectively defined as reference image acquisition devices. If the target object is in the stage of entering the designated area, the reference image acquisition device is the one arranged at the entrance; if the target object is in the stage of exiting the designated area, the reference image acquisition device is the one whose field of view covers the target object's stop position.
The following exemplarily describes how the server device 40a processes an image to be recognized, taking the image to be recognized acquired by any image acquisition device other than the reference image acquisition devices as an example. For convenience of description and distinction, the image capturing device that captures the image to be recognized is defined as the target image capturing device; that is, the target image capturing device is any image capturing device arranged in the designated area other than a reference image capturing device.
In this embodiment, the server device 40a may acquire a plurality of historical images containing the target object acquired before the image to be identified. The plurality of historical images may be acquired by the target image acquisition device, by other image acquisition devices arranged in front of it, or by both. Preferably, the plurality of historical images are the M historical images whose acquisition times are closest to the acquisition time of the image to be identified, where M ≥ 2 and M is an integer whose specific value can be flexibly set without limitation. It should be noted that, in the embodiments of the application, the other image capturing devices in front of the target image capturing device are the other image capturing devices through which the target object passes before passing the target image capturing device this time.
Further, the server device 40a may determine the historical movement track of the target object in a historical time period according to the plurality of historical images, the historical time period being the time period over which the plurality of historical images were collected. For example, if the collection time period of the plurality of historical images is 13:00-13:05 on July 12, 2019, the historical time period is 13:00-13:05 on July 12, 2019. Further, the server device 40a may determine whether the image to be identified contains the target object according to the historical movement track of the target object.
It should be noted that, for the specific implementation of determining, by the server device 40a, the historical movement track of the target object in the historical time period according to the plurality of historical images and determining whether the image to be identified includes the target object according to the historical movement track, reference may be made to the related content of the above embodiment, which is not described herein again.
The monitoring system provided by this embodiment can determine the historical movement track of the target object according to a plurality of historical images that contain the target object and were acquired before the image to be identified, and can determine whether the image to be identified contains the target object according to that historical movement track. In this way, images containing the target object can be screened out, identification of the target object is achieved, and subsequent verification of the condition of the target object is facilitated. For example, the traveling condition of the target object in an airport may be verified based on the images containing the target object acquired by a plurality of image acquisition devices in the airport, and so on.
On the other hand, if the plurality of historical images are acquired by other image acquisition devices arranged in front of the target image acquisition device, or by both the target image acquisition device and those other image acquisition devices, cross-camera identification and tracking of the target object can be realized by the image processing method provided by the embodiments of the present application.
In the embodiments of the present application, for the image containing the target object acquired by the reference image acquisition device, the server device 40a may acquire the identification information of the target object from the designated-area management department and/or the ADS-B device on the target object. Then, the server device 40a sequentially determines the images containing the target object from the images acquired by the image acquisition devices behind the reference image acquisition device, according to the image containing the target object acquired by the reference image acquisition device, and thereby obtains the images containing the target object acquired by the plurality of image acquisition devices in the designated area. Further, the server device 40a may add the identification information of the target object as an external tag to the area where the target object is located in those images, and generate a video abstract of the target object according to the images carrying the external tag. Therefore, when the video corresponding to the target object needs to be checked, the video abstract of the target object can be retrieved simply by inputting the identification information of the target object. For example, in an airport application scenario, airport management personnel may optimize the aircraft scheduling process based on the video abstract of a target aircraft, or visually check the condition of the target aircraft according to its video abstract, thereby providing a basis for investigating delays or other faults of the target aircraft.
Optionally, when generating the video abstract of the target object, the images containing the target object acquired by each image acquisition device may be ordered by their timestamps, and the video abstract generated from this chronological sequence.
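A minimal sketch of this chronological assembly, assuming each tagged frame is represented as a `(timestamp, camera_id, frame_id)` tuple (a hypothetical representation, not the embodiment's actual data format):

```python
def build_video_summary(tagged_frames):
    """Order the frames containing the target object by timestamp, so the
    summary plays the target's appearances across cameras in time order."""
    return sorted(tagged_frames, key=lambda f: f[0])

frames = [(3.0, "cam2", 7), (1.0, "cam1", 2), (2.0, "cam1", 5)]
summary = build_video_summary(frames)
```

Frames from different cameras interleave naturally because the sort key is the acquisition timestamp alone.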
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 5, the computer device includes: a memory 50a and a processor 50b. Wherein the memory 50a is for storing a computer program.
The processor 50b is coupled to the memory 50a for executing a computer program for: acquiring an image to be identified acquired by target image acquisition equipment in a designated area; acquiring a plurality of historical images acquired before an image to be identified; the plurality of historical images are images containing the target object, wherein the images are acquired in the process that the target object moves in a designated area; according to the plurality of historical images, determining a historical movement track of the target object in a historical time period; and determining whether the image to be identified contains the target object according to the historical movement track.
Optionally, the plurality of historical images are the M historical images whose acquisition times are closest to the acquisition time of the image to be identified, where M ≥ 2 and M is an integer.
Further, the plurality of historical images are acquired by at least one other image acquisition device disposed within the designated area and located in front of the target image acquisition device.
In some embodiments, the processor 50b is specifically configured to, when determining whether the target object is included in the image to be identified: predict, according to the historical movement track, a first position to which the target object moves when the image to be identified is acquired; and determine whether the image to be identified contains the target object according to the image features of the target object and the first position.
Further, the processor 50b is specifically configured to, when predicting a first position to which the target object moves when the image to be identified is acquired: from the historical movement track, acquiring a time stamp of the target object passing through each historical track point and the position of each historical track point; calculating the kinematic parameters of the target object in the historical time period according to the time stamp of the target object passing through each historical track point and the position of each historical track point; and predicting a first position to which the target object moves when the image to be identified is acquired according to the kinematic parameters of the target object in the historical time period and the position of at least one historical track point in each historical track point.
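The prediction step described above can be sketched as follows, under the simplifying assumption of a constant average velocity over the historical time period; the track-point representation `(timestamp, x, y)` is hypothetical, introduced only for this example.

```python
def predict_position(track, t_query):
    """track: time-ordered list of (timestamp, x, y) historical track points.
    Estimate the average velocity over the historical period from the first
    and last track points, then linearly extrapolate to the query time."""
    (t0, x0, y0), (tn, xn, yn) = track[0], track[-1]
    vx = (xn - x0) / (tn - t0)  # average velocity components (kinematic parameters)
    vy = (yn - y0) / (tn - t0)
    dt = t_query - tn           # time elapsed since the last track point
    return xn + vx * dt, yn + vy * dt

# target moving at a constant 2 units/s along x
track = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (2.0, 4.0, 0.0)]
x, y = predict_position(track, t_query=3.0)
```

A real implementation could fit higher-order motion models; the two-point average velocity is the simplest instance of the kinematic-parameter approach described above.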
Accordingly, the processor 50b is specifically configured to, when determining whether the image to be identified includes the target object: identifying at least one candidate object from the image to be identified according to the image characteristics of the target object; according to a homography matrix of image acquisition equipment corresponding to the image to be identified, converting pixel coordinates of at least one candidate object in the image to be identified into coordinates of at least one candidate object under a preset coordinate system; and determining whether the target object exists in the at least one candidate object according to the coordinates of the first position under the preset coordinate system and the coordinates of the at least one candidate object under the preset coordinate system.
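The pixel-to-plane conversion above can be illustrated with a plain homography application; the matrix below is a toy calibration, not a real camera's homography.

```python
import numpy as np

def pixel_to_plane(H, pixel_xy):
    """Map pixel coordinates (u, v) into the preset coordinate system
    via the 3x3 homography H, with the usual homogeneous divide."""
    u, v = pixel_xy
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

H = np.array([[2.0, 0.0,  1.0],
              [0.0, 2.0, -1.0],
              [0.0, 0.0,  1.0]])  # toy calibration matrix
x, y = pixel_to_plane(H, (3.0, 4.0))
```

Each candidate object detected in the image to be identified would be passed through its acquiring camera's homography in the same way before the distance comparison.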
Further, the processor 50b is specifically configured to, when determining whether the target object exists among the at least one candidate object: calculate the distance between each of the at least one candidate object and the first position according to the coordinates of the first position and of the at least one candidate object under the preset coordinate system; if any of these distances is less than or equal to a preset distance threshold, determine that the target object exists among the at least one candidate object; correspondingly, if the distances between the at least one candidate object and the first position are all greater than the preset distance threshold, determine that the target object does not exist among the at least one candidate object.
Further, in the event that a target object is present in the at least one candidate object, the processor 50b is further configured to: from the at least one candidate object, the candidate object having the smallest distance to the first position is selected as the target object.
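A minimal sketch of the threshold test and nearest-candidate selection described above, with hypothetical names and toy coordinates:

```python
import math

def match_target(candidates, predicted, threshold):
    """candidates: list of (x, y) positions in the preset coordinate system.
    Return the index of the candidate nearest the predicted first position
    if its distance is within the threshold; otherwise None (target absent)."""
    best_i, best_d = None, float("inf")
    for i, (x, y) in enumerate(candidates):
        d = math.hypot(x - predicted[0], y - predicted[1])
        if d < best_d:
            best_i, best_d = i, d
    return best_i if best_d <= threshold else None

# second candidate lies 1.0 unit from the predicted position, within threshold 2.0
idx = match_target([(0.0, 0.0), (5.0, 5.0)], predicted=(4.0, 5.0), threshold=2.0)
```

Returning the minimum-distance candidate implements the "smallest distance to the first position" selection; returning `None` corresponds to the all-distances-above-threshold case.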
In other embodiments, the processor 50b is specifically configured to, when determining the historical movement track of the target object: according to homography matrix of image acquisition equipment corresponding to each of the plurality of historical images, converting pixel coordinates of a target object in the plurality of historical images into coordinates of each historical track point passed by the target object under a preset coordinate system, and taking the coordinates as positions of each historical track point; taking the time stamps of the historical images as the time stamps of the target object passing through each historical track point; and generating a historical movement track of the target object according to the time stamp of the target object passing through each historical track point and the position of each historical track point.
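The trajectory construction can be sketched by mapping each per-image detection through its camera's homography and sorting by timestamp; the `(timestamp, camera_id, u, v)` detection layout is an assumption for illustration.

```python
import numpy as np

def build_track(detections, homographies):
    """detections: list of (timestamp, camera_id, u, v) pixel detections of
    the target in the historical images; homographies: camera_id -> 3x3 H.
    Project each detection onto the preset coordinate system and return the
    time-ordered list of (timestamp, x, y) track points."""
    track = []
    for t, cam, u, v in detections:
        p = homographies[cam] @ np.array([u, v, 1.0])
        track.append((t, p[0] / p[2], p[1] / p[2]))
    track.sort(key=lambda pt: pt[0])  # chronological order of track points
    return track

H = {"camA": np.eye(3)}  # identity homography for a toy camera
detections = [(2.0, "camA", 1.0, 1.0), (1.0, "camA", 0.0, 0.0)]
track = build_track(detections, H)
```

The image timestamps double as the track-point timestamps, matching the description above.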
Further, the processor 50b is configured to, prior to determining whether the target object is included in the image to be identified: acquiring GIS track information of a target object in a historical time period; and correcting the historical movement track according to the GIS track information.
The processor 50b is specifically configured to perform at least one of the following operations when acquiring the GIS track information of the target object in the historical time period: acquiring the GIS track information of the target object in the historical time period from the ADS-B device on the target object; and acquiring the GIS track information of the target object in the historical time period from a scene surveillance radar deployed in the designated area.
Further, if the target image capturing device is the first image capturing device after the reference image capturing device disposed at the entrance of the designated area, the processor 50b is specifically configured to, when acquiring a plurality of history images captured before the image to be recognized: calculating the time of the target object entering the designated area entrance according to the kinematic parameters of the target object before entering the designated area entrance, which are sent by the ADS-B equipment on the target object; and acquiring a plurality of historical images from the images acquired by the reference image acquisition device according to the time when the target object enters the designated area entrance.
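The entry-time calculation can be sketched under a constant-ground-speed assumption; the report fields and the ±margin window below are hypothetical simplifications of the kinematic parameters mentioned above.

```python
def entry_time(t_report, distance_to_entrance, ground_speed):
    """Estimate when the target crosses the designated-area entrance from an
    ADS-B report: report timestamp, remaining distance, and ground speed
    (constant-speed assumption)."""
    return t_report + distance_to_entrance / ground_speed

def history_window(frames, t_entry, margin=5.0):
    """Select reference-camera frames (timestamp, frame_id) acquired within
    +/- margin seconds of the estimated entry time."""
    return [f for f in frames if abs(f[0] - t_entry) <= margin]

t = entry_time(t_report=100.0, distance_to_entrance=300.0, ground_speed=15.0)
window = history_window([(118.0, 1), (130.0, 2)], t_entry=t)
```

Only frames near the estimated entry time need to be searched for the target, which is the point of anchoring history retrieval to the ADS-B-derived entry time.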
In still other embodiments, the processor 50b is further configured to: add the identification information of the target object as an external tag to the area where the target object is located in each image containing the target object acquired by the image acquisition devices in the designated area; and generate a video abstract of the target object in the designated area according to the plurality of historical images carrying the external tags and the image to be identified.
In the embodiment of the present application, the designated area may be an airport, and the target object is a target aircraft.
Further, the processor 50b is configured to: acquire the movement tracks of the other aircraft currently in the airport, plan a navigation path for the target aircraft according to the movement tracks of those other aircraft, and guide the target aircraft to move along the planned navigation path.
In some alternative embodiments, as shown in fig. 5, the computer device may further include: a communication component 50c and a power supply component 50d. In some embodiments, the computer device is a computer, workstation, or the like, and may further include optional components such as a display 50e and an audio component 50f. The illustration of only a few components in fig. 5 does not mean that the computer device must contain all of the components shown in fig. 5, nor that it can contain only the components shown in fig. 5.
The computer device provided by this embodiment can determine the historical movement track of the target aircraft according to a plurality of historical images that contain the target aircraft and were acquired before the image to be identified, and can determine whether the image to be identified contains the target aircraft according to that historical movement track. In this way, images containing the target aircraft can be screened out, identification of the target aircraft is achieved, and subsequent verification of the condition of the target aircraft is facilitated. For example, the traveling condition of the target aircraft in the airport may be verified based on the images containing the target aircraft acquired by a plurality of cameras in the airport, and so on.
On the other hand, if the plurality of historical images are acquired by other cameras arranged in front of the target camera, or by both the target camera and those other cameras, cross-camera identification and tracking of the target aircraft can be realized by the image processing method provided by the embodiments of the present application.
In addition, the image acquisition devices are existing facilities in the designated area, so the image processing approach provided by the embodiments of the present application does not require additional image acquisition devices to be deployed; that is, it incurs no additional image acquisition cost.
Fig. 6 is a schematic structural diagram of another computer device according to an embodiment of the present application. As shown in fig. 6, the computer device includes: a memory 60a and a processor 60b. Wherein the memory 60a is for storing a computer program.
The processor 60b is coupled to the memory 60a and executes a computer program for: acquiring an image to be identified, the image to be identified being acquired by a first image acquisition device arranged in a designated area; acquiring a plurality of historical images containing a target object, the historical images being acquired by at least one second image acquisition device while the target object moves within the designated area, wherein the second image acquisition device is arranged in the designated area and located in front of the first image acquisition device; determining a historical movement track of the target object according to the plurality of historical images; and determining whether the image to be identified contains the target object according to the historical movement track.
Optionally, the at least one second image acquisition device is an image acquisition device located before and adjacent to the first image acquisition device.
Further, if the second image acquisition device is a reference image acquisition device disposed at the entrance of the designated area, the processor 60b is specifically configured to, when acquiring the plurality of historical images containing the target object acquired by the at least one second image acquisition device while the target object moves within the designated area: calculate the time at which the target object enters the designated-area entrance according to the kinematic parameters of the target object before entering the entrance, as sent by the ADS-B device on the target object; and acquire the plurality of historical images from the images acquired by the second image acquisition device according to the time at which the target object enters the entrance.
In some embodiments, the processor 60b is further configured to: add the identification information of the target object as an external tag to the area where the target object is located in each image containing the target object acquired by the image acquisition devices in the designated area; and generate a video abstract of the target object in the designated area according to the images carrying the external tags.
It should be noted that, the specific implementation manner of determining, by the processor 60b, the historical movement track of the target object in the historical time period according to the plurality of historical images and determining whether the image to be identified includes the target object according to the historical movement track may refer to the related content of the above embodiment, which is not described herein.
In some alternative embodiments, as shown in fig. 6, the computer device may further include: a communication component 60c and a power supply component 60d. In some embodiments, the computer device is a computer, workstation, or the like, and may further include optional components such as a display 60e and an audio component 60f. The illustration of only a few components in fig. 6 does not mean that the computer device must contain all of the components shown in fig. 6, nor that it can contain only the components shown in fig. 6.
The computer device provided by this embodiment can determine the historical movement track of the target object according to a plurality of historical images containing the target object provided by other image acquisition devices arranged in front of the image acquisition device corresponding to the image to be identified, and can determine whether the image to be identified contains the target object according to that historical movement track. In this way, images containing the target object can be screened out and cross-camera identification and tracking of the target object are realized, which facilitates subsequent verification of the condition of the target object. For example, the traveling condition of the target object in an airport may be verified based on the images containing the target object acquired by a plurality of image acquisition devices in the airport, and so on.
Fig. 7 is a schematic structural diagram of still another computer device according to an embodiment of the present application. As shown in fig. 7, the computer device includes: a memory 70a and a processor 70b. The memory 70a is for storing a computer program.
The processor 70b is coupled to the memory 70a and executes a computer program for: acquiring an image to be identified acquired by a target image acquisition device in a designated area; acquiring a historical movement track of a target object prior to the image to be identified; determining whether the image to be identified contains the target object according to the historical movement track; and, if the image to be identified contains the target object, generating a video abstract of the target object in the designated area according to the image to be identified and the other images containing the target object.
Optionally, when generating the video abstract of the target object in the designated area, the processor 70b is specifically configured to: add the identification information of the target object as an external tag to each target image, where a target image is an image containing the target object acquired by an image acquisition device in the designated area; and generate the video abstract of the target object in the designated area according to the target images carrying the external tags.
Alternatively, the processor 70b may add an add-on tag to the region of the target image where the target object is located.
In some alternative embodiments, as shown in fig. 7, the computer device may further include: a communication component 70c and a power supply component 70d. In some embodiments, the computer device is a computer, workstation, or the like, and may further include optional components such as a display 70e and an audio component 70f. Only some of the components are schematically shown in fig. 7, which does not mean that the computer device must contain all the components shown in fig. 7, nor that it can contain only the components shown in fig. 7.
The computer device provided by this embodiment can determine the historical movement track of the target object according to a plurality of historical images containing the target object provided by other image acquisition devices arranged in front of the image acquisition device corresponding to the image to be identified, and can determine whether the image to be identified contains the target object according to that historical movement track. In this way, images containing the target object can be screened out and a video abstract of the target object can be generated from them, thereby realizing cross-camera identification and tracking of the target object, so that the condition of the target object can subsequently be verified using its video abstract. For example, the traveling condition of the target object in an airport may be verified based on the images containing the target object acquired by a plurality of image acquisition devices in the airport, and so on.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store various other data to support operations on the computer device. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the computer device and other devices. The computer device may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may also be implemented based on near field communication (NFC) technology, radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In embodiments of the present application, the display may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, a power supply component is configured to provide power to various components of a computer device. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the devices in which the power components are located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a speech recognition mode. The received audio signal may be further stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals. For example, for intelligent mirrors with language interaction functionality, voice interaction with a user, etc. may be accomplished through an audio component.
It should be noted that, the descriptions of "first" and "second" herein are used to distinguish different messages, devices, modules, etc., and do not represent a sequence, and are not limited to the "first" and the "second" being different types.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (25)

1. An image processing method, comprising:
acquiring an image to be identified acquired by target image acquisition equipment in a designated area;
acquiring a plurality of historical images acquired before the image to be identified; the plurality of historical images are images containing the target object, wherein the images are acquired in the process that the target object moves in the appointed area;
according to the plurality of historical images, determining a historical movement track of the target object in a historical time period;
determining whether the image to be identified contains the target object according to the historical movement track;
the identification information of the target object is used as an external tag to be added to an area where the target object is located in an image which is acquired by each image acquisition device and contains the target object in the appointed area;
and generating a video abstract of the target object in the appointed area according to the image with the external tag.
2. The method according to claim 1, wherein determining whether the target object is included in the image to be identified according to the historical movement track includes:
predicting a first position to which the target object moves when the image to be identified is acquired according to the historical movement track;
and determining whether the image to be identified contains the target object or not according to the image characteristics of the target object and the first position.
3. The method of claim 2, wherein predicting, based on the historical movement trajectory, a first location to which the target object moved when the image to be identified was acquired comprises:
acquiring a time stamp of the target object passing through each historical track point and the position of each historical track point from the historical movement track;
calculating the kinematic parameters of the target object in the historical time period according to the time stamp of the target object passing through each historical track point and the position of each historical track point;
and predicting a first position to which the target object moves when the image to be identified is acquired according to the kinematic parameters of the target object in the historical time period and the position of at least one historical track point in each historical track point.
4. The method according to claim 2, wherein determining whether the image to be identified contains the target object based on the image features of the target object and the first position comprises:
identifying at least one candidate object in the image to be identified according to the image features of the target object;
converting pixel coordinates of the at least one candidate object in the image to be identified into coordinates in a preset coordinate system, according to a homography matrix of the image acquisition device corresponding to the image to be identified;
and determining whether the target object is among the at least one candidate object according to the coordinates of the first position and of the at least one candidate object in the preset coordinate system.
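The coordinate conversion in claim 4 is a standard planar homography mapping from pixel coordinates into the preset (e.g. ground-plane) coordinate system. A sketch using NumPy, assuming each camera's 3×3 homography matrix `H` has been calibrated in advance:

```python
import numpy as np

def pixel_to_world(H, u, v):
    """Map a pixel coordinate (u, v) into the preset coordinate system
    using the camera's 3x3 homography matrix H."""
    p = H @ np.array([u, v, 1.0])    # homogeneous pixel coordinate
    return p[0] / p[2], p[1] / p[2]  # back to Cartesian coordinates
```

Because each image acquisition device has its own homography, candidates detected by different cameras all land in one shared coordinate system, which is what makes the cross-camera distance comparison of claim 5 possible.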
5. The method of claim 4, wherein determining whether the target object is among the at least one candidate object according to the coordinates of the first position and of the at least one candidate object in the preset coordinate system comprises:
calculating the distance between each candidate object and the first position according to those coordinates;
if any of the distances is smaller than or equal to a preset distance threshold, determining that the target object is among the at least one candidate object;
and if all of the distances are larger than the preset distance threshold, determining that the target object is not among the at least one candidate object.
6. The method of claim 5, wherein, in the event that the target object is among the at least one candidate object, the method further comprises:
selecting, from the at least one candidate object, the candidate object with the smallest distance to the first position as the target object.
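Claims 5 and 6 together amount to nearest-neighbour matching with a rejection threshold. A Python sketch, with candidate positions and the first position given as `(x, y)` tuples in the preset coordinate system (the function name and data layout are illustrative):

```python
import math

def match_target(candidates, first_pos, dist_threshold):
    """Return the candidate nearest to the predicted first position if it
    lies within the distance threshold (claims 5-6); None means the target
    object is absent from the candidates."""
    best, best_dist = None, float("inf")
    for cand in candidates:
        d = math.dist(cand, first_pos)       # Euclidean distance to prediction
        if d <= dist_threshold and d < best_dist:
            best, best_dist = cand, d
    return best
```

A `None` result corresponds to all distances exceeding the threshold, i.e. the target object is not among the candidates in the image to be identified.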
7. The method according to claim 3, wherein determining the historical movement track of the target object from the plurality of historical images comprises:
converting pixel coordinates of the target object in the plurality of historical images into coordinates, in the preset coordinate system, of the historical track points passed by the target object, according to homography matrices of the image acquisition devices corresponding to the historical images, and taking those coordinates as the positions of the historical track points;
taking the timestamps of the historical images as the timestamps at which the target object passed the historical track points;
and generating the historical movement track of the target object from those timestamps and positions.
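The track construction of claim 7 pairs each historical image's timestamp with the target's converted position. A sketch assuming each historical image record carries its timestamp, the target's pixel coordinate, and a per-camera pixel-to-world converter (e.g. a calibrated homography); the record layout is illustrative:

```python
def build_track(history_images):
    """Build the historical movement track as time-ordered (timestamp, x, y)
    tuples from historical image records (claim 7 sketch)."""
    track = []
    for img in history_images:
        x, y = img["to_world"](img["pixel"])    # homography-based conversion
        track.append((img["timestamp"], x, y))  # image timestamp = track-point timestamp
    track.sort(key=lambda point: point[0])      # order track points by time
    return track
```

The resulting list is exactly the input format the prediction step of claim 3 consumes.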
8. The method of claim 7, wherein, before determining whether the image to be identified contains the target object according to the historical movement track, the method further comprises:
acquiring GIS track information of the target object over the historical time period;
and correcting the historical movement track according to the GIS track information.
9. The method of claim 8, wherein acquiring the GIS track information of the target object over the historical time period comprises at least one of:
acquiring the GIS track information of the target object over the historical time period from an ADS-B device on the target object;
and acquiring the GIS track information of the target object over the historical time period from a scene surveillance radar deployed in the designated area.
10. The method according to claim 9, wherein, if the target image acquisition device is the first image acquisition device disposed after a reference image acquisition device at the entrance of the designated area, acquiring the plurality of historical images acquired before the image to be identified comprises:
calculating the time at which the target object enters the entrance of the designated area according to kinematic parameters of the target object before entering the entrance, sent by the ADS-B device on the target object;
and acquiring the plurality of historical images from the images acquired by the reference image acquisition device according to that entry time.
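The entry-time computation in claim 10 can be as simple as dead reckoning from the ADS-B-reported kinematics. A deliberately minimal sketch assuming constant ground speed over the remaining distance to the entrance (the parameter names and units are illustrative):

```python
def entry_time(t_report, distance_to_entrance, ground_speed):
    """Estimate when the target object reaches the designated-area entrance
    from a kinematic report at time t_report (claim 10 sketch)."""
    return t_report + distance_to_entrance / ground_speed
```

The estimated time is then used to window the reference camera's recordings when retrieving the historical images.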
11. The method according to any one of claims 1 to 10, wherein the plurality of historical images are the M historical images whose acquisition times are closest to the acquisition time of the image to be identified, where M is an integer and M ≥ 2.
12. The method of claim 11, wherein the plurality of historical images are acquired by at least one other image acquisition device disposed within the designated area and located in front of the target image acquisition device.
13. The method of any one of claims 1-10, wherein the designated area is an airport; the target object is a target aircraft.
14. The method as recited in claim 13, further comprising:
acquiring the current movement tracks of other aircraft in the airport;
and planning a navigation path for the target aircraft according to the current movement tracks of the other aircraft in the airport, and guiding the target aircraft to move along the navigation path.
15. An image processing method, comprising:
acquiring an image to be identified, wherein the image to be identified is acquired by a first image acquisition device arranged in a designated area;
acquiring a plurality of historical images containing a target object, wherein the historical images are acquired by at least one second image acquisition device while the target object moves within the designated area, the second image acquisition device being disposed within the designated area and located in front of the first image acquisition device;
determining a historical movement track of the target object according to the plurality of historical images;
determining whether the image to be identified contains the target object according to the historical movement track;
adding identification information of the target object as an external tag to the area where the target object is located in each image containing the target object that is acquired by the image acquisition devices in the designated area;
and generating a video summary of the target object in the designated area according to the images carrying the external tag.
16. The method of claim 15, wherein the at least one second image acquisition device is an image acquisition device located before and adjacent to the first image acquisition device.
17. The method of claim 16, wherein, if the second image acquisition device is a reference image acquisition device disposed at the entrance of the designated area, acquiring the plurality of historical images of the target object acquired by the at least one second image acquisition device while the target object moves within the designated area comprises:
calculating the time at which the target object enters the entrance of the designated area according to kinematic parameters of the target object before entering the entrance, sent by the ADS-B device on the target object;
and acquiring the plurality of historical images from the images acquired by the second image acquisition device according to that entry time.
18. An image processing method, comprising:
acquiring an image to be identified acquired by a target image acquisition device in a designated area;
acquiring a historical movement track of a target object prior to the image to be identified;
determining whether the image to be identified contains the target object according to the historical movement track;
when the image to be identified contains the target object, adding identification information of the target object as an external tag to the area where the target object is located in the image to be identified;
and generating a video summary of the target object in the designated area according to the image to be identified carrying the external tag and the other images containing the target object that carry external tags, wherein each external tag is located in the area where the target object is located in those images.
19. A monitoring system, comprising: a server device and a plurality of image acquisition devices disposed in a designated area;
the plurality of image acquisition devices are configured to acquire images in the designated area, the images containing moving objects appearing in the designated area;
the server device is configured to: acquire an image to be identified acquired by a target image acquisition device among the plurality of image acquisition devices; acquire, from the images acquired by the plurality of image acquisition devices, a plurality of historical images containing a target object acquired before the image to be identified; determine a historical movement track of the target object over a historical time period according to the plurality of historical images; and determine whether the image to be identified contains the target object according to the historical movement track;
the server device is further configured to add identification information of the target object as an external tag to the area where the target object is located in each image containing the target object that is acquired by the image acquisition devices in the designated area, and to generate a video summary of the target object in the designated area according to the images carrying the external tag.
20. An airport monitoring system, comprising: a server device and a plurality of cameras disposed in an airport;
the plurality of cameras are configured to acquire images of the aircraft in the airport;
the server device is configured to: acquire a plurality of historical images containing a target aircraft acquired before an image to be identified acquired by a target camera; determine a historical movement track of the target aircraft over a historical time period according to the plurality of historical images; and determine whether the image to be identified contains the target aircraft according to the historical movement track, wherein the target camera is any camera among the plurality of cameras other than a reference camera;
the server device is further configured to add identification information of the target aircraft as an external tag to the area where the target aircraft is located in each image containing the target aircraft that is acquired by the plurality of cameras, and to generate a video summary of the target aircraft according to the images carrying the external tag.
21. The system of claim 20, wherein the plurality of historical images are acquired by other cameras disposed in front of the target camera.
22. The system of claim 20, wherein, before determining whether the image acquired by the target camera contains the target aircraft, the server device is further configured to:
determine the historical time period corresponding to the historical movement track; acquire GIS track information of the target aircraft over the historical time period; and correct the historical movement track according to the GIS track information.
23. The system of claim 22, further comprising: an ADS-B device on the target aircraft and/or a scene surveillance radar disposed within the airport;
the ADS-B device on the target aircraft and/or the scene surveillance radar disposed within the airport are configured to transmit the detected GIS track information of the target aircraft to the server device;
when acquiring the GIS track information of the target aircraft over the historical time period, the server device is specifically configured to perform at least one of:
acquiring the GIS track information over the historical time period from the ADS-B device;
and acquiring the GIS track information over the historical time period from the scene surveillance radar.
24. A computer device, comprising: a memory and a processor; wherein the memory is configured to store a computer program;
and the processor, coupled to the memory, is configured to execute the computer program to perform the steps of the method of any one of claims 1-18.
25. A computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-18.
CN201911073091.0A 2019-11-05 2019-11-05 Image processing method, device, system and storage medium Active CN111079525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911073091.0A CN111079525B (en) 2019-11-05 2019-11-05 Image processing method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN111079525A CN111079525A (en) 2020-04-28
CN111079525B true CN111079525B (en) 2023-05-30

Family

ID=70310696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911073091.0A Active CN111079525B (en) 2019-11-05 2019-11-05 Image processing method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN111079525B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612675B (en) * 2020-05-18 2023-08-04 浙江宇视科技有限公司 Method, device, equipment and storage medium for determining peer objects
CN113282782B (en) * 2021-05-21 2022-09-09 三亚海兰寰宇海洋信息科技有限公司 Track acquisition method and device based on multi-point phase camera array
CN113095447B (en) * 2021-06-10 2021-09-07 深圳联合安防科技有限公司 Detection method and system based on image recognition

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043964A (en) * 2010-12-30 2011-05-04 复旦大学 Tracking algorithm and tracking system for taking-off and landing of aircraft based on tripod head and camera head
CN103714553A (en) * 2012-10-09 2014-04-09 杭州海康威视数字技术股份有限公司 Multi-target tracking method and apparatus
CN103927508A (en) * 2013-01-11 2014-07-16 浙江大华技术股份有限公司 Target vehicle tracking method and device
CN104424648A (en) * 2013-08-20 2015-03-18 株式会社理光 Object tracking method and device
CN105975633A (en) * 2016-06-21 2016-09-28 北京小米移动软件有限公司 Motion track obtaining method and device
CN106951871A (en) * 2017-03-24 2017-07-14 北京地平线机器人技术研发有限公司 Movement locus recognition methods, device and the electronic equipment of operating body
CN107045805A (en) * 2017-03-07 2017-08-15 安徽工程大学 A kind of monitoring method and system for small-sized aerial craft and thing drifted by wind
CN107516303A (en) * 2017-09-01 2017-12-26 成都通甲优博科技有限责任公司 Multi-object tracking method and system
CN107967298A (en) * 2017-11-03 2018-04-27 深圳辉锐天眼科技有限公司 Method for managing and monitoring based on video analysis
CN109087335A (en) * 2018-07-16 2018-12-25 腾讯科技(深圳)有限公司 A kind of face tracking method, device and storage medium
CN109635657A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Method for tracking target, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Shixuan et al., "Key Technologies for Video Detection and Tracking of Small Low-Altitude Flying Objects", Journal of Computer Applications, 2019, Vol. 39, No. S1, Section 2.3; Section 1, paragraph 3; Fig. 1. *


Similar Documents

Publication Publication Date Title
CN111079525B (en) Image processing method, device, system and storage medium
US11131990B1 (en) Method for transferring control to an operator
CN109164809B (en) Autonomous following control system and method for vehicle formation
US20180127006A1 (en) Automated wayside asset monitoring with optical imaging and visualization
CN110888456A (en) Autonomous cooperative reconnaissance control method for unmanned aerial vehicle and unmanned vehicle
US10480953B2 (en) Semi-autonomous monitoring system
US9865166B2 (en) System and method for detecting a particular occupancy status of multiple parking positions of a parking facility
CN107209518A (en) Valet parking method and valet parking system
US20200034637A1 (en) Real-Time Track Asset Recognition and Position Determination
US20180033298A1 (en) Facilitating location positioning service through a uav network
CN110687928A (en) Landing control method, system, unmanned aerial vehicle and storage medium
CN113866758B (en) Scene monitoring method, system, device and readable storage medium
JP2019114110A (en) Information collection system and server device
WO2019186591A1 (en) Method and system for automating flow of operations on airports
US20210216948A1 (en) Autonomous vehicles performing inventory management
CN113286081B (en) Target identification method, device, equipment and medium for airport panoramic video
KR200487177Y1 (en) Logistic management system with drone
US20210158128A1 (en) Method and device for determining trajectories of mobile elements
CN112445204A (en) Object movement navigation method and device in construction site and computer equipment
CN111047231A (en) Inventory method and system, computer system and computer readable storage medium
KR20170098082A (en) Logistic management system with drone
WO2022143181A1 (en) Information processing method and apparatus, and information processing system
Vitiello et al. Assessing Performance of Radar and Visual Sensing Techniques for Ground-To-Air Surveillance in Advanced Air Mobility
CN111857187B (en) T-beam construction tracking system and method based on unmanned aerial vehicle
CN116635302A (en) Image marking system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant