CN114979497A - Unmanned aerial vehicle linkage tracking method and system based on pole loading and cloud platform - Google Patents

Unmanned aerial vehicle linkage tracking method and system based on pole loading and cloud platform

Info

Publication number
CN114979497A
CN114979497A
Authority
CN
China
Prior art keywords
target object
pole
picture
aerial vehicle
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210900821.5A
Other languages
Chinese (zh)
Other versions
CN114979497B (en)
Inventor
杨翰翔
付正武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lianhe Intelligent Technology Co ltd
Original Assignee
Shenzhen Lianhe Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lianhe Intelligent Technology Co ltd filed Critical Shenzhen Lianhe Intelligent Technology Co ltd
Priority to CN202210900821.5A priority Critical patent/CN114979497B/en
Publication of CN114979497A publication Critical patent/CN114979497A/en
Application granted granted Critical
Publication of CN114979497B publication Critical patent/CN114979497B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 UAVs specially adapted for particular uses or applications
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The application provides a pole-mounted unmanned aerial vehicle linkage tracking method and system and a cloud platform. First, an area monitoring picture of the monitoring area where a smart lamp pole is located is obtained from the received image pictures of the surroundings of the smart lamp pole and the image shooting angle of the camera device; then the target object in the area monitoring picture is analyzed to obtain motion information; finally, when the target object leaves the monitoring area where the smart lamp pole is located, control parameters are generated according to the direction and the movement speed with which it leaves, so as to control the pole-mounted unmanned aerial vehicle to track the target object. In this way, when the target object moves out of the monitoring area of the camera device on the smart lamp pole, the pole-mounted unmanned aerial vehicle configured on the smart lamp pole continues to track it, which avoids the problem that the target object is easily lost because the monitoring area of the camera device is limited, expands the monitoring range of the smart lamp pole and improves its monitoring capability.

Description

Unmanned aerial vehicle linkage tracking method and system based on pole loading and cloud platform
Technical Field
The application relates to the technical field of video monitoring, and in particular to a pole-mounted unmanned aerial vehicle linkage tracking method and system and a cloud platform.
Background
A smart lamp pole integrates sensing equipment such as a 5G base station, video monitoring, environmental monitoring, information publishing and charging piles. As a front-end information acquisition terminal, it provides various kinds of urban big data to the back-end Internet-of-Things platform, and can be regarded as part of the nervous system of smart-city construction. However, because the installation positions of smart lamp poles are generally limited (for example, along the two sides of a road), areas beyond those positions cannot be covered during video monitoring, which creates monitoring blind areas and is unfavorable for tracking a target object. How to solve the technical problem of the insufficient monitoring capability of the smart lamp pole is a question for those skilled in the art.
Disclosure of Invention
In order to overcome at least the above defects in the prior art, the application aims to provide a pole-mounted unmanned aerial vehicle linkage tracking method and system and a cloud platform. First, the image pictures around a smart lamp pole shot in real time by a camera device and the image shooting angle of the camera device are obtained; then an area monitoring picture of the monitoring area where the smart lamp pole is located is obtained according to the received image pictures of the surroundings of the smart lamp pole and the image shooting angle of the camera device; then, when it is detected that a target object exists in the area monitoring picture, the target object in the area monitoring picture is analyzed to obtain motion information; finally, whether the target object leaves the monitoring area where the smart lamp pole is located is judged according to the motion information of the target object, and when the target object leaves that monitoring area, the direction and the movement speed with which it leaves are acquired and control parameters are generated from them so as to control the pole-mounted unmanned aerial vehicle to track the target object. In this way, when the target object moves out of the monitoring area of the camera device on the smart lamp pole, the pole-mounted unmanned aerial vehicle configured on the smart lamp pole continues to track it, which avoids the problem that the target object is easily lost because the monitoring area of the camera device is limited, expands the monitoring range of the smart lamp pole and improves its monitoring capability.
In a first aspect, the application provides a pole-mounted unmanned aerial vehicle linkage tracking method, which is applied to a cloud platform communicating with a camera device in a smart lamp pole and a pole-mounted unmanned aerial vehicle, and comprises the following steps:
acquiring an image picture around the intelligent lamp pole shot by the camera equipment in real time and an image shooting angle of the camera equipment;
obtaining an area monitoring picture of a monitoring area where the intelligent lamp post is located according to the received image picture of the periphery of the intelligent lamp post and the image shooting angle of the camera equipment;
detecting whether the area monitoring picture comprises a preset target object or not, and if the area monitoring picture is detected to have the target object, analyzing the target object in the area monitoring picture to obtain motion information, wherein the motion information comprises a motion track and a motion speed;
and judging whether the target object leaves the monitoring area where the intelligent lamp post is located or not according to the motion information of the target object, if so, acquiring the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp post is located, and generating control parameters according to the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp post is located so as to control the pole-mounted unmanned aerial vehicle to track the target object.
In a possible implementation manner, before the step of obtaining the image frame around the smart light pole and the image shooting angle of the image shooting device, the method further includes a step of adjusting shooting parameters of the image shooting device, where the step includes:
adjusting shooting parameters of the camera shooting equipment, wherein the shooting parameters comprise a pitch angle between the camera shooting equipment and the ground and a height between the camera shooting equipment and the ground;
obtaining a shooting visual range corresponding to the camera equipment according to the pitch angle and the height between the camera equipment and the ground;
controlling the camera shooting equipment to shoot image frames around the intelligent lamp post and image shooting angles of the camera shooting equipment, and receiving the image frames and the corresponding image shooting angles sent by the camera shooting equipment;
detecting whether an area monitoring picture of a peripheral monitoring area of the intelligent lamp pole can be obtained or not according to the image picture and the corresponding image shooting angle;
if the area monitoring picture of the intelligent lamp pole peripheral monitoring area cannot be obtained, readjusting the shooting parameters of the camera equipment, and if the area monitoring picture of the intelligent lamp pole peripheral monitoring area can be obtained, detecting whether a verification object close to the edge of the area monitoring picture exists in the area monitoring picture, wherein the verification object is an object which is arranged in the intelligent lamp pole peripheral monitoring area in advance;
if it is detected that the verification object is located at the edge of the area monitoring picture, the verification object is identified, and if the set object features of the verification object cannot be identified, the shooting parameters of the camera equipment are adjusted again; if the set object features of the verification object can be identified, the adjustment of the shooting parameters of the camera equipment is finished, wherein the verification object and the set object features are configured in the cloud platform in advance.
In a possible implementation manner, the step of analyzing the target object in the area monitoring picture to obtain motion information includes:
acquiring a previous monitoring picture and a subsequent monitoring picture which have different shooting times in the area monitoring picture;
detecting whether an object detection frame exists in the previous monitoring picture and the subsequent monitoring picture;
if the object detection frames exist in the prior monitoring picture and the subsequent monitoring picture, determining a first object detection frame in the prior monitoring picture and a second object detection frame in the subsequent monitoring picture;
similarity matching is carried out on the target object in the first object detection frame and the target object in the second object detection frame;
and if the similarity matching is successful, obtaining motion information according to the position of a first object detection frame in the previous monitoring picture, the position of a second object detection frame in the subsequent monitoring picture and the shooting time of the previous monitoring picture and the subsequent monitoring picture.
In a possible implementation manner, the step of determining the first object detection box in the previous monitoring picture and the second object detection box in the subsequent monitoring picture includes:
inputting the previous monitoring picture and the subsequent monitoring picture into a similar object track following model, wherein the similar object track following model comprises an object position determining sub-model and an object track following sub-model;
determining a first object detection frame in the previous monitoring picture through the object position determining submodel, wherein the object position determining submodel is trained by utilizing a sample pair to obtain corresponding submodel parameters, and the sample pair comprises a sample picture and characteristic information of the object detection frame in the sample picture;
determining a second object detection frame in the subsequent monitoring picture through the object track following sub-model, wherein the object track following sub-model adopts sub-model parameters which are the same as those of the object position determining sub-model;
similarity matching is carried out on the target object in the first object detection frame and the target object in the second object detection frame, and the similarity matching comprises the following steps:
determining a previous target object information parameter of the first object detection frame by using a target object identification layer in the similar object track following model;
determining subsequent target object information parameters of the second object detection frame by utilizing a following object identification layer in the similar object track following model;
the similar object track following model determines a difference parameter between the target object in the first object detection frame and the target object in the second object detection frame according to the prior target object information parameter and the subsequent target object information parameter;
and comparing the difference parameter with a preset difference parameter threshold; if the difference parameter is smaller than the preset difference parameter threshold, determining that the similarity matching between the target object in the first object detection frame and the target object in the second object detection frame is successful, and otherwise determining that the similarity matching between them fails.
In one possible implementation, the determining, by the object position determination submodel, the first object detection frame in the previous monitoring picture includes:
determining a sub-model by utilizing the object position, and performing previous undersampling processing on the previous monitoring picture for a set number of times to obtain a previous picture sample for the set number of times; the under-sampling parameters of the prior under-sampling processing of the set times are different;
for each previous picture sample, the object position determination submodel determining feature information of a first object detection box in the previous picture sample;
the determining a second object detection frame in the subsequent monitoring picture through the object track following submodel includes:
performing subsequent undersampling processing on the subsequent monitoring picture for a set number of times by using the object track following sub-model to obtain a subsequent picture sample for the set number of times; the under-sampling parameters of the subsequent under-sampling processing correspond to the under-sampling parameters of the prior under-sampling processing one to one;
for each subsequent picture sample, the object track following sub-model determines feature information of a second object detection frame in the subsequent picture sample;
determining difference parameters of a target object in the first object detection frame in the previous picture sample and a target object in the second object detection frame in the subsequent picture sample under the condition that the same under-sampling parameters are respectively determined according to the feature information of the first object detection frame in the previous picture sample and the feature information of the second object detection frame in the subsequent picture sample;
and calculating a difference parameter between the target object in the first object detection frame and the target object in a second object detection frame of a subsequent picture sample according to the difference parameter values corresponding to different undersampling parameters.
In a possible implementation manner, before the step of obtaining a previous monitoring picture and a subsequent monitoring picture with different shooting times in the area monitoring picture, the method further includes a step of training to obtain the similar object trajectory following model, where the step includes:
training an object position determination submodel in the initial similar object track following model to obtain submodel parameters of the corresponding object position determination submodel;
acquiring training sample pairs, wherein the training sample pairs comprise first sample pairs and second sample pairs, a first sample pair comprises first training sample images depicting the same target object and the labeling similarity of the first training sample images, and a second sample pair comprises second training sample images depicting different target objects and the labeling similarity of the second training sample images;
inputting the training sample pair into a preliminarily trained similar object track following model, and calculating a loss function value of the preliminarily trained similar object track following model according to the output similarity and the labeling similarity, wherein the preliminarily trained similar object track following model comprises the determined sub-model parameters of the object position determination sub-model;
and when the loss function value is smaller than a preset threshold value, determining the corresponding parameter as the parameter corresponding to the similar object track following model to obtain the similar object track following model.
In a possible implementation manner, the step of obtaining a direction and a movement speed of the target object leaving a monitoring area where the smart lamp post is located, and generating a control parameter according to the direction and the movement speed of the target object leaving the monitoring area where the smart lamp post is located to control the pole-mounted unmanned aerial vehicle to track the target object includes:
according to the coordinate system determined by the intelligent lamp pole and the azimuth information, the direction of the target object leaving the monitoring area where the intelligent lamp pole is located is obtained, and the movement speed of the target object leaving the monitoring area where the intelligent lamp pole is located is determined and obtained according to the movement information of the target object;
calculating control parameters for controlling the pole-mounted unmanned aerial vehicle to track the target object according to the direction and the movement speed of the target object leaving the monitoring area where the intelligent lamp pole is located and the current position and the movement state of the pole-mounted unmanned aerial vehicle in the coordinate system, wherein the control parameters comprise a parameter for controlling the lifting height of the pole-mounted unmanned aerial vehicle, a parameter for controlling the movement direction of the pole-mounted unmanned aerial vehicle and a parameter for controlling the movement speed of the pole-mounted unmanned aerial vehicle;
wherein the parameter for controlling the movement direction of the pole-mounted unmanned aerial vehicle is calculated according to the direction in which the target object leaves the monitoring area where the smart lamp pole is located and the current position of the pole-mounted unmanned aerial vehicle in the coordinate system, the parameter for controlling the lifting height of the pole-mounted unmanned aerial vehicle is calculated according to the environmental parameters of the position where the smart lamp pole is located and the size of the target object, and the parameter for controlling the movement speed of the pole-mounted unmanned aerial vehicle is determined according to the speed at which the target object leaves the smart lamp pole and the parameter for the lifting height of the pole-mounted unmanned aerial vehicle;
and sending the control parameters to the rod-mounted unmanned aerial vehicle to control the rod-mounted unmanned aerial vehicle to track the target object.
In one possible implementation, after the step of sending the control parameters to the on-pole drone to control the on-pole drone to track the target object, the method further includes:
acquiring monitoring video information fed back by the pole-mounted unmanned aerial vehicle;
carrying out object identification analysis on the monitoring video information to obtain object characteristic information, and comparing the object characteristic information with the object characteristic information of the target object, wherein the object characteristic information of the target object is identified from an image picture shot by the camera equipment by the cloud platform;
when the object characteristic information is consistent with the object characteristic information of the target object, performing object analysis on the target object in the monitoring video information to obtain the current motion information of the target object;
and generating a control parameter of the pole-mounted unmanned aerial vehicle according to the current motion information, and sending the control parameter of the pole-mounted unmanned aerial vehicle to the pole-mounted unmanned aerial vehicle so that the pole-mounted unmanned aerial vehicle tracks the target object.
In a second aspect, the application provides a pole-mounted unmanned aerial vehicle linkage tracking system, the system being applied to a cloud platform communicating with a camera device in a smart lamp pole and with a pole-mounted unmanned aerial vehicle, and the system comprising:
the acquisition module is used for acquiring the image frames around the intelligent lamp post shot by the camera equipment in real time and the image shooting angle of the camera equipment;
the determining module is used for obtaining an area monitoring picture of a monitoring area where the intelligent lamp post is located according to the received image picture of the periphery of the intelligent lamp post and the image shooting angle of the camera equipment;
the detection and analysis module is used for detecting whether the area monitoring picture comprises a preset target object or not, and if the area monitoring picture is detected to have the target object, analyzing the target object in the area monitoring picture to obtain motion information, wherein the motion information comprises a motion track and a motion speed;
the control module is used for judging whether the target object leaves the monitoring area where the intelligent lamp pole is located or not according to the motion information of the target object, if so, acquiring the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp pole is located, and generating control parameters according to the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp pole is located so as to control the pole-mounted unmanned aerial vehicle to track the target object.
In a third aspect, an embodiment of the present application provides a cloud platform, where the cloud platform includes a processor, a computer-readable storage medium, and a communication unit, where the computer-readable storage medium, the communication unit, and the processor are connected through a bus interface, the communication unit is used for being in communication connection with an image pickup device and a pole-mounted drone, the computer-readable storage medium is used for storing a program, an instruction, or a code, and the processor is used for executing the program, the instruction, or the code in the computer-readable storage medium to execute the pole-mounted drone linkage tracking method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed, the computer is caused to execute a pole-based unmanned aerial vehicle linkage tracking method in the first aspect or any one of the possible implementation manners of the first aspect.
Based on any one of the above aspects, first, the image pictures around the smart lamp pole shot in real time by the camera device and the image shooting angle of the camera device are acquired; then, an area monitoring picture of the monitoring area where the smart lamp pole is located is obtained according to the received image pictures of the surroundings of the smart lamp pole and the image shooting angle of the camera device; then, when it is detected that the target object exists in the area monitoring picture, the target object in the area monitoring picture is analyzed to obtain motion information; and finally, whether the target object leaves the monitoring area where the smart lamp pole is located is judged according to the motion information of the target object, and when the target object leaves that monitoring area, the direction and the movement speed with which it leaves are acquired and control parameters are generated from them so as to control the pole-mounted unmanned aerial vehicle to track the target object. In this way, when the target object moves out of the monitoring area of the camera device on the smart lamp pole, the pole-mounted unmanned aerial vehicle configured on the smart lamp pole continues to track it, which avoids the problem that the target object is easily lost because the monitoring area of the camera device is limited, expands the monitoring range of the smart lamp pole and improves its monitoring capability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic block diagram of an application scenario of the pole-mounted unmanned aerial vehicle linkage tracking method according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of the pole-mounted unmanned aerial vehicle linkage tracking method according to an embodiment of the present application;
Fig. 3 is a flowchart illustrating the sub-steps of step S103 in Fig. 2;
Fig. 4 is a schematic diagram of the functional modules of the pole-mounted unmanned aerial vehicle linkage tracking system according to an embodiment of the present application;
Fig. 5 is a schematic block diagram of the structural components of a cloud platform for implementing the above pole-mounted unmanned aerial vehicle linkage tracking according to an embodiment of the present application.
Detailed Description
The present application will now be described in detail with reference to the drawings, and the specific operations in the method embodiments may also be applied to the apparatus embodiments or the system embodiments.
Fig. 1 is a block diagram illustrating an application scenario of the pole-mounted unmanned aerial vehicle linkage tracking method according to an embodiment of the present application. The scenario may include a smart lamp pole 10 and a cloud platform 20, wherein the smart lamp pole 10 may include a camera device 11 for shooting the area around the smart lamp pole 10 and a pole-mounted unmanned aerial vehicle 12 located on the smart lamp pole 10. The camera device 11 may be disposed on a pan-tilt head, so that the shooting angle and shooting direction of the camera device 11 can be adjusted by adjusting the pan-tilt head. A cabin for accommodating the pole-mounted unmanned aerial vehicle 12 may be provided on the smart lamp pole 10; when the pole-mounted unmanned aerial vehicle 12 needs to work, the cabin is opened so that the pole-mounted unmanned aerial vehicle 12 can take off and carry out the corresponding cruising activity. The cloud platform 20 may be a server or a server cluster located in the cloud.
To solve the technical problem described in the background above, the pole-mounted unmanned aerial vehicle linkage tracking method provided by this embodiment is described in detail below with reference to the application scenario shown in Fig. 1 and the flow diagram of the method shown in Fig. 2.
In step S101, the image pictures around the smart lamp pole 10 shot in real time by the camera device 11, together with the image shooting angle of the camera device 11, are obtained.
The camera device 11 can obtain image pictures of the surroundings of the smart lamp pole 10 through rotation of the pan-tilt head, and sends the obtained image pictures, together with the image shooting angle at which each picture was obtained, to the cloud platform 20.
Step S102, obtaining an area monitoring picture of the monitoring area where the smart lamp post 10 is located according to the received image picture of the periphery of the smart lamp post 10 and the image shooting angle of the camera device.
The cloud platform 20 may generate a panoramic image of the surroundings of the smart lamp pole 10 from the image pictures shot by the camera device 11 and the image shooting angles at which they were shot; in this step, the area monitoring picture of the monitoring area where the smart lamp pole 10 is located may be such a panoramic image. The update frequency of the panoramic image can be determined from the rotation frequency of the camera device 11. For example, if the camera device 11 takes 3 seconds to rotate one full turn with the pan-tilt head, the area monitoring picture of the monitoring area can be updated once every 3 seconds.
Step S103, detecting whether the area monitoring picture comprises a preset target object, and if the target object exists in the area monitoring picture, analyzing the target object in the area monitoring picture to obtain motion information.
In this step, the motion information may include a motion trajectory and a motion speed.
Specifically, in this step, the characteristic parameters of the target object may be configured in the cloud platform in advance, so as to perform image detection and identification on the area monitoring picture, determine whether the target object exists in the area monitoring picture, and obtain the motion information of the target object by analyzing the position change of the target object in the area monitoring picture when the target object exists.
Step S104, judging whether the target object leaves the monitoring area where the intelligent lamp pole 10 is located or not according to the motion information of the target object, if so, acquiring the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp pole 10 is located, and generating control parameters according to the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp pole 10 is located so as to control the pole-mounted unmanned aerial vehicle 12 to track the target object.
In one implementation of the embodiment of the present application, whether the target object leaves the monitoring area where the smart lamp pole 10 is located may be determined from the distance between the position of the target object in the area monitoring picture and the edge of the area monitoring picture: if this distance is smaller than a preset distance threshold, it is determined that the target object is leaving the monitoring area where the smart lamp pole 10 is located. In another implementation, the determination may be made from the position change trend of the target object across a plurality of area monitoring pictures: if the target object gradually moves towards the edge of the area monitoring picture and its distance from the edge becomes smaller than the preset distance threshold, it is determined that the target object is leaving the monitoring area where the smart lamp pole 10 is located; otherwise, it is determined that it is not.
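As a minimal illustration of the two checks just described (the function name, the pixel threshold and the use of the target's center point are assumptions for the sketch, not taken from the patent), the decision could look like this:

```python
def is_leaving_area(track, frame_w, frame_h, dist_thresh_px=40):
    """Sketch of the edge-distance and trend checks described above.

    track: list of (x, y) target-center positions in successive
    area monitoring pictures, newest last.
    """
    x, y = track[-1]
    # Distance from the latest position to the nearest picture edge.
    edge_dist = min(x, y, frame_w - x, frame_h - y)
    near_edge = edge_dist < dist_thresh_px
    if len(track) < 2:
        return near_edge
    # Trend check: the target should also be moving towards that edge.
    px, py = track[-2]
    prev_edge_dist = min(px, py, frame_w - px, frame_h - py)
    return near_edge and edge_dist <= prev_edge_dist
```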
If it is judged that the target object leaves the monitoring area where the smart lamp pole 10 is located, the direction and the movement speed with which it leaves can be calculated from the position change of the target object between adjacent area monitoring pictures and the update period of those pictures, as sketched below.
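A minimal sketch of this calculation, assuming the target position is tracked in pixel coordinates with a fixed meters-per-pixel scale (both the scale factor and the function name are illustrative assumptions):

```python
import math

def departure_motion(p_prev, p_curr, update_period_s, meters_per_px):
    """Direction and speed from the position change between two
    adjacent area monitoring pictures.

    update_period_s: refresh period of the area monitoring picture,
    e.g. 3 s when the pan-tilt head completes one sweep in 3 s.
    """
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    heading_rad = math.atan2(dy, dx)  # direction of departure
    speed_mps = math.hypot(dx, dy) * meters_per_px / update_period_s
    return heading_rad, speed_mps
```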
Control parameters for controlling the pole-mounted unmanned aerial vehicle 12 are then calculated according to the direction and the movement speed with which the target object leaves the monitoring area where the smart lamp pole 10 is located, so that the pole-mounted unmanned aerial vehicle 12 can track the target object.
According to the above scheme, the image pictures around the smart lamp pole 10 shot in real time by the camera device 11 and the image shooting angle of the camera device 11 are obtained; an area monitoring picture of the monitoring area where the smart lamp pole 10 is located is obtained according to the received image pictures and shooting angles; when it is detected that the target object exists in the area monitoring picture, the target object in the area monitoring picture is analyzed to obtain motion information; whether the target object leaves the monitoring area where the smart lamp pole 10 is located is judged according to this motion information, and when the target object leaves, the direction and the movement speed with which it leaves are obtained and control parameters are generated from them to control the pole-mounted unmanned aerial vehicle 12 to track the target object. In this way, when the target object moves out of the monitoring area of the camera device 11 on the smart lamp pole 10, the pole-mounted unmanned aerial vehicle 12 configured on the smart lamp pole 10 continues to track it, which avoids the problem that the target object is easily lost because the monitoring area of the camera device is limited, expands the monitoring range of the smart lamp pole 10 and improves its monitoring capability.
In the embodiment of the present application, before step S101, the pole-mounted unmanned aerial vehicle linkage tracking method may further include a step of adjusting the shooting parameters of the camera device 11.
Specifically, the step of adjusting the shooting parameters of the image pickup apparatus 11 can be realized in the following manner.
First, shooting parameters of the image pickup apparatus 11 are adjusted, the shooting parameters including a pitch angle between the image pickup apparatus 11 and the ground and a height between the image pickup apparatus 11 and the ground.
Optionally, in this embodiment of the application, the pitch angle between the camera device 11 and the ground may be adjusted by the pan-tilt, and the height between the camera device 11 and the ground may be adjusted by lifting the pan-tilt by the lifting mechanism.
Then, the shooting visual range corresponding to the camera device 11 is obtained according to the pitch angle and the height between the camera device 11 and the ground.

The shooting visual range S may be calculated as:

S = h · tan θ

where h is the height between the camera device 11 and the ground and θ is the pitch angle between the camera device 11 and the ground.
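A minimal numeric check of this relation (the function name is illustrative; reading θ as the tilt measured from the vertical pole axis is an assumption, since under that convention a larger tilt yields a larger ground range, matching S = h · tan θ):

```python
import math

def shooting_visual_range(height_m: float, pitch_rad: float) -> float:
    """Visual range S = h * tan(theta) per the formula above."""
    return height_m * math.tan(pitch_rad)

# A camera mounted 6 m up and tilted 60 degrees from the vertical
# sees the ground out to roughly 10.4 m from the pole base.
print(shooting_visual_range(6.0, math.radians(60)))  # ~10.39
```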
Then, the camera device 11 is controlled to shoot image pictures around the smart lamp pole 10 and to record the corresponding image shooting angles, and the image pictures and the corresponding image shooting angles sent by the camera device 11 are received.
And then, detecting whether an area monitoring picture of the surrounding monitoring area of the intelligent lamp pole 10 can be obtained or not according to the image picture and the corresponding image shooting angle.
Then, if the area monitoring picture of the monitoring area around the intelligent lamp pole 10 cannot be obtained, readjusting the shooting parameters of the camera equipment, and if the area monitoring picture of the monitoring area around the intelligent lamp pole 10 can be obtained, detecting whether a verification object close to the edge of the area monitoring picture exists in the area monitoring picture, wherein the verification object is an object which is arranged in the monitoring area around the intelligent lamp pole 10 in advance;
finally, if it is detected that a verification object is located at the edge of the area monitoring picture in the area monitoring picture, identifying the verification object, and if the set object feature in the verification object cannot be identified, readjusting the shooting parameters of the camera device 11; if the setting object feature of the verification object can be identified, the adjustment of the shooting parameters of the image pickup apparatus 11 is completed, wherein the verification object and the setting object feature are configured in the cloud platform in advance.
The above adjustment ensures that the camera device 11, while keeping its monitoring range large enough, can still recognize objects within that range. It thereby guarantees a relatively large monitoring range for the camera device 11 and avoids the waste of resources that would result from frequently dispatching the pole-mounted unmanned aerial vehicle 12 because the monitoring range of the camera device 11 is too small. A sketch of such an adjustment loop follows.
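A minimal sketch of the adjustment loop, assuming hypothetical camera and cloud interfaces (candidate_parameters, set_pitch, set_height, capture_with_angles, stitch_area_picture, find_verification_object and recognize_features are all assumed names, not an API disclosed by the patent):

```python
def adjust_shooting_parameters(camera, cloud):
    """Try candidate (pitch, height) settings until an area picture can
    be stitched AND the edge verification object's features are legible.
    """
    for pitch, height in camera.candidate_parameters():
        camera.set_pitch(pitch)
        camera.set_height(height)
        frames = camera.capture_with_angles()
        picture = cloud.stitch_area_picture(frames)
        if picture is None:
            continue                      # no usable area picture: readjust
        obj = cloud.find_verification_object(picture, near_edge=True)
        if obj is not None and cloud.recognize_features(obj):
            return pitch, height          # range large enough, features legible
    raise RuntimeError("no shooting parameters satisfied both checks")
```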
Referring to fig. 3, in the embodiment of the present application, step S103 may be implemented by the following sub-steps.
And a substep S1031 of obtaining a previous monitoring picture and a subsequent monitoring picture with different shooting times in the region monitoring picture.
In this sub-step, the previous monitoring picture and the subsequent monitoring picture, which have different shooting times, may or may not be pictures shot at adjacent times. For example, if the update cycle of the area monitoring picture is 3 seconds, the shooting interval between the previous monitoring picture and the subsequent monitoring picture may be 3 seconds, or 3N seconds, where N is a positive integer greater than 1.
And a substep S1032 of detecting whether an object detection frame exists in the previous monitoring picture and the subsequent monitoring picture.
In this sub-step, an edge-detection image algorithm can be used to detect whether a closed edge contour exists in the previous monitoring picture and the subsequent monitoring picture (an edge contour is a position where the grey value of the image changes abruptly; when a target object appears in the monitoring area, it can generally be detected from the monitoring picture by the edge-detection algorithm). If a closed edge contour exists, it is determined that an object detection frame exists, as in the sketch below.
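A minimal OpenCV sketch of this check (the Canny thresholds and the minimum contour area are illustrative assumptions; the patent does not fix a particular edge detector):

```python
import cv2

def has_object_detection_frame(frame_bgr) -> bool:
    """Detect abrupt grey-value changes and keep large outer contours."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # abrupt grey-value changes
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Treat any sufficiently large outer contour as a candidate
    # object detection frame; tiny open fragments are ignored.
    return any(cv2.contourArea(c) > 500 for c in contours)
```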
And a substep S1033 of determining a first object detection box in the previous monitoring picture and a second object detection box in the subsequent monitoring picture if both the previous monitoring picture and the subsequent monitoring picture have object detection boxes.
In this sub-step, determining the first object detection frame and the second object detection frame includes determining the position coordinates of each pixel point within each frame and their relative position relationships.
The sub-step S1034 matches the similarity between the target object in the first object detection box and the target object in the second object detection box.
In this sub-step, feature information of the target object in the first object detection frame and of the target object in the second object detection frame is acquired. The feature information comprises pixel-point information of key parts; for example, when the target object is a person, the key parts may be the facial features of the human face, and the pixel-point information comprises the position coordinates of the pixel points.
Optionally, in this sub-step S1034, a distance value between the target object in the first object detection frame and the target object in the second object detection frame may be calculated based on the Euclidean distance formula, and similarity matching may be performed by comparing the distance value with a preset distance threshold.
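A minimal sketch of this Euclidean-distance matching (the threshold value and the flattened key-part coordinate layout of the feature vectors are assumptions):

```python
import numpy as np

def similarity_match(feat_prev, feat_curr, dist_thresh=0.8) -> bool:
    """Match the target across frames: a smaller distance means a smaller
    difference, so a distance under the threshold counts as a match.
    """
    d = np.linalg.norm(np.asarray(feat_prev, dtype=float) -
                       np.asarray(feat_curr, dtype=float))
    return d < dist_thresh
```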
In the sub-step S1035, if the similarity matching is successful, motion information is obtained according to the position of the first object detection frame in the previous monitoring picture, the position of the second object detection frame in the subsequent monitoring picture, and the shooting time of the previous monitoring picture and the subsequent monitoring picture.
In the embodiment of the present application, the sub-step S1033 may be implemented as follows.
Firstly, inputting the prior monitoring picture and the subsequent monitoring picture into a similar object track following model, wherein the similar object track following model comprises an object position determining sub-model and an object track following sub-model.
And then, determining a first object detection frame in the previous monitoring picture by the object position determining submodel, wherein the object position determining submodel is trained by utilizing a sample pair to obtain corresponding submodel parameters, and the sample pair comprises a sample picture and characteristic information of the object detection frame in the sample picture.
Finally, a second object detection frame in the subsequent monitoring picture is determined through the object track following sub-model, wherein the object track following sub-model adopts the same sub-model parameters as the object position determining sub-model.
Through this artificial-intelligence-based model identification, the object detection frame can be identified from the monitoring picture with improved accuracy.
In the embodiment of the present application, the sub-step S1034 may be implemented as follows.
Firstly, a target object identification layer in the similar object track following model is utilized to determine the prior target object information parameters of the first object detection frame.
Then, subsequent target object information parameters of the second object detection frame are determined by utilizing a following object identification layer in the similar object track following model.
Then, the similar object trajectory following model determines a difference parameter between the target object in the first object detection frame and the target object in the second object detection frame according to the previous target object information parameter and the subsequent target object information parameter.
Specifically, the difference parameter between the target objects may be determined by calculating a euclidean distance between the target object in the first object detection frame and the target object in the second object detection frame, where the difference parameter is positively correlated with the euclidean distance, i.e., the smaller the euclidean distance, the smaller the difference parameter.
Finally, the difference parameter is compared with a preset difference parameter threshold. If the difference parameter is smaller than the preset difference parameter threshold, it is judged that the similarity matching between the target object in the first object detection frame and the target object in the second object detection frame is successful; otherwise, it is judged that the similarity matching between them fails.
In the embodiment of the present application, the step of determining the first object detection frame in the previous monitoring picture by the object position determination submodel may be implemented in the following manner.
Firstly, the object position determining sub-model is used to perform the previous undersampling processing on the previous monitoring picture a set number of times to obtain the set number of previous picture samples, where the undersampling parameters used in each of the set number of previous undersampling passes are different.
Then, for each previous picture sample, the object position determination submodel determines feature information of a first object detection box in the previous picture sample.
The step of determining the second object detection frame in the subsequent monitoring picture by the object trajectory following submodel may be implemented in the following manner.
Firstly, performing subsequent undersampling processing on the subsequent monitoring picture for a set number of times by using the object track following submodel to obtain a subsequent picture sample for the set number of times; the under-sampling parameters of the subsequent under-sampling processing correspond to the under-sampling parameters of the prior under-sampling processing one to one;
then, for each subsequent picture sample, the object track following sub-model determines feature information of a second object detection frame in the subsequent picture sample;
then, determining difference parameters of a target object in the first object detection frame in the previous picture sample and a target object in the second object detection frame in the subsequent picture sample under the condition that the same under-sampling parameters are respectively determined according to the feature information of the first object detection frame in the previous picture sample and the feature information of the second object detection frame in the subsequent picture sample;
and finally, calculating the difference parameter between the target object in the first object detection frame and the target object in the second object detection frame of the subsequent picture sample according to the difference parameter values corresponding to different under-sampling parameters.
Specifically, a weighted summation manner may be adopted to add difference parameter values corresponding to different under-sampling parameters to obtain a difference parameter between the target object in the first object detection frame and the target object in the second object detection frame of the subsequent picture sample.
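A minimal sketch of this weighted summation (uniform weights are an assumption; the patent only specifies that a weighted summation may be adopted):

```python
def fused_difference(diff_by_scale, weights=None):
    """Fuse per-undersampling-scale difference values into one parameter.

    diff_by_scale: difference parameter computed at each undersampling
    setting, comparing the previous and subsequent picture samples
    obtained under the same setting.
    """
    if weights is None:
        weights = [1.0 / len(diff_by_scale)] * len(diff_by_scale)
    return sum(w * d for w, d in zip(weights, diff_by_scale))
```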
In the above manner, a plurality of picture samples are obtained through undersampling, the difference parameter between the target object in the previous picture sample and the target object in the subsequent picture sample is computed under each identical undersampling parameter, and these per-parameter values are then combined into the overall difference parameter between the target object in the first object detection frame and the target object in the second object detection frame, so that the comparison draws on multiple sampling scales.
In this embodiment of the application, before the step of obtaining the previous monitoring picture and the subsequent monitoring picture with different shooting times in the area monitoring picture, the method further includes a step of training to obtain the similar object trajectory following model, where the step includes:
firstly, training an object position determination submodel in an initial similar object track following model to obtain a submodel parameter of the corresponding object position determination submodel.
Then, training sample pairs are acquired. The training sample pairs include first sample pairs and second sample pairs: a first sample pair comprises first training sample images depicting the same target object together with the labeling similarity of the first training sample images, and a second sample pair comprises second training sample images depicting different target objects together with the labeling similarity of the second training sample images.
Then, inputting the training sample pair into a preliminarily trained similar object track following model, and calculating a loss function value of the preliminarily trained similar object track following model according to the output similarity and the labeling similarity, wherein the preliminarily trained similar object track following model comprises the determined sub-model parameters of the object position determining sub-model.
And finally, when the loss function value is smaller than a preset threshold value, determining the corresponding parameter as the parameter corresponding to the similar object track following model to obtain the similar object track following model.
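A minimal PyTorch-style sketch of this second training stage, assuming a model whose position-determination sub-model parameters are already trained and held fixed, and whose forward pass maps an image pair to a similarity score (the model interface, the Adam optimizer and the MSE loss are all illustrative assumptions):

```python
import torch
import torch.nn as nn

def train_track_following(model, sample_pairs, threshold=1e-3, lr=1e-4):
    """sample_pairs: list of (img_a, img_b, labeled_similarity) tensors,
    mixing same-target pairs and different-target pairs.
    """
    loss_fn = nn.MSELoss()  # loss between output and labeling similarity
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    while True:
        total = 0.0
        for img_a, img_b, label in sample_pairs:
            opt.zero_grad()
            pred = model(img_a, img_b)  # output similarity
            loss = loss_fn(pred, label)
            loss.backward()
            opt.step()
            total += loss.item()
        if total / len(sample_pairs) < threshold:
            return model                # loss below the preset threshold
```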
In the embodiment of the present application, step S104 may also be implemented in the following manner.
Firstly, according to the coordinate system determined by the intelligent lamp pole 10 and the direction information, the direction of the target object leaving the monitoring area where the intelligent lamp pole 10 is located is obtained, and according to the motion information of the target object, the motion speed of the target object leaving the monitoring area where the intelligent lamp pole 10 is located is determined.
Specifically, a three-dimensional coordinate system is determined with the intersection point of the smart lamp pole 10 and the ground as the origin, the due-east direction as the X axis, the due-north direction as the Y axis and the smart lamp pole 10 itself as the Z axis. According to this coordinate system, the direction in which the target object leaves the monitoring area where the smart lamp pole 10 is located and the movement speed with which it leaves are determined.
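A minimal sketch of computing the departure direction in this pole-centered frame (expressing the direction as a compass bearing is an assumption made for illustration):

```python
import math

def departure_bearing(p_prev, p_curr):
    """p_prev, p_curr: (x_east_m, y_north_m) ground positions of the
    target in the coordinate system described above.
    """
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    bearing_deg = math.degrees(math.atan2(dx, dy)) % 360  # 0 = north, 90 = east
    distance_m = math.hypot(dx, dy)  # divide by the update period for speed
    return bearing_deg, distance_m
```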
Then, according to the direction and the movement speed of the target object leaving the monitoring area where the smart lamp pole 10 is located and the current position and the movement state of the pole-mounted unmanned aerial vehicle 12 in the coordinate system, control parameters for controlling the pole-mounted unmanned aerial vehicle 12 to track the target object are calculated, wherein the control parameters include parameters for controlling the lifting height of the pole-mounted unmanned aerial vehicle 12, parameters for controlling the movement direction of the pole-mounted unmanned aerial vehicle 12 and parameters for controlling the movement speed of the pole-mounted unmanned aerial vehicle 12.
The parameters for controlling the movement direction of the pole-mounted unmanned aerial vehicle 12 are calculated from the direction in which the target object leaves the monitoring area where the intelligent lamp pole 10 is located and the current position of the pole-mounted unmanned aerial vehicle 12 in the coordinate system. The parameters for controlling the lifting height of the pole-mounted unmanned aerial vehicle 12 are calculated from the environmental parameters at the position of the intelligent lamp pole 10 and the size of the target object. The parameters for controlling the movement speed of the pole-mounted unmanned aerial vehicle 12 are calculated from the speed at which the target object leaves the intelligent lamp pole 10 and the parameters for controlling the lifting height of the pole-mounted unmanned aerial vehicle 12.
Finally, the control parameters are sent to the pole-mounted unmanned aerial vehicle 12 to control it to track the target object.
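To make the three control-parameter calculations above concrete, the following sketch works in the pole-centred coordinate system (origin at the pole/ground intersection, X due east, Y due north, Z along the pole). The specific formulas, including the bearing via atan2, a height margin that grows with target size and a climb-corrected speed, as well as the exit_point_xy parameter (the point where the target left the monitored area), are illustrative assumptions; the patent states no concrete equations.

    import math

    def compute_control_params(exit_point_xy, exit_speed, drone_pos_xy,
                               env_clearance_m, target_size_m):
        # Movement direction: bearing in the X-Y plane from the drone's
        # current position toward the point where the target left the area.
        heading_rad = math.atan2(exit_point_xy[1] - drone_pos_xy[1],
                                 exit_point_xy[0] - drone_pos_xy[0])
        # Lifting height: clear the local environment, with a margin that
        # grows with the size of the target object (assumed linear relation).
        height_m = env_clearance_m + 2.0 * target_size_m
        # Movement speed: match the target's exit speed, corrected for the
        # extra path length introduced by the climb (assumed correction).
        speed_mps = exit_speed * (1.0 + height_m / 100.0)
        return {"heading_rad": heading_rad,
                "height_m": height_m,
                "speed_mps": speed_mps}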
In this embodiment of the application, after step S104, the linkage tracking method for the unmanned aerial vehicle based on pole loading may further include:
acquiring monitoring video information fed back by the pole-mounted unmanned aerial vehicle 12;
performing object identification analysis on the monitoring video information to obtain object characteristic information, and comparing the object characteristic information with object characteristic information of the target object, wherein the object characteristic information of the target object is identified by the cloud platform from an image picture shot by the camera 11;
when the object characteristic information is consistent with the object characteristic information of the target object, performing object analysis on the target object in the monitoring video information to obtain the current motion information of the target object;
generating a control parameter of the pole-mounted unmanned aerial vehicle 12 according to the current motion information, and sending the control parameter of the pole-mounted unmanned aerial vehicle 12 to the pole-mounted unmanned aerial vehicle 12 so that the pole-mounted unmanned aerial vehicle 12 tracks the target object.
After the object in the fed-back video is confirmed to be the target object, the motion information of the target object is analyzed, the control parameters of the pole-mounted unmanned aerial vehicle 12 are adjusted according to the analysis result, and the pole-mounted unmanned aerial vehicle 12 is controlled to continuously monitor the target object, thereby expanding the monitoring range of the intelligent lamp pole 10.
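One iteration of this feedback loop might look like the sketch below. The cloud and drone objects and every method name on them are hypothetical interfaces invented for illustration, not an API from the patent.

    def tracking_feedback_step(cloud, drone, target_features):
        # Pull the monitoring video fed back by the pole-mounted drone.
        frame = drone.fetch_video_frame()
        # Object identification analysis on the fed-back frame.
        features = cloud.extract_object_features(frame)
        # Compare against the target features identified from the pole camera.
        if cloud.features_match(features, target_features):
            motion = cloud.analyze_motion(frame)            # current motion info
            params = cloud.generate_control_params(motion)
            drone.send_control_params(params)               # keep tracking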
Referring to fig. 4, fig. 4 is a schematic diagram of the functional modules of a linkage tracking system 300 for a pole-based unmanned aerial vehicle according to an embodiment of the present disclosure. In this embodiment, the functional modules of the linkage tracking system 300 may be divided according to the above method embodiments, with each functional module corresponding to a step of the method embodiments. The linkage tracking system 300 may include an acquiring module 310, a determining module 320, a detection and analysis module 330 and a control module 340; the function of each module is described in detail below.
The acquiring module 310 is configured to acquire, in real time, the image frames around the smart light pole 10 captured by the image capturing device 11 and the image capturing angle of the image capturing device 11.
The image capturing device 11 obtains image frames around the smart lamp pole 10 through rotation of the pan/tilt head, and sends each image frame, together with the image capturing angle at which it was obtained, to the cloud platform 20.
The obtaining module 310 is configured to perform the step S101, and as for a detailed implementation of the obtaining module 310, reference may be made to the detailed description of the step S101.
The determining module 320 is configured to obtain an area monitoring picture of a monitoring area where the smart light pole 10 is located according to the received image picture of the periphery of the smart light pole 10 and the image capturing angle of the image capturing device 11.
The determining module 320 may generate a panoramic image of the surroundings of the smart light pole 10 from the image frames captured by the image capturing device 11 and the image capturing angles at which they were captured; in this step, the area monitoring picture of the monitoring area where the smart light pole 10 is located may be such a panoramic image. The update frequency of the panoramic image can be determined from the rotational frequency of the image pickup apparatus 11. For example, if the image pickup apparatus 11 takes 3 seconds to rotate one full turn with the pan/tilt head, the area monitoring picture of the monitoring area can be updated once every 3 seconds.
The determining module 320 is configured to perform the step S102, and as for a detailed implementation of the determining module 320, reference may be made to the detailed description of the step S102.
The detection and analysis module 330 is configured to detect whether the area monitoring picture includes a preconfigured target object and, if the target object is detected in the area monitoring picture, analyze the target object in the area monitoring picture to obtain motion information.
The motion information may include a motion trajectory and a motion speed.
Specifically, the characteristic parameters of the target object may be configured in the cloud platform in advance, so as to determine whether the target object exists in the area monitoring picture by performing image detection and identification on the area monitoring picture, and when the target object exists, obtain the motion information of the target object by analyzing the position change of the target object in the area monitoring picture.
In the embodiment of the present application, the detection and analysis module 330 may be a software module for performing the following steps.
In sub-step S1031, a previous monitoring picture and a subsequent monitoring picture with different shooting times in the area monitoring picture are obtained.
In this sub-step, the previous monitoring picture and the subsequent monitoring picture need not be shot at adjacent times. If the update cycle of the area monitoring picture is 3 seconds, the shooting interval between the previous monitoring picture and the subsequent monitoring picture may be 3 seconds, or 3N seconds, where N is a positive integer greater than 1.
In sub-step S1032, whether an object detection frame exists in the previous monitoring picture and the subsequent monitoring picture is detected.
In this sub-step, whether a closed edge contour exists in the previous monitoring picture and the subsequent monitoring picture can be detected through an edge detection algorithm (an edge contour is a position where the grey value of the image changes abruptly; when a target object appears in the monitoring area, it can generally be detected from the monitoring picture in this way). If a closed edge contour exists, it is determined that an object detection frame exists.
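As a minimal sketch of this edge-based detection, the following code uses OpenCV's Canny detector and external-contour extraction to turn closed contours into bounding boxes. The Canny thresholds and the minimum-area filter are illustrative assumptions; the patent only specifies "an edge detection image algorithm".

    import cv2

    def find_object_detection_frames(frame_gray, low=50, high=150, min_area=100):
        # Locate abrupt grey-value changes (edge contours) with Canny.
        edges = cv2.Canny(frame_gray, low, high)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = []
        for contour in contours:
            # Ignore tiny spurious contours; keep plausible closed contours.
            if cv2.contourArea(contour) > min_area:
                boxes.append(cv2.boundingRect(contour))  # (x, y, w, h)
        return boxes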
In sub-step S1033, if object detection frames exist in both the previous monitoring picture and the subsequent monitoring picture, a first object detection frame in the previous monitoring picture and a second object detection frame in the subsequent monitoring picture are determined.
In this sub-step, determining the first object detection frame and the second object detection frame includes determining the position coordinates of each pixel point within the two frames and their relative positional relationship.
In sub-step S1034, similarity matching is performed between the target object in the first object detection frame and the target object in the second object detection frame.
The feature information of the target object in the first object detection frame and the feature information of the target object in the second object detection frame are obtained, where the feature information includes pixel point information of key parts. For example, when the target object is a person, the key parts can be the facial features of the human face, and the pixel point information includes the position coordinates of the pixel points.
Optionally, in sub-step S1034, a distance value between the target object in the first object detection frame and the target object in the second object detection frame may be calculated based on the Euclidean distance formula, and similarity matching may be performed according to the difference between the distance value and a preset distance threshold.
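A small sketch of such Euclidean-distance matching over key-part coordinates follows. Averaging the per-key-part distances and the 20-pixel threshold are assumptions; the patent only specifies a Euclidean distance compared against a preset threshold.

    import numpy as np

    def similarity_match(keypoints_a, keypoints_b, dist_threshold=20.0):
        # keypoints_*: (k, 2) arrays of key-part pixel coordinates taken from
        # the first and second object detection frames, in the same order.
        a = np.asarray(keypoints_a, dtype=np.float64)
        b = np.asarray(keypoints_b, dtype=np.float64)
        # Per-key-part Euclidean distance, reduced to an average distance value.
        distances = np.linalg.norm(a - b, axis=1)
        # Matching succeeds when the distance stays below the preset threshold.
        return float(distances.mean()) < dist_threshold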
In sub-step S1035, if the similarity matching is successful, motion information is obtained according to the position of the first object detection frame in the previous monitoring picture, the position of the second object detection frame in the subsequent monitoring picture, and the shooting times of the previous monitoring picture and the subsequent monitoring picture.
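The motion information of sub-step S1035 can then be recovered from the two matched detection frames and their shooting times, as in the sketch below. Working with box centres in pixel units is an assumption, since the patent does not fix the unit of the motion information.

    def motion_from_detection_frames(box_prev, box_next, t_prev, t_next):
        # Boxes are (x, y, w, h); use the centre of each detection frame.
        cx_p, cy_p = box_prev[0] + box_prev[2] / 2, box_prev[1] + box_prev[3] / 2
        cx_n, cy_n = box_next[0] + box_next[2] / 2, box_next[1] + box_next[3] / 2
        dt = t_next - t_prev                       # shooting-time difference (s)
        vx, vy = (cx_n - cx_p) / dt, (cy_n - cy_p) / dt
        speed = (vx ** 2 + vy ** 2) ** 0.5         # scalar movement speed (px/s)
        return {"trajectory": ((cx_p, cy_p), (cx_n, cy_n)),
                "velocity": (vx, vy), "speed": speed}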
For a more detailed implementation of the detection and analysis module 330, reference may be made to the above detailed description of step S103.
The control module 340 is configured to judge, according to the motion information of the target object, whether the target object leaves the monitoring area where the smart lamp post 10 is located; if so, it obtains the direction and movement speed at which the target object leaves the monitoring area, and generates control parameters from that direction and movement speed to control the pole-mounted unmanned aerial vehicle 12 to track the target object.
In this embodiment, the control module 340 may be specifically configured to:
First, the direction in which the target object leaves the monitoring area where the intelligent lamp pole 10 is located is obtained according to the coordinate system determined by the intelligent lamp pole 10 and the direction information, and the movement speed at which the target object leaves the monitoring area is determined according to the motion information of the target object.
Specifically, a three-dimensional coordinate system is determined with the intersection of the intelligent lamp pole 10 and the ground as the origin, the due-east direction as the X axis, the due-north direction as the Y axis, and the intelligent lamp pole 10 itself as the Z axis. In this coordinate system, the direction in which the target object leaves the monitoring area where the intelligent lamp pole 10 is located and its movement speed in that direction are determined.
Then, according to the direction and the movement speed of the target object leaving the monitoring area where the smart lamp pole 10 is located and the current position and the movement state of the pole-mounted unmanned aerial vehicle 12 in the coordinate system, control parameters for controlling the pole-mounted unmanned aerial vehicle 12 to track the target object are calculated, wherein the control parameters include parameters for controlling the lifting height of the pole-mounted unmanned aerial vehicle 12, parameters for controlling the movement direction of the pole-mounted unmanned aerial vehicle 12 and parameters for controlling the movement speed of the pole-mounted unmanned aerial vehicle 12.
The parameters for controlling the movement direction of the pole-mounted unmanned aerial vehicle 12 are calculated from the direction in which the target object leaves the monitoring area where the intelligent lamp pole 10 is located and the current position of the pole-mounted unmanned aerial vehicle 12 in the coordinate system. The parameters for controlling the lifting height of the pole-mounted unmanned aerial vehicle 12 are calculated from the environmental parameters at the position of the intelligent lamp pole 10 and the size of the target object. The parameters for controlling the movement speed of the pole-mounted unmanned aerial vehicle 12 are calculated from the speed at which the target object leaves the intelligent lamp pole 10 and the parameters for controlling the lifting height of the pole-mounted unmanned aerial vehicle 12.
Finally, the control parameters are sent to the pole-mounted unmanned aerial vehicle 12 to control it to track the target object.
The control module 340 is configured to execute the step S104, and the detailed implementation of the control module 340 may refer to the detailed description of the step S104.
It should be noted that the division of the modules of the above apparatus is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules may all be implemented in software (e.g., open source software) invoked by the processing element. Or may be implemented entirely in hardware. And part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the detecting and analyzing module 330 may be a separate processing element, or may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and the detecting and analyzing module 330 may be invoked by a processing element of the apparatus to perform the functions thereof. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
Referring to fig. 5, fig. 5 shows a schematic hardware structure diagram of a cloud platform 20 for implementing the above-mentioned linkage tracking method for the pole-mounted drone according to the embodiment of the present disclosure. The cloud platform 20 may be implemented on a cloud server. As shown in fig. 5, the cloud platform 20 may include a processor 210, a computer-readable storage medium 220, a bus 230, and a radio frequency unit 240.
In a specific implementation process, the at least one processor 210 executes the computer-executable instructions stored in the computer-readable storage medium 220 (for example, the modules included in the linkage tracking system 300 for a pole-based drone shown in fig. 4), so that the processor 210 can execute the linkage tracking method for a pole-based drone of the above method embodiments. The processor 210, the computer-readable storage medium 220 and the radio frequency unit 240 are connected by the bus 230, and the processor 210 may be configured to control the transceiving actions of the radio frequency unit 240.
For a specific implementation process of the processor 210, reference may be made to the above-mentioned method embodiments executed by the cloud platform 20, and implementation principles and technical effects are similar, which are not described herein again.
The computer-readable storage medium 220 may comprise high-speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus 230 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, the bus is drawn as a single line in the figures of the present application, but this does not mean that there is only one bus or one type of bus.
In addition, an embodiment of the present application further provides a readable storage medium storing computer-executable instructions; when a processor executes the instructions, the above pole-based unmanned aerial vehicle linkage tracking method is implemented.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Finally, it should be understood that the examples in this specification are only intended to illustrate its principles. Other variations are also possible within the scope of this description. Thus, by way of example and not limitation, alternative configurations of the embodiments can be considered consistent with the teachings of this specification. Accordingly, the embodiments of the present description are not limited to those explicitly described and depicted herein.

Claims (10)

1. A linkage tracking method for a pole-mounted unmanned aerial vehicle, applied to a cloud platform which communicates with a camera device in a smart lamp pole and with the pole-mounted unmanned aerial vehicle, the method comprising the following steps:
acquiring an image picture around the intelligent lamp pole shot by the camera equipment in real time and an image shooting angle of the camera equipment;
obtaining an area monitoring picture of a monitoring area where the intelligent lamp post is located according to the received image picture of the periphery of the intelligent lamp post and the image shooting angle of the camera equipment;
detecting whether the area monitoring picture comprises a preset target object or not, and if the area monitoring picture is detected to have the target object, analyzing the target object in the area monitoring picture to obtain motion information, wherein the motion information comprises a motion track and a motion speed;
and judging whether the target object leaves the monitoring area where the intelligent lamp post is located or not according to the motion information of the target object, if so, acquiring the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp post is located, and generating control parameters according to the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp post is located so as to control the pole-mounted unmanned aerial vehicle to track the target object.
2. The linkage tracking method for pole-mounted unmanned aerial vehicle according to claim 1, wherein before the step of obtaining the image frames around the smart lamp pole and the image capturing angle of the camera device captured by the camera device in real time, the method further comprises a step of adjusting the capturing parameters of the camera device, and the step comprises:
adjusting shooting parameters of the camera shooting equipment, wherein the shooting parameters comprise a pitch angle between the camera shooting equipment and the ground and a height between the camera shooting equipment and the ground;
obtaining a shooting visual range corresponding to the camera equipment according to the pitch angle and the height between the camera equipment and the ground;
controlling the camera shooting equipment to shoot image frames around the intelligent lamp post and image shooting angles of the camera shooting equipment, and receiving the image frames and the corresponding image shooting angles sent by the camera shooting equipment;
detecting whether an area monitoring picture of a peripheral monitoring area of the intelligent lamp pole can be obtained or not according to the image picture and the corresponding image shooting angle;
if the area monitoring picture of the intelligent lamp pole peripheral monitoring area cannot be obtained, readjusting the shooting parameters of the camera equipment, and if the area monitoring picture of the intelligent lamp pole peripheral monitoring area can be obtained, detecting whether a verification object close to the edge of the area monitoring picture exists in the area monitoring picture, wherein the verification object is an object which is arranged in the intelligent lamp pole peripheral monitoring area in advance;
if it is detected that the verification object is located at the edge of the area monitoring picture, identifying the verification object; if the set object feature of the verification object cannot be identified, readjusting the shooting parameters of the camera equipment; and if the set object feature of the verification object can be identified, finishing the adjustment of the shooting parameters of the camera equipment, wherein the verification object and the set object feature are configured in the cloud platform in advance.
3. The linkage tracking method for the pole-mounted unmanned aerial vehicle according to claim 1, wherein the step of analyzing the target object in the area monitoring picture to obtain motion information comprises:
acquiring a previous monitoring picture and a subsequent monitoring picture which have different shooting times in the area monitoring picture;
detecting whether an object detection frame exists in the previous monitoring picture and the subsequent monitoring picture;
if the object detection frames exist in the previous monitoring picture and the subsequent monitoring picture, determining a first object detection frame in the previous monitoring picture and a second object detection frame in the subsequent monitoring picture;
similarity matching is carried out on the target object in the first object detection frame and the target object in the second object detection frame;
and if the similarity matching is successful, obtaining motion information according to the position of a first object detection frame in the previous monitoring picture, the position of a second object detection frame in the subsequent monitoring picture and the shooting time of the previous monitoring picture and the subsequent monitoring picture.
4. The linkage tracking method for the pole-mounted unmanned aerial vehicle according to claim 3, wherein the step of determining the first object detection frame in the previous monitoring picture and the second object detection frame in the subsequent monitoring picture comprises:
inputting the previous monitoring picture and the subsequent monitoring picture into a similar object track following model, wherein the similar object track following model comprises an object position determining sub-model and an object track following sub-model;
determining a first object detection frame in the previous monitoring picture through the object position determining submodel, wherein the object position determining submodel is trained by utilizing a sample pair to obtain corresponding submodel parameters, and the sample pair comprises a sample picture and characteristic information of the object detection frame in the sample picture;
determining a second object detection frame in the subsequent monitoring picture through the object track following sub-model, wherein the object track following sub-model adopts sub-model parameters which are the same as those of the object position determining sub-model;
similarity matching is carried out on the target object in the first object detection frame and the target object in the second object detection frame, and the similarity matching comprises the following steps:
determining a previous target object information parameter of the first object detection frame by using a target object identification layer in the similar object track following model;
determining subsequent target object information parameters of the second object detection frame by utilizing a following object identification layer in the similar object track following model;
the similar object track following model determines a difference parameter between the target object in the first object detection frame and the target object in the second object detection frame according to the previous target object information parameter and the subsequent target object information parameter;
and comparing the difference parameter with a preset difference parameter threshold; if the difference parameter is smaller than the preset difference parameter threshold, determining that the similarity matching between the target object in the first object detection frame and the target object in the second object detection frame is successful, and otherwise, determining that the similarity matching between the target object in the first object detection frame and the target object in the second object detection frame is unsuccessful.
5. The linkage tracking method for the pole-mounted unmanned aerial vehicle according to claim 4, wherein the step of determining the first object detection frame in the previous monitoring picture through the object position determination submodel comprises:
determining a sub-model by utilizing the object position, and performing previous undersampling processing on the previous monitoring picture for a set number of times to obtain a previous picture sample for the set number of times; the set times of the under-sampling parameters of the prior under-sampling treatment are different;
for each previous picture sample, the object position determination submodel determining feature information of a first object detection box in the previous picture sample;
the step of determining a second object detection frame in the subsequent monitoring picture through the object track following submodel includes:
performing subsequent undersampling processing on the subsequent monitoring picture for a set number of times by using the object track following sub-model to obtain a subsequent picture sample for the set number of times; the under-sampling parameters of the subsequent under-sampling processing correspond to the under-sampling parameters of the prior under-sampling processing one to one;
for each subsequent picture sample, the object track following sub-model determines feature information of a second object detection frame in the subsequent picture sample;
determining difference parameters of a target object in the first object detection frame in the previous picture sample and a target object in the second object detection frame in the subsequent picture sample under the condition that the same under-sampling parameters are respectively determined according to the feature information of the first object detection frame in the previous picture sample and the feature information of the second object detection frame in the subsequent picture sample;
and calculating a difference parameter between the target object in the first object detection frame and the target object in a second object detection frame of a subsequent picture sample according to the difference parameter values corresponding to different undersampling parameters.
6. The linkage tracking method for the pole-mounted unmanned aerial vehicle according to claim 4, wherein before the step of obtaining the previous monitoring picture and the subsequent monitoring picture with different shooting times in the area monitoring picture, the method further comprises a step of training to obtain the similar object track following model, and the step comprises:
training an object position determination submodel in the initial similar object track following model to obtain submodel parameters of the corresponding object position determination submodel;
acquiring a training sample pair, wherein the training sample pair comprises a first sample pair and a second sample pair, the first sample pair comprises a first training sample image which is the same as a target object and the labeling similarity of the first training sample image, and the second sample pair comprises a second training sample image which is different from the target object and the labeling similarity of the second training sample image;
inputting the training sample pair into a preliminarily trained similar object track following model, and calculating a loss function value of the preliminarily trained similar object track following model according to the output similarity and the labeling similarity, wherein the preliminarily trained similar object track following model comprises the determined sub-model parameters of the object position determination sub-model;
and when the loss function value is smaller than a preset threshold value, determining the corresponding parameter as the parameter corresponding to the similar object track following model to obtain the similar object track following model.
7. The linkage tracking method for the pole-mounted unmanned aerial vehicle according to claim 1, wherein the step of obtaining the direction and the moving speed of the target object leaving the monitoring area where the intelligent lamp pole is located, and generating control parameters according to the direction and the moving speed of the target object leaving the monitoring area where the intelligent lamp pole is located to control the pole-mounted unmanned aerial vehicle to track the target object comprises the following steps:
according to the coordinate system determined by the intelligent lamp pole and the azimuth information, the direction of the target object leaving the monitoring area where the intelligent lamp pole is located is obtained, and the movement speed of the target object leaving the monitoring area where the intelligent lamp pole is located is determined and obtained according to the movement information of the target object;
calculating control parameters for controlling the pole-mounted unmanned aerial vehicle to track the target object according to the direction and the movement speed of the target object leaving the monitoring area where the intelligent lamp pole is located and the current position and the movement state of the pole-mounted unmanned aerial vehicle in the coordinate system, wherein the control parameters comprise a parameter for controlling the lifting height of the pole-mounted unmanned aerial vehicle, a parameter for controlling the movement direction of the pole-mounted unmanned aerial vehicle and a parameter for controlling the movement speed of the pole-mounted unmanned aerial vehicle;
the parameter used for controlling the movement direction of the pole-mounted unmanned aerial vehicle is calculated according to the direction of the target object leaving the monitoring area where the intelligent lamp pole is located and the current position of the pole-mounted unmanned aerial vehicle in the coordinate system, the parameter used for controlling the lifting height of the pole-mounted unmanned aerial vehicle is calculated according to the environmental parameter of the position where the intelligent lamp pole is located and the size of the target object, and the parameter used for controlling the movement speed of the pole-mounted unmanned aerial vehicle is calculated according to the speed of the target object leaving the intelligent lamp pole and the parameter for controlling the lifting height of the pole-mounted unmanned aerial vehicle;
and sending the control parameters to the pole-mounted unmanned aerial vehicle to control the pole-mounted unmanned aerial vehicle to track the target object.
8. The linkage tracking method for the pole-mounted unmanned aerial vehicle according to claim 7, wherein after the step of sending the control parameters to the pole-mounted unmanned aerial vehicle to control the pole-mounted unmanned aerial vehicle to track the target object, the method further comprises:
acquiring monitoring video information fed back by the pole-mounted unmanned aerial vehicle;
carrying out object identification analysis on the monitoring video information to obtain object characteristic information, and comparing the object characteristic information with the object characteristic information of the target object, wherein the object characteristic information of the target object is identified from an image picture shot by the camera equipment by the cloud platform;
when the object characteristic information is consistent with the object characteristic information of the target object, performing object analysis on the target object in the monitoring video information to obtain the current motion information of the target object;
and generating a control parameter of the pole-mounted unmanned aerial vehicle according to the current motion information, and sending the control parameter of the pole-mounted unmanned aerial vehicle to the pole-mounted unmanned aerial vehicle so that the pole-mounted unmanned aerial vehicle tracks the target object.
9. A linkage tracking system for a pole-mounted unmanned aerial vehicle, characterized in that the system is applied to a cloud platform which communicates with a camera device in a smart lamp pole and with the pole-mounted unmanned aerial vehicle, the system comprising:
the acquisition module is used for acquiring the image frames around the intelligent lamp post shot by the camera equipment in real time and the image shooting angle of the camera equipment;
the determining module is used for obtaining an area monitoring picture of a monitoring area where the intelligent lamp post is located according to the received image picture of the periphery of the intelligent lamp post and the image shooting angle of the camera equipment;
the detection and analysis module is used for detecting whether the area monitoring picture comprises a preset target object or not, and if the area monitoring picture is detected to have the target object, analyzing the target object in the area monitoring picture to obtain motion information, wherein the motion information comprises a motion track and a motion speed;
the control module is used for judging whether the target object leaves the monitoring area where the intelligent lamp pole is located or not according to the motion information of the target object, if so, acquiring the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp pole is located, and generating control parameters according to the direction and the motion speed of the target object leaving the monitoring area where the intelligent lamp pole is located so as to control the pole-mounted unmanned aerial vehicle to track the target object.
10. A cloud platform, characterized in that the cloud platform comprises a processor, a computer-readable storage medium and a communication unit, the computer-readable storage medium, the communication unit and the processor are connected through a bus interface, the communication unit is used for being in communication connection with a camera device and a pole-mounted unmanned aerial vehicle, the computer-readable storage medium is used for storing programs, instructions or codes, and the processor is used for executing the programs, instructions or codes in the computer-readable storage medium to execute the pole-mounted unmanned aerial vehicle linkage tracking method according to any one of claims 1 to 8.
CN202210900821.5A 2022-07-28 2022-07-28 Unmanned aerial vehicle linkage tracking method and system based on pole loading and cloud platform Active CN114979497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210900821.5A CN114979497B (en) 2022-07-28 2022-07-28 Unmanned aerial vehicle linkage tracking method and system based on pole loading and cloud platform

Publications (2)

Publication Number Publication Date
CN114979497A (en) 2022-08-30
CN114979497B CN114979497B (en) 2022-11-08

Family

ID=82969840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210900821.5A Active CN114979497B (en) 2022-07-28 2022-07-28 Unmanned aerial vehicle linkage tracking method and system based on pole loading and cloud platform

Country Status (1)

Country Link
CN (1) CN114979497B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012368A (en) * 2023-02-16 2023-04-25 江西惜能照明有限公司 Security monitoring method and system based on intelligent lamp post, storage medium and computer

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011009893A (en) * 2009-06-24 2011-01-13 Nec Corp Follow object detecting apparatus, follow object detection method and follow object detection program
CN106991396A (en) * 2017-04-01 2017-07-28 南京云创大数据科技股份有限公司 A kind of target relay track algorithm based on wisdom street lamp companion
CN109996039A (en) * 2019-04-04 2019-07-09 中南大学 A kind of target tracking method and device based on edge calculations
CN209805977U (en) * 2019-06-24 2019-12-17 厦门日华科技股份有限公司 wisdom city monitored control system
CN111405242A (en) * 2020-02-26 2020-07-10 北京大学(天津滨海)新一代信息技术研究院 Ground camera and sky moving unmanned aerial vehicle linkage analysis method and system
CN111523397A (en) * 2020-03-31 2020-08-11 深圳市奥拓电子股份有限公司 Intelligent lamp pole visual identification device, method and system and electronic equipment
CN113053105A (en) * 2021-02-26 2021-06-29 吴江市腾凯通信工程有限公司 Multi-component intelligent monitoring system for urban road
CN114785951A (en) * 2022-04-19 2022-07-22 大庆安瑞达科技开发有限公司 Positioning and tracking method based on linkage of high tower monitoring equipment and unmanned aerial vehicle

Also Published As

Publication number Publication date
CN114979497B (en) 2022-11-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant