CN113516106A - Unmanned aerial vehicle intelligent vehicle identification method and system based on city management


Info

Publication number
CN113516106A
CN113516106A
Authority
CN
China
Prior art keywords
vehicle
feature
target
description
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111051756.5A
Other languages
Chinese (zh)
Other versions
CN113516106B
Inventor
杨翰翔
杨德润
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lianhe Intelligent Technology Co ltd
Original Assignee
Shenzhen Lianhe Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lianhe Intelligent Technology Co ltd
Priority to CN202111051756.5A
Publication of CN113516106A
Application granted
Publication of CN113516106B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides an unmanned aerial vehicle intelligent vehicle identification method and system based on city management. First vehicle description information of a target vehicle, in which the index features under part of the target feature description dimensions are missing, is sent to each unmanned aerial vehicle in a set monitoring area; each unmanned aerial vehicle performs preliminary identification of the target vehicle based on the first vehicle description information and feeds the preliminary identification result back to the unmanned aerial vehicle control center; the control center then performs secondary identification according to the preliminary identification result to accurately identify the target vehicle and obtain second vehicle description information of the target vehicle without the missing features. Therefore, the corresponding target unmanned aerial vehicle can be determined to perform real-time tracking identification and positioning on the target vehicle according to the second vehicle description information.

Description

Unmanned aerial vehicle intelligent vehicle identification method and system based on city management
Technical Field
The invention relates to the technical field of smart cities and monitoring, and in particular to an unmanned aerial vehicle intelligent vehicle identification method and system based on city management.
Background
Unmanned Aerial Vehicles (UAVs) are also known as drones. With the rapid development of unmanned flight technology, consumer unmanned aerial vehicles are widely used across industries to perform corresponding work in place of people.
Further, as the construction of smart cities continues to accelerate, the application of unmanned aerial vehicles in the smart city field (such as smart city management) has also become widespread. For example, unmanned aerial vehicles are used in fields such as smart city traffic control and command, automated food delivery, and smart city logistics, greatly facilitating people's daily work and life while making cities increasingly "intelligent".
However, in the process of city management based on smart city unmanned aerial vehicle applications, for example city management based on road traffic, it is often necessary to monitor target vehicles in specific situations, such as monitoring, tracking and locating suspect vehicles, hit-and-run vehicles, or vehicles in need of emergency assistance, so as to help complete the corresponding tasks. In practical applications, however, when some important features of the target vehicle (such as the number plate or vehicle type) are unknown or missing at the beginning of monitoring, it is difficult to accurately identify and locate the target vehicle. Therefore, how to realize automatic and accurate identification and positioning of a target vehicle when certain features are unknown is a key problem that urgently needs to be solved.
Disclosure of Invention
In order to solve the above problem, an object of an embodiment of the present invention is to provide an unmanned aerial vehicle intelligent vehicle identification method based on city management, which is applied to an unmanned aerial vehicle control center, and the method includes:
receiving first vehicle description information of a target vehicle, and sending a vehicle identification instruction to each unmanned aerial vehicle in a set monitoring area according to the first vehicle description information, so that each unmanned aerial vehicle performs preliminary identification of the target vehicle in its corresponding area, wherein the first vehicle description information comprises a first vehicle description index, and the first vehicle description index lacks the index feature corresponding to at least one target feature description dimension among a plurality of preset target feature description dimensions of the target vehicle;
receiving a preliminary identification result fed back by each unmanned aerial vehicle in the preliminary identification process, and obtaining second vehicle description information of the target vehicle according to the preliminary identification result, wherein the preliminary identification result comprises at least one monitoring video picture whose matching degree with the first vehicle description information reaches a first preset matching degree;
determining at least one unmanned aerial vehicle in the set monitoring area as a target unmanned aerial vehicle according to the second vehicle description information;
and sending a tracking and monitoring instruction to the target unmanned aerial vehicle according to the second vehicle description information, so that the target unmanned aerial vehicle carries out real-time tracking, monitoring and positioning on the target vehicle.
In view of the above, the obtaining second vehicle description information of the target vehicle according to the preliminary identification result includes:
acquiring, from the preliminary identification result fed back by each unmanned aerial vehicle, the monitoring video pictures whose matching degree with the first vehicle description information reaches the first preset matching degree;
calling a monitoring end vehicle identification model for each monitoring video picture, and extracting vehicle feature information under each feature description dimension from the monitoring video picture through convolution network layers which are included in the monitoring end vehicle identification model and respectively correspond to a plurality of feature description dimensions;
performing feature conversion on the vehicle feature information under each feature description dimension through a feature conversion layer included by the monitoring end vehicle identification model to obtain an index feature corresponding to the vehicle feature information under each feature description dimension;
calculating, through a result output layer included in the monitoring end vehicle identification model, the matching degree of the monitoring video picture with the first vehicle description information according to the index features corresponding to the vehicle feature information under each feature description dimension and the first vehicle description index corresponding to the first vehicle description information, and determining the monitoring video picture as a target picture if the matching degree reaches a second preset matching degree, wherein the second preset matching degree is greater than the first preset matching degree;
acquiring a preset global feature description dimension sequence, wherein the global feature description dimension sequence comprises a plurality of target feature description dimensions for the target vehicle;
determining a missing feature description dimension for the target vehicle from the sequence of global feature description dimensions and a first vehicle description index of the first vehicle description information;
acquiring vehicle missing feature information under the missing feature description dimension from the target picture, and optimizing the first vehicle description information according to the vehicle missing feature information to obtain second vehicle description information;
the determining, according to the second vehicle description information, that at least one unmanned aerial vehicle in the set monitoring area is a target unmanned aerial vehicle includes:
and determining at least one unmanned aerial vehicle feeding back the at least one target monitoring picture as the target unmanned aerial vehicle according to the at least one target monitoring picture corresponding to the vehicle missing characteristic information in the second vehicle description information.
Based on the above purpose, the method further comprises a step of performing model training on the monitoring end vehicle recognition model, wherein the step comprises:
acquiring a training sample library, wherein the training sample library comprises a plurality of sample vehicle monitoring pictures carrying calibrated vehicle description indexes;
acquiring a predetermined neural network model, wherein the neural network model comprises a convolution network layer, a feature conversion layer and a result output layer;
for each sample vehicle monitoring picture, obtaining vehicle feature information of the sample vehicle monitoring picture under a plurality of target feature description dimensions through the convolution network layer;
performing feature conversion on the vehicle feature information under each target feature description dimension through the feature conversion layer to obtain an index feature corresponding to the vehicle feature information under each target feature description dimension;
obtaining a training vehicle description index according to the index features corresponding to the vehicle feature information under the target feature description dimension through the result output layer;
calculating to obtain a first loss function value according to the training vehicle description index and the calibration vehicle description index;
iteratively optimizing the model parameters of the neural network model according to the first loss function value until the first loss function value meets a first convergence condition, and obtaining a trained neural network model as the monitoring end vehicle recognition model; the first loss function value is obtained by calculating a first matching degree of each index feature in each training vehicle description index and each index feature corresponding to the calibration vehicle description index, and the first convergence condition includes that the first matching degree represented by the first loss function value reaches a first preset matching degree threshold value.
Based on the above object, the method further comprises:
acquiring a training sample library, wherein the training sample library comprises a plurality of sample vehicle monitoring pictures carrying calibrated vehicle description indexes;
obtaining a predetermined neural network model, and carrying out model compression processing on the neural network model to obtain a compressed neural network model;
for each sample vehicle monitoring picture, obtaining vehicle feature information of the sample vehicle monitoring picture under a plurality of target feature description dimensions through the neural network model, and calculating to obtain a second loss function value according to the vehicle feature information under the target feature description dimensions and index features included in the calibrated vehicle description index;
iteratively optimizing the model parameters of the compressed neural network model according to the second loss function value until the second loss function value meets a second convergence condition, and obtaining a trained neural network model as an airborne vehicle identification model; the second loss function value is obtained by calculating a second matching degree of the vehicle feature information under each target feature description dimension and each corresponding index feature in the calibrated vehicle description index, the second convergence condition includes that the second matching degree represented by the second loss function value reaches a second preset matching degree threshold value, and the second preset matching degree threshold value is smaller than the first preset matching degree threshold value;
and issuing the airborne vehicle identification model to each unmanned aerial vehicle, so that each unmanned aerial vehicle performs target vehicle identification on the vehicles in the set monitoring area according to the airborne vehicle identification model and feeds back the preliminary identification result to the unmanned aerial vehicle control center.
Based on the above object, the obtaining of the training sample library includes:
obtaining vehicle monitoring pictures in a set scene through a plurality of unmanned aerial vehicles to obtain a plurality of vehicle monitoring pictures;
storing each vehicle monitoring picture as a vehicle monitoring picture sample into a preset sample database;
extracting vehicle feature information of each vehicle monitoring picture sample in the sample database under a plurality of target feature description dimensions to obtain a feature extraction result corresponding to each vehicle monitoring picture sample;
according to the feature extraction result corresponding to each vehicle monitoring picture sample, sample filtering is carried out on the vehicle monitoring picture samples in the sample database to obtain a filtered sample database;
obtaining a calibration vehicle description index corresponding to each vehicle monitoring picture sample according to a feature extraction result corresponding to each vehicle monitoring picture sample in a filtered sample database, and performing associated storage on the vehicle description index and the vehicle monitoring picture sample in the sample database to obtain a training sample database;
according to the feature extraction result corresponding to each vehicle monitoring picture sample, sample filtering is carried out on the vehicle monitoring picture samples in the sample database, and the obtained filtered sample database comprises the following steps:
determining whether feature loss exists in the feature extraction result corresponding to each vehicle monitoring picture sample;
if the characteristics are missing, deleting the vehicle monitoring picture sample from the training sample library;
the characteristic missing comprises that the characteristic extraction result corresponding to the vehicle monitoring picture sample lacks vehicle characteristic information under a predetermined target characteristic description dimension or lacks vehicle characteristic information under a preset number of target characteristic description dimensions.
Based on the above purpose, the obtaining of the training sample library further includes:
copying a part of vehicle monitoring picture samples in the training sample library as samples to be processed;
performing feature fuzzy processing on index features corresponding to at least one target feature description dimension in the vehicle description index corresponding to the sample to be processed, wherein the feature fuzzy processing comprises replacing the corresponding index features with set fuzzy features, or deleting the corresponding index features;
adding the to-be-processed sample after the characteristic fuzzy processing as an extended training sample into the training sample library, and performing sample disordering processing on the training sample library added with the extended training sample to obtain an optimized training sample library;
the method for processing the vehicle description indexes comprises the following steps of:
adding the sample to be processed into a pre-established sample sequence to obtain a sample sequence to be processed;
determining the number of samples required for feature fuzzy processing of each target feature description dimension;
acquiring a corresponding number of samples to be processed from the sample sequence to be processed according to the number of samples required to be subjected to feature fuzzy processing by the ith target feature description dimension, and performing feature fuzzy processing on the index feature corresponding to the ith target feature description dimension corresponding to the samples to be processed to obtain an ith extended sample sequence; wherein i is a natural number which is more than or equal to 1 and less than or equal to N;
adding the to-be-processed sample after the characteristic fuzzy processing into the training sample library as an extended training sample, and performing sample disordering processing on the training sample library added with the extended training sample to obtain an optimized training sample library, including:
and sequentially adding the obtained ith expansion sample sequence into the training sample library, and performing sample disordering treatment on the training sample library after the Nth expansion sample sequence is added into the training sample library.
Based on the above purpose, the performing feature fuzzy processing on the index feature corresponding to at least one target feature description dimension in the vehicle description index corresponding to the to-be-processed sample to obtain a training sample library further includes:
determining at least one preset dimension combination which is obtained by combining at least two target feature description dimensions and corresponds to the feature fuzzy processing;
and aiming at each dimension combination, acquiring at least one corresponding sample to be processed from the samples to be processed, and performing multi-feature fuzzy processing on the index features, corresponding to the target feature description dimensions, in the acquired samples to be processed.
Based on the purpose, the first vehicle description information further includes first time-space domain information for the target vehicle, and the preliminary identification result further includes second time-space domain information corresponding to the monitoring video pictures fed back by each unmanned aerial vehicle; the receiving of the preliminary identification result fed back by each unmanned aerial vehicle in the preliminary identification process and the obtaining of the second vehicle description information of the target vehicle according to the preliminary identification result include:
performing confidence decision on each unmanned aerial vehicle according to the first time-space domain information and the second time-space domain information to obtain confidence parameters between each unmanned aerial vehicle and the target vehicle;
filtering the monitoring video pictures fed back by the unmanned aerial vehicle with the corresponding confidence coefficient parameters smaller than a preset confidence coefficient threshold value to obtain a monitoring picture sequence to be decided;
obtaining second vehicle description information of the target vehicle according to each monitoring video picture in the monitoring picture sequence to be decided;
wherein the performing a confidence decision for each of the unmanned aerial vehicles according to the first time-space domain information and the second time-space domain information comprises:
acquiring first time information and first position information corresponding to the target vehicle according to the first time-space domain information;
acquiring second time information and second position information corresponding to the monitoring video pictures fed back by the unmanned aerial vehicles according to the second time-space domain information;
calling an electronic map corresponding to the set monitoring area, and determining a feasible path and corresponding prediction time for the target vehicle to reach a second position corresponding to the second position information from a first position corresponding to the first position information according to the electronic map;
and determining confidence coefficient parameters corresponding to the target vehicle and each unmanned aerial vehicle according to the predicted time and the time interval of the first time information and the second time information.
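As a minimal sketch of the confidence decision just described, the snippet below compares the electronic-map travel-time prediction with the actual time interval; the scoring formula, field names, and the 0.5 threshold are assumptions for illustration, grounded only in the requirement that confidence be determined from the predicted time and the time interval:

```python
def confidence_parameter(first_time, first_pos, second_time, second_pos,
                         predict_travel_time):
    """Toy confidence decision: the closer the actual time interval is to the
    feasible-path travel time predicted from the electronic map, the higher
    the confidence (the exact scoring function is an assumption)."""
    predicted = predict_travel_time(first_pos, second_pos)   # seconds
    interval = abs(second_time - first_time)
    return 1.0 / (1.0 + abs(interval - predicted) / max(predicted, 1.0))

def filter_by_confidence(feedbacks, first_info, predict_travel_time,
                         confidence_threshold=0.5):
    """Drop surveillance pictures fed back by drones whose confidence
    parameter falls below the preset confidence threshold."""
    return [fb for fb in feedbacks
            if confidence_parameter(first_info["time"], first_info["pos"],
                                    fb["time"], fb["pos"],
                                    predict_travel_time) >= confidence_threshold]
```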
Another object of the present invention is to provide an unmanned aerial vehicle intelligent vehicle identification system based on city management, which is applied to an unmanned aerial vehicle control center, and the system includes:
the first identification module is used for receiving first vehicle description information of a target vehicle and sending a vehicle identification instruction to each unmanned aerial vehicle in a set monitoring area according to the first vehicle description information, so that each unmanned aerial vehicle performs preliminary identification of the target vehicle in its corresponding area, wherein the first vehicle description information comprises a first vehicle description index;
the second identification module is used for receiving a preliminary identification result fed back by each unmanned aerial vehicle in the preliminary identification process and obtaining second vehicle description information of the target vehicle according to the preliminary identification result, wherein the preliminary identification result comprises at least one monitoring video picture whose matching degree with the first vehicle description information reaches a first preset matching degree;
the target determining module is used for determining at least one unmanned aerial vehicle in the set monitoring area as a target unmanned aerial vehicle according to the second vehicle description information; and
and the tracking and monitoring module is used for sending a tracking and monitoring instruction to the target unmanned aerial vehicle according to the second vehicle description information so that the target unmanned aerial vehicle can perform real-time tracking, monitoring and positioning on the target vehicle.
Based on the above purpose, the second identification module is specifically configured to:
acquiring, from the preliminary identification result fed back by each unmanned aerial vehicle, the monitoring video pictures whose matching degree with the first vehicle description information reaches the first preset matching degree;
calling a monitoring end vehicle identification model for each monitoring video picture, and extracting vehicle feature information under each feature description dimension from the monitoring video picture through convolution network layers which are included in the monitoring end vehicle identification model and respectively correspond to a plurality of feature description dimensions;
performing feature conversion on the vehicle feature information under each feature description dimension through a feature conversion layer included by the monitoring end vehicle identification model to obtain an index feature corresponding to the vehicle feature information under each feature description dimension;
calculating, through a result output layer included in the monitoring end vehicle identification model, the matching degree of the monitoring video picture with the first vehicle description information according to the index features corresponding to the vehicle feature information under each feature description dimension and the first vehicle description index corresponding to the first vehicle description information, and determining the monitoring video picture as a target picture if the matching degree reaches a second preset matching degree, wherein the second preset matching degree is greater than the first preset matching degree;
acquiring a preset global feature description dimension sequence, wherein the global feature description dimension sequence comprises a plurality of target feature description dimensions for the target vehicle;
determining a missing feature description dimension for the target vehicle from the sequence of global feature description dimensions and a first vehicle description index of the first vehicle description information;
acquiring vehicle missing feature information under the missing feature description dimension from the target picture, and optimizing the first vehicle description information according to the vehicle missing feature information to obtain second vehicle description information;
the target determination module is specifically configured to: and determining the unmanned aerial vehicle feeding back the target monitoring picture as the target unmanned aerial vehicle according to the target monitoring picture corresponding to the vehicle missing characteristic information in the second vehicle description information.
It is still another object of an embodiment of the present invention to provide a drone control center, including a drone smart vehicle recognition system, a processor, and a machine-readable storage medium, wherein the drone smart vehicle recognition system includes one or more software functional modules, programs, or instructions stored in the machine-readable storage medium, the machine-readable storage medium is connected to the processor, and the processor is configured to execute the one or more software functional modules, programs, or instructions so as to implement the drone smart vehicle recognition method based on city management.
In summary, in the unmanned aerial vehicle intelligent vehicle identification method and system based on city management provided by the embodiments of the present invention, first vehicle description information of a target vehicle is first received, and a vehicle identification instruction is sent to each unmanned aerial vehicle in a set monitoring area according to the first vehicle description information, so that each unmanned aerial vehicle performs preliminary identification of the target vehicle in its corresponding area. The preliminary identification results fed back by each unmanned aerial vehicle during the preliminary identification process are then received, and second vehicle description information of the target vehicle is obtained according to the preliminary identification results. Next, at least one unmanned aerial vehicle in the set monitoring area is determined as a target unmanned aerial vehicle according to the second vehicle description information. Finally, a tracking monitoring instruction is sent to the target unmanned aerial vehicle according to the second vehicle description information, so that the target unmanned aerial vehicle performs real-time tracking, monitoring and positioning of the target vehicle. In this way, the first vehicle description information of the target vehicle, with part of its features missing, can be sent to all unmanned aerial vehicles; the unmanned aerial vehicles perform preliminary identification of the target vehicle based on the first vehicle description information and feed the preliminary identification results back to the unmanned aerial vehicle control center; the control center then performs secondary identification according to the preliminary identification results, accurately identifies the target vehicle, and obtains the second vehicle description information of the target vehicle (without the missing features). The corresponding target unmanned aerial vehicle can therefore be determined according to the second vehicle description information to perform real-time tracking identification and positioning of the target vehicle.
In addition, in this embodiment, vehicle identification may be performed on the target vehicle in an artificial intelligence manner. In detail, an on-board vehicle identification model deployed on each unmanned aerial vehicle can be used to perform the preliminary identification of the target vehicle, while a monitoring end vehicle identification model deployed at the unmanned aerial vehicle control center performs accurate secondary identification on the monitoring video pictures included in the preliminary identification results fed back by the unmanned aerial vehicles, so as to determine the target vehicle that needs to be tracked, monitored and identified. The on-board vehicle identification model may be a scaled-down version of the monitoring end vehicle identification model, so as to be suitable for running on an unmanned aerial vehicle. For example, the on-board vehicle identification model may be obtained by performing model compression training on the basis of the original network model of the monitoring end vehicle identification model. In this way, accurate identification and tracking monitoring of the target vehicle can further be realized by means of artificial intelligence models.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of an unmanned aerial vehicle intelligent vehicle identification method based on city management according to an embodiment of the present invention.
Fig. 2 is a schematic application environment diagram of the unmanned aerial vehicle intelligent vehicle identification method based on city management according to the embodiment of the invention.
Fig. 3 is a schematic structural diagram of an unmanned aerial vehicle control center provided in an embodiment of the present invention.
Fig. 4 is a schematic diagram of an unmanned aerial vehicle intelligent vehicle identification system based on city management according to an embodiment of the present invention.
Detailed Description
Referring to fig. 1, fig. 1 is a schematic flowchart of an unmanned aerial vehicle intelligent vehicle identification method based on city management according to an embodiment of the present invention. In the embodiment of the present invention, as shown in fig. 2, the method may be executed and implemented by the drone control center 100 for managing and scheduling the drones. In this embodiment, the drone control center 100 may be a service platform that is set up based on a smart city and is used to remotely communicate with a plurality of drones 200 in a preset control area to remotely control and schedule the drones 200. By way of example, the drone control center 100 may be, but is not limited to, a server, a computer device, a cloud service center, a machine room control center, a cloud platform, and the like, which have communication control capability and big data analysis capability.
The above method is described in detail below, and in the present embodiment, the method includes the steps of S100 to S400 described below.
Step S100, receiving first vehicle description information of a target vehicle, and sending a vehicle identification instruction to each unmanned aerial vehicle in a set monitoring area according to the first vehicle description information, so that each unmanned aerial vehicle performs preliminary identification of the target vehicle in its corresponding area.
In this embodiment, the first vehicle description information includes a first vehicle description index, and the first vehicle description index lacks the index feature corresponding to at least one of a plurality of preset target feature description dimensions of the target vehicle. For example, the plurality of target feature description dimensions may include, but are not limited to, dimensions such as appearance color, vehicle type, vehicle behavior, vehicle unique identifier, driver features, and on-board occupant features, which may form a global feature description dimension sequence for describing the global features of the target vehicle. The first vehicle description information may be generated by the unmanned aerial vehicle monitoring center from the currently grasped partial information when it first receives an instruction to monitor, track and identify a target vehicle. For example, after description text information of the target vehicle is manually input, the unmanned aerial vehicle monitoring center extracts the required feature text from the description text information and generates the first vehicle description information according to a preset description format, which is not specifically limited.
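For illustration only, the first vehicle description index can be pictured as a per-dimension mapping in which some target feature description dimensions are absent; the dimension names and values below are assumptions, not taken from the patent:

```python
# Hypothetical sketch of a first vehicle description index: a mapping from
# target feature description dimensions to index features, where some
# dimensions (here the unique identifier, e.g. the number plate) are
# missing at the beginning of monitoring.
GLOBAL_FEATURE_DIMENSIONS = [
    "appearance_color", "vehicle_type", "vehicle_behavior",
    "vehicle_unique_id", "driver_features", "occupant_features",
]

first_vehicle_description_index = {
    "appearance_color": (255, 255, 255),     # white body as RGB primaries
    "vehicle_type": "suv",
    "vehicle_behavior": "speeding_northbound",
    # "vehicle_unique_id" is unknown -> a missing feature description dimension
}

missing_dimensions = [d for d in GLOBAL_FEATURE_DIMENSIONS
                      if d not in first_vehicle_description_index]
print(missing_dimensions)  # dimensions to be supplemented by secondary identification
```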
Step S200, receiving the preliminary identification results fed back by each unmanned aerial vehicle in the preliminary identification process, and obtaining second vehicle description information of the target vehicle according to the preliminary identification results.
In this embodiment, the preliminary identification result includes at least one monitoring video picture whose matching degree with the first vehicle description information reaches a first preset matching degree;
step S300, determining at least one unmanned aerial vehicle in the set monitoring area as a target unmanned aerial vehicle according to the second vehicle description information. For example, an unmanned aerial vehicle corresponding to at least one target surveillance video picture involved in obtaining the second vehicle description information may be determined as the target unmanned aerial vehicle.
Step S400, sending a tracking and monitoring instruction to the target unmanned aerial vehicle according to the second vehicle description information, so that the target unmanned aerial vehicle performs real-time tracking, monitoring and positioning of the target vehicle.
In this embodiment, in step S400, when the number of target drones is greater than or equal to 2, there may be a target drone that is mis-tracking (for example, the tracked target is not the target vehicle, or a vehicle similar to the target vehicle is being tracked). To avoid this situation, the drone control center 100 may issue a real-time picture feedback instruction to each target drone so that each target drone feeds back its current real-time monitoring picture, determine from the fed-back real-time monitoring pictures whether any target drone is mis-tracking, and, if so, issue a tracking stop instruction to the mis-tracking target drone to stop the corresponding tracking and monitoring operation.
In summary, in the embodiment of the present invention, the first vehicle description information of the target vehicle, which lacks some features, is sent to each unmanned aerial vehicle; each unmanned aerial vehicle performs preliminary identification of the target vehicle based on the first vehicle description information and feeds the preliminary identification result back to the unmanned aerial vehicle control center; the unmanned aerial vehicle control center then performs secondary identification according to the preliminary identification result, so as to accurately identify the target vehicle and obtain the second vehicle description information of the target vehicle (without the missing features). Therefore, the corresponding target unmanned aerial vehicle can be determined according to the second vehicle description information to perform real-time tracking identification and positioning of the target vehicle.
In this embodiment, vehicle identification may be performed on the target vehicle in an artificial intelligence manner. In detail, an on-board vehicle identification model deployed on each unmanned aerial vehicle can be used to perform the preliminary identification of the target vehicle, while a monitoring end vehicle identification model deployed at the unmanned aerial vehicle control center performs accurate secondary identification on the monitoring video pictures included in the preliminary identification results fed back by the unmanned aerial vehicles, so as to determine the target vehicle that needs to be tracked, monitored and identified. The on-board vehicle identification model may be a scaled-down version of the monitoring end vehicle identification model, so as to be suitable for running on an unmanned aerial vehicle. For example, the on-board vehicle identification model may be obtained by performing model compression and training on the basis of the original network model of the monitoring end vehicle identification model, which will be described in detail later.
The following describes in detail the detailed implementation method of the above steps of the embodiment of the present invention by way of example.
First, with respect to step S200, obtaining second vehicle description information of the target vehicle according to the preliminary recognition result may be achieved by the following exemplary embodiments. In the exemplary embodiment, this may be implemented by reference to an artificial intelligence model, described in detail below.
Step S2001, acquiring, from the preliminary identification result fed back by each unmanned aerial vehicle, the monitoring video pictures whose matching degree with the first vehicle description information reaches the first preset matching degree.
In this embodiment, each unmanned aerial vehicle can perform rough, relatively low-accuracy recognition of the vehicles in its monitoring pictures according to the first vehicle description information, screen out the monitoring video pictures that match the first vehicle description information to a certain extent, and feed them back to the unmanned aerial vehicle monitoring center in real time as the preliminary identification result. In this way, a drone's monitoring task is not affected by resource shortages caused by the large amount of computation that accurate identification would require. At the same time, by using each unmanned aerial vehicle as a preliminary screen at the edge, the subsequent data screening and identification work performed by the unmanned aerial vehicle monitoring center is greatly reduced, which helps improve the efficiency of the overall monitoring and identification.
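A minimal sketch of this edge-side pre-screening, assuming the dictionary-style index representation above and a toy matching metric; the onboard_model interface and the 0.6 threshold are illustrative assumptions:

```python
def matching_degree(index_features, description_index):
    """Toy matching degree: the fraction of dimensions known in the (possibly
    incomplete) description index whose index features agree with the frame."""
    known = [d for d in description_index if d in index_features]
    if not known:
        return 0.0
    hits = sum(index_features[d] == description_index[d] for d in known)
    return hits / len(known)

def preliminary_screening(frames, first_description_index, onboard_model,
                          first_preset_matching_degree=0.6):
    """Coarse, low-cost screening on the drone: keep only monitoring video
    pictures whose matching degree with the first vehicle description reaches
    the first preset matching degree, for feedback to the control center."""
    preliminary_result = []
    for frame in frames:
        index_features = onboard_model(frame)   # per-dimension index features
        degree = matching_degree(index_features, first_description_index)
        if degree >= first_preset_matching_degree:
            preliminary_result.append((frame, degree))
    return preliminary_result
```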
Step S2002, for each surveillance video frame, a surveillance vehicle identification model is called, and vehicle feature information in each feature description dimension is extracted from the surveillance video frame through a convolutional network layer included in the surveillance vehicle identification model and corresponding to each of the feature description dimensions.
Step S2003, performing feature conversion on the vehicle feature information in each feature description dimension through a feature conversion layer included in the monitoring-end vehicle identification model, to obtain an index feature corresponding to the vehicle feature information in each feature description dimension.
Step S2004, calculating, through a result output layer included in the monitoring end vehicle identification model, the matching degree of the monitoring video picture with the first vehicle description information according to the index features corresponding to the vehicle feature information under each feature description dimension and the first vehicle description index corresponding to the first vehicle description information, and determining the monitoring video picture as a target picture if the matching degree reaches a second preset matching degree. The second preset matching degree is greater than the first preset matching degree.
In this embodiment, since the unmanned aerial vehicle monitoring center needs to perform an accurate comparison on the preliminary identification results fed back by the unmanned aerial vehicles, the second preset matching degree is set to be greater than the first preset matching degree, so that the target vehicle can be accurately compared and identified.
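The three-stage secondary identification of steps S2002 to S2004 might be sketched as follows in PyTorch; the branch-per-dimension architecture, tensor shapes, cosine-similarity scoring, and the 0.9 threshold are illustrative assumptions rather than the patent's concrete design:

```python
import torch
import torch.nn as nn

class MonitoringEndVehicleIdModel(nn.Module):
    """Toy stand-in for the monitoring end vehicle identification model: one
    small convolutional branch per feature description dimension, a feature
    conversion layer mapping each branch output to an index feature, and a
    result output stage that scores a frame against a description index."""
    def __init__(self, num_dimensions=6, index_dim=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(num_dimensions)])
        self.feature_conversion = nn.Linear(8, index_dim)

    def forward(self, frame):                        # frame: (B, 3, H, W)
        feats = [self.feature_conversion(b(frame)) for b in self.branches]
        return torch.stack(feats, dim=1)             # (B, num_dimensions, index_dim)

def frame_matching_degree(index_features, description_index, known_mask):
    """Result output: mean cosine similarity over the dimensions actually
    present in the (possibly incomplete) first vehicle description index."""
    sim = torch.cosine_similarity(index_features, description_index, dim=-1)
    return (sim * known_mask).sum(-1) / known_mask.sum(-1).clamp(min=1)

model = MonitoringEndVehicleIdModel()
frame, desc = torch.randn(1, 3, 128, 128), torch.randn(1, 6, 32)
mask = torch.tensor([[1., 1., 1., 0., 1., 1.]])      # one missing dimension
is_target = frame_matching_degree(model(frame), desc, mask) >= 0.9  # second threshold
```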
Step S2005, a preset global feature description dimension sequence is acquired, where the global feature description dimension sequence includes a plurality of target feature description dimensions for the target vehicle.
In this embodiment, the global feature description dimension sequence mentioned here may be the above-mentioned global feature description dimension sequence formed by a plurality of target feature description dimensions, such as appearance color, vehicle type, vehicle behavior, vehicle unique identifier, driver features, and on-board occupant features, used for describing the global features of the target vehicle.
Step S2006, determining a missing feature description dimension for the target vehicle according to the global feature description dimension sequence and the first vehicle description index of the first vehicle description information.
In this embodiment, the missing feature description dimension may be obtained by matching or comparing each index feature in the first vehicle description index in the first vehicle description information with each target feature description dimension in the global feature description dimension sequence, determining which target feature description dimension is missing in the first vehicle description index, and then determining the corresponding target feature description dimension as the missing feature description dimension. Therefore, the missing feature description dimension can be supplemented subsequently, so that the first vehicle description information is optimized to obtain the second vehicle description information.
Step S2007, acquiring the vehicle missing feature information under the missing feature description dimension from the target picture, and optimizing the first vehicle description information according to the vehicle missing feature information to obtain the second vehicle description information.
In this embodiment, after determining the missing feature description dimension, the unmanned aerial vehicle control center may extract the vehicle missing feature information corresponding to the missing feature description dimension from the target picture. For example, the vehicle missing feature information may be converted into a corresponding index feature, and the index feature may be added to the first vehicle description information to obtain the optimized second vehicle description information.
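Continuing the dictionary-style toy representation, steps S2005 to S2007 reduce to determining which global dimensions are absent and filling them in from the target picture; this is a sketch under that assumed representation:

```python
def optimize_description(first_index, target_picture_features, global_dimensions):
    """Supplement the missing feature description dimensions from the target
    picture to obtain the second vehicle description index (a toy sketch:
    the picture's per-dimension features are assumed already extracted and
    converted into index features)."""
    second_index = dict(first_index)
    for dim in global_dimensions:
        if dim not in second_index and dim in target_picture_features:
            second_index[dim] = target_picture_features[dim]
    return second_index

# Hypothetical usage: fill in a number plate recognized from a target picture.
# optimize_description(first_vehicle_description_index,
#                      {"vehicle_unique_id": "B-12345"},
#                      GLOBAL_FEATURE_DIMENSIONS)
```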
Based on the above, in step S300, the determining, according to the second vehicle description information, at least one drone in the set monitoring area as a target drone may include:
and determining at least one unmanned aerial vehicle feeding back the at least one target monitoring picture as the target unmanned aerial vehicle according to the at least one target monitoring picture corresponding to the vehicle missing characteristic information in the second vehicle description information. For example, the target unmanned aerial vehicle is determined according to the unmanned aerial vehicle identification information carried by the target monitoring picture.
In the above, the target vehicle is accurately identified through an artificial intelligence model (the monitoring end vehicle identification model). The embodiment of the invention further provides an independently implemented model training method for the monitoring end vehicle identification model, exemplarily described as follows.
(1) Acquiring a training sample library. In this embodiment, the training sample library includes a plurality of sample vehicle monitoring pictures carrying calibrated vehicle description indexes. The calibrated vehicle description index may be a vehicle description index calibrated in advance, by machine or manually, according to the vehicle information in the sample vehicle monitoring picture, and may specifically include vehicle index features corresponding to a plurality of target feature description dimensions. In this embodiment, a description index may refer to a data carrier in a preset index format that contains index features and is used for carrying or recording the various data contents or data features required by the embodiment of the present invention.
(2) Obtaining a predetermined neural network model, wherein the neural network model comprises a convolution network layer, a feature conversion layer and a result output layer.
In this embodiment, before the model training, a neural network model may be predetermined according to the needs of the actual application scenario, for example a convolutional neural network, a recurrent convolutional neural network, a residual neural network, and the like; the specific choice is not limited.
(3) And for each sample vehicle monitoring picture, obtaining vehicle feature information of the sample vehicle monitoring picture under a plurality of target feature description dimensions through the convolution network layer.
In this embodiment, the vehicle feature information under each target feature description dimension may be sequentially extracted for the sample vehicle monitoring screen by the convolutional network layer. Alternatively, the convolutional network layer may include a plurality of convolutional kernels, each convolutional kernel being used for correspondingly extracting the vehicle feature information in at least one target feature description dimension.
(4) And performing feature conversion on the vehicle feature information under each target feature description dimension through the feature conversion layer to obtain an index feature corresponding to the vehicle feature information under each target feature description dimension.
Alternatively, in this embodiment, the vehicle characteristic information may be subjected to characteristic conversion in a preset characteristic conversion manner, for example, may be converted into a digitized characteristic. Taking the body color as an example, the body color can be converted into three primary color values between 0 and 255 to represent the corresponding body color.
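A toy version of this body-color feature conversion; the color table values are illustrative, not from the patent:

```python
# Toy feature conversion for the body-color dimension: map a recognized color
# label to a digitized index feature, here three primary-color values in 0..255.
COLOR_TO_RGB = {"white": (255, 255, 255), "black": (0, 0, 0), "red": (220, 20, 60)}

def convert_color_feature(color_label: str) -> tuple:
    return COLOR_TO_RGB.get(color_label, (128, 128, 128))  # unknown -> neutral gray
```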
(5) Obtaining, through the result output layer, a training vehicle description index according to the index features corresponding to the vehicle feature information under each target feature description dimension. For example, the training vehicle description index may be obtained by expressing the index features corresponding to the vehicle feature information under each target feature description dimension in a preset information index expression manner.
(6) And calculating to obtain a first loss function value according to the training vehicle description index and the calibration vehicle description index.
In this embodiment, for example, the first loss function value may be calculated according to a matching degree between an index feature corresponding to each target feature description dimension in the training vehicle description index and a calibration index feature corresponding to each target feature description dimension in the calibration vehicle description index. In other words, in this embodiment, the loss function value may be used to represent a matching degree between an index feature corresponding to each target feature description dimension in the training vehicle description index and a calibration index feature corresponding to each target feature description dimension in the calibration vehicle description index.
(7) And performing iterative optimization on the model parameters of the neural network model according to the first loss function value until the first loss function value meets a first convergence condition, and obtaining the trained neural network model as the monitoring end vehicle identification model.
In this embodiment, the first loss function value is obtained by calculating a first matching degree between each index feature in each training vehicle description index and each corresponding index feature in the calibration vehicle description index, and the first convergence condition includes that the first matching degree represented by the first loss function value reaches a first preset matching degree threshold.
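A compact sketch of this iterative optimization, assuming the loss is literally one minus the mean cosine matching degree between training and calibrated index features; the patent fixes only the convergence criterion, not this exact formulation:

```python
import torch

def train_until_matching(model, loader, optimizer, preset_matching_threshold):
    """Iterative optimization until the convergence condition holds: the
    matching degree represented by the loss reaches the preset matching
    degree threshold (loss formulation assumed, see lead-in)."""
    while True:
        for frames, calibrated_index in loader:
            optimizer.zero_grad()
            training_index = model(frames)                   # (B, D, index_dim)
            match = torch.cosine_similarity(training_index,
                                            calibrated_index, dim=-1).mean()
            loss = 1.0 - match
            loss.backward()
            optimizer.step()
            if match.item() >= preset_matching_threshold:    # convergence condition
                return model
```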
Further, in a possible implementation manner, in order to facilitate each unmanned aerial vehicle to perform preliminary monitoring and identification on a target vehicle, an embodiment of the present invention further provides a method for training and issuing an airborne vehicle identification model used at an unmanned aerial vehicle end, which is described in detail below.
(11) And acquiring a training sample library, wherein the training sample library comprises a plurality of sample vehicle monitoring pictures carrying calibrated vehicle description indexes. In this embodiment, the training sample library may be the same as a sample library used for training the monitoring-end vehicle recognition model.
(12) And obtaining a predetermined neural network model, and carrying out model compression processing on the neural network model to obtain a compressed neural network model.
In one possible implementation, the model compression process may be implemented, for example, by parameter pruning and sharing, low-rank decomposition, transfer/compact convolutional filters, or knowledge refinement (knowledge distillation). After the model compression processing, the AI model obtained by the subsequent training is more suitable for running at the unmanned aerial vehicle end.
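As one hedged example of such a compression pass, the sketch below applies L1 magnitude pruning with torch.nn.utils.prune; the patent lists pruning only as one option among several, and the 50% amount is an arbitrary illustration:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def compress_model(model, amount=0.5):
    """One possible compression pass: prune the smallest-magnitude weights in
    every convolutional and linear layer, then make the pruning permanent.
    The compressed copy is subsequently retrained as the airborne vehicle
    identification model."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")   # bake pruned weights into the module
    return model
```

The compressed copy can then be passed through the same training loop sketched earlier, but with the lower second preset matching degree threshold as its convergence condition.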
(13) And for each sample vehicle monitoring picture, obtaining vehicle feature information of the sample vehicle monitoring picture under a plurality of target feature description dimensions through the neural network model, and calculating to obtain a second loss function value according to the vehicle feature information under the target feature description dimensions and index features included in the calibrated vehicle description index.
In this embodiment, for example, the second loss function value may be calculated according to a matching degree between an index feature corresponding to each target feature description dimension in the training vehicle description index and a calibration index feature corresponding to each target feature description dimension in the calibration vehicle description index. In other words, in this embodiment, the second loss function value may be used to characterize a matching degree between the index feature corresponding to each target feature description dimension in the training vehicle description index and the calibration index feature corresponding to each target feature description dimension in the calibration vehicle description index.
(14) And performing iterative optimization on the model parameters of the compressed neural network model according to the second loss function value until the second loss function value meets a second convergence condition, and obtaining a trained neural network model as an airborne vehicle identification model.
In this embodiment, the second loss function value is obtained by calculating a second matching degree of the vehicle feature information under each target feature description dimension and each corresponding index feature in the calibrated vehicle description index, where the second convergence condition includes that the second matching degree represented by the second loss function value reaches a second preset matching degree threshold, and the second preset matching degree threshold is smaller than the first preset matching degree threshold.
(15) And issuing the airborne vehicle identification model to each unmanned aerial vehicle, so that the unmanned aerial vehicles identify the target vehicles of the vehicles in the set monitoring area according to the airborne vehicle identification model, and feeding back the preliminary identification result to the unmanned aerial vehicle control center.
In this way, after the pre-trained airborne vehicle identification model is issued to the unmanned aerial vehicles, each unmanned aerial vehicle can perform vehicle identification on the vehicle pictures monitored in real time according to the airborne vehicle identification model, and feed the monitoring video pictures suspected of containing the target vehicle back to the unmanned aerial vehicle monitoring center as the preliminary identification result.
In this embodiment, in order to make the airborne vehicle identification model and the monitoring end vehicle identification model obtained through training have better identification performance and operating effect, the training sample library is creatively obtained in the following manner, exemplarily described as follows.
(111) Obtaining vehicle monitoring pictures in a set scene through a plurality of unmanned aerial vehicles to obtain a plurality of vehicle monitoring pictures. In this embodiment, vehicle monitoring pictures that contain vehicles and meet the sample training conditions can be extracted as training samples from the monitoring pictures fed back by the unmanned aerial vehicles during routine monitoring work; training the model with samples extracted in this way makes the model better match the actual application scenario.
(112) And storing each vehicle monitoring picture as a vehicle monitoring picture sample into a preset sample database.
(113) And extracting the vehicle characteristic information of each vehicle monitoring picture sample in the sample database under a plurality of target characteristic description dimensions to obtain a characteristic extraction result corresponding to each vehicle monitoring picture sample. Wherein the feature extraction result may include at least one piece of vehicle feature information of the vehicle in the vehicle monitoring screen sample.
(114) According to the feature extraction result corresponding to each vehicle monitoring picture sample, performing sample filtering on the vehicle monitoring picture samples in the sample database to obtain a filtered sample database. In this embodiment, in order to prevent vehicle monitoring picture samples with too many missing vehicle features from affecting the model training effect, sample filtering needs to be performed on the vehicle monitoring picture samples in the sample database according to the feature extraction results, so as to filter out the vehicle monitoring picture samples that do not meet the conditions.
Alternatively, in (114), the performing of sample filtering on the vehicle monitoring picture samples in the sample database according to the feature extraction result corresponding to each vehicle monitoring picture sample to obtain the filtered sample database may include:
determining whether feature missing exists in the feature extraction result corresponding to each vehicle monitoring picture sample;
if feature missing exists, deleting the vehicle monitoring picture sample from the sample database;
wherein feature missing includes the case where the feature extraction result corresponding to the vehicle monitoring picture sample lacks vehicle feature information under a predetermined target feature description dimension, or lacks vehicle feature information under a preset number of target feature description dimensions. For example, the predetermined target feature description dimension may be the body color dimension, or the absence of vehicle feature information under two or more target feature description dimensions may be regarded as feature missing, as in the sketch below.
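The sketch below illustrates this filtering rule. The dict-based sample layout, the body_color dimension name, and the threshold of two missing dimensions are illustrative assumptions only.

PREDETERMINED_DIM = "body_color"  # assumed predetermined dimension
MAX_MISSING = 2                   # two or more missing dimensions => feature missing

def has_feature_missing(extraction_result, all_dims):
    # extraction_result: dict mapping dimension name -> extracted feature (or None).
    missing = [d for d in all_dims if extraction_result.get(d) is None]
    return PREDETERMINED_DIM in missing or len(missing) >= MAX_MISSING

def filter_samples(sample_db, all_dims):
    # Keep only the samples without feature missing (step (114)).
    return [s for s in sample_db if not has_feature_missing(s["features"], all_dims)]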
(115) Obtaining a calibration vehicle description index corresponding to each vehicle monitoring picture sample according to the feature extraction result corresponding to each vehicle monitoring picture sample in the filtered sample database, and storing each calibration vehicle description index in association with the corresponding vehicle monitoring picture sample in the sample database, to obtain the training sample library.
Further, since the on-board vehicle identification model and/or the monitoring-end vehicle identification model obtained after model training needs to identify the target vehicle from vehicle description information lacking at least part of the features, and in order for the trained models to have better identifiability and distinctiveness, the obtained training sample library may further include training samples with missing features. On this basis, the step of obtaining the training sample library may further include the following innovative steps.
Firstly, copying a part of vehicle monitoring picture samples in the training sample library as samples to be processed.
Secondly, performing feature fuzzy processing on the index feature corresponding to at least one target feature description dimension in the vehicle description index corresponding to each sample to be processed, wherein the feature fuzzy processing includes replacing the corresponding index feature with a set fuzzy feature, or deleting the corresponding index feature.
Then, adding the samples to be processed after the feature fuzzy processing into the training sample library as extended training samples, and performing sample disordering processing on the training sample library to which the extended training samples have been added, to obtain an optimized training sample library.
Alternatively, taking the number of the target feature description dimensions as N as an example, the feature fuzzy processing on the index feature corresponding to at least one target feature description dimension in the vehicle description index corresponding to the sample to be processed may be implemented in the following manner:
firstly, adding the sample to be processed into a pre-established sample sequence to obtain a sample sequence to be processed;
secondly, determining the number of samples required for feature fuzzy processing of each target feature description dimension;
thirdly, according to the number of samples needing to be subjected to feature fuzzy processing in the ith target feature description dimension, obtaining a corresponding number of samples to be processed from the sample sequence to be processed, and performing feature fuzzy processing on the index feature corresponding to the ith target feature description dimension corresponding to the samples to be processed to obtain an ith extended sample sequence; wherein i is a natural number of not less than 1 and not more than N.
On the basis of the above, adding the to-be-processed sample after the feature fuzzy processing as an extended training sample into the training sample library, and performing sample disordering processing on the training sample library added with the extended training sample to obtain an optimized training sample library, includes:
and sequentially adding the obtained ith expansion sample sequence into the training sample library, and performing sample disordering treatment on the training sample library after the Nth expansion sample sequence is added into the training sample library.
Alternatively, in another possible implementation, the feature blurring process may be performed on the index features of at least two target feature description dimensions at the same time. Based on this, the performing the feature fuzzy processing on the index feature corresponding to at least one target feature description dimension in the vehicle description index corresponding to the to-be-processed sample to obtain the training sample library may further include the following:
firstly, determining at least one preset dimension combination which is obtained by combining at least two target feature description dimensions and corresponds to the feature fuzzy processing;
then, for each dimension combination, at least one corresponding sample to be processed is obtained from the samples to be processed, and multi-feature fuzzy processing is performed on the index features, corresponding to the target feature description dimensions included in the dimension combination, in the obtained samples to be processed.
For example, the index features under the at least two target feature description dimensions included in the dimension combination may each be subjected to blurring processing (e.g., replaced with a blank feature), or the corresponding index features may each be deleted.
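A corresponding sketch for this multi-dimension variant follows; the dimension combinations listed are invented for illustration, and the helpers mirror those of the previous sketch.

import copy
import random

BLUR_FEATURE = None  # assumed placeholder for the set fuzzy feature
DIM_COMBINATIONS = [("body_color", "vehicle_type"), ("plate_region", "body_color")]

def blur_combinations(to_process, per_combo=10):
    extended = []
    for combo in DIM_COMBINATIONS:
        for sample in random.sample(to_process, min(per_combo, len(to_process))):
            blurred = copy.deepcopy(sample)
            for dim in combo:
                blurred["index"][dim] = BLUR_FEATURE  # or delete the index feature
            extended.append(blurred)
    return extended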
Further, in step S200, in order to prevent the unmanned aerial vehicle control center from wasting computing resources and reducing identification efficiency by running accurate AI-model identification of the target vehicle on useless vehicle monitoring pictures, in the embodiment of the present invention, the vehicle monitoring pictures generated in the monitoring areas of unmanned aerial vehicles through which the target vehicle is currently almost impossible to pass can be filtered out, according to the time and position information respectively corresponding to the first vehicle description information and to the preliminary identification results fed back by the unmanned aerial vehicles.
Based on the above inventive concept, in this embodiment, the first vehicle description information further includes first time-space domain information for the target vehicle, and the preliminary identification result further includes second time-space domain information corresponding to the surveillance video pictures fed back by each unmanned aerial vehicle. The first time-space domain information may include first time information (e.g., the generation time of the first vehicle description information) and first position information (e.g., the current position of the target vehicle) corresponding to the target vehicle or the first vehicle description information. The second time-space domain information may include second time information (the generation time of the surveillance video picture) and corresponding second position information (the generation position of the surveillance video picture) for the surveillance video picture fed back by each unmanned aerial vehicle.
Based on this, in step S200, the receiving of the preliminary identification result fed back by each drone in the preliminary identification process, and obtaining the second vehicle description information of the target vehicle according to the preliminary identification result may include the following steps.
Step S2011, performing confidence decision on each unmanned aerial vehicle according to the first time-space domain information and the second time-space domain information to obtain confidence parameters between each unmanned aerial vehicle and the target vehicle.
For example, first time information and first position information corresponding to the target vehicle may be first obtained according to the first time-space domain information;
then, second time information and second position information corresponding to the monitoring video pictures fed back by the unmanned aerial vehicles are obtained according to the second time-space domain information;
secondly, calling an electronic map corresponding to the set monitoring area, and determining a feasible path and corresponding prediction time of the target vehicle from a first position corresponding to the first position information to a second position corresponding to the second position information according to the electronic map;
and finally, determining confidence coefficient parameters corresponding to the target vehicle and each unmanned aerial vehicle according to the predicted time and the time interval between the first time information and the second time information.
As an alternative, the corresponding confidence parameter can be determined as a function of the difference between the predicted time and the actual time interval. For example, if the maximum tolerated deviation is predetermined to be 10 minutes and the difference is 2 minutes, the confidence parameter is (10 - 2)/10 = 0.8; if the difference is 1 minute, it is (10 - 1)/10 = 0.9. The preset confidence threshold may be, for example, 0.8, and may be determined according to the actual situation, which is not limited herein.
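The arithmetic above can be captured in a few lines. The 10-minute maximum deviation and the linear form are taken directly from the example; all names and the use of minutes as the unit are illustrative assumptions.

MAX_DEVIATION_MIN = 10.0    # assumed maximum tolerated deviation, in minutes
CONFIDENCE_THRESHOLD = 0.8  # assumed preset confidence threshold

def confidence(predicted_travel_min, first_time_min, second_time_min):
    # Confidence from the gap between the map-predicted travel time and the
    # actually observed interval, matching the (10 - 2)/10 = 0.8 example.
    interval = abs(second_time_min - first_time_min)
    deviation = abs(interval - predicted_travel_min)
    return max(0.0, (MAX_DEVIATION_MIN - deviation) / MAX_DEVIATION_MIN)

# e.g. predicted 15 min of travel, frames 17 min apart -> deviation 2 min -> 0.8
assert abs(confidence(15.0, 0.0, 17.0) - 0.8) < 1e-9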
Step S2012, filtering the monitoring video frames fed back by the unmanned aerial vehicle whose corresponding confidence parameter is less than the preset confidence threshold, to obtain a monitoring frame sequence to be decided.
And step S2013, obtaining second vehicle description information of the target vehicle according to each monitoring video picture in the monitoring picture sequence to be decided.
For example, in step S2013, the second vehicle description information may be obtained by:
calling a monitoring end vehicle identification model for each monitoring video picture in the monitoring picture sequence to be decided, and extracting vehicle feature information under each feature description dimension from the monitoring video picture through convolution network layers, included in the monitoring end vehicle identification model, respectively corresponding to a plurality of feature description dimensions;
performing feature conversion on the vehicle feature information under each feature description dimension through a feature conversion layer included by the monitoring end vehicle identification model to obtain an index feature corresponding to the vehicle feature information under each feature description dimension;
calculating the matching degree between the monitoring video picture and the first vehicle description information according to the index features corresponding to the vehicle feature information under each feature description dimension and the first vehicle description index corresponding to the first vehicle description information, through a result output layer included in the monitoring end vehicle identification model, and determining the monitoring video picture as a target picture if the matching degree reaches a second preset matching degree, wherein the second preset matching degree is greater than the first preset matching degree;
acquiring a preset global feature description dimension sequence, wherein the global feature description dimension sequence comprises a plurality of target feature description dimensions for the target vehicle;
determining a missing feature description dimension for the target vehicle from the sequence of global feature description dimensions and a first vehicle description index of the first vehicle description information;
and acquiring the vehicle missing feature information under the missing feature description dimension from the target picture, and optimizing the first vehicle description information according to the vehicle missing feature information to obtain the second vehicle description information; a sketch of this completion step is given below.
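The completion of the missing dimensions can be sketched as follows; the dict-based description index and the helper name are assumptions made for illustration.

def build_second_description(first_index, global_dims, target_picture_features):
    # first_index: dimension -> index feature (None where the feature is missing).
    # target_picture_features: features extracted from the target picture.
    missing_dims = [d for d in global_dims if first_index.get(d) is None]
    second_index = dict(first_index)
    for dim in missing_dims:
        feat = target_picture_features.get(dim)
        if feat is not None:
            second_index[dim] = feat  # vehicle missing-feature information
    return second_index, missing_dims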
Fig. 3 is a schematic structural diagram of the drone control center 100 according to an embodiment of the present invention. In this embodiment, the drone control center 100 may include a drone smart vehicle identification system 110, a machine-readable storage medium 120, and a processor 130.
In this embodiment, the machine-readable storage medium 120 and the processor 130 may both be located in the drone control center 100 while separate from each other. Alternatively, the machine-readable storage medium 120 may be separate from the drone control center 100 and accessed by the processor 130. The drone smart vehicle identification system 110 may include a plurality of software functional modules stored on the machine-readable storage medium 120. When the processor 130 executes the software functional modules in the drone smart vehicle identification system 110, the unmanned aerial vehicle intelligent vehicle identification method provided by the foregoing method embodiment is implemented.
In this embodiment, the drone control center 100 may include one or more processors 130. Processor 130 may process information and/or data related to the service request to perform one or more of the functions described in this disclosure. In some embodiments, processor 130 may include one or more processing engines (e.g., a single-core processor or a multi-core processor). For example only, the processor 130 may include one or more hardware processors, such as one of a central processing unit CPU, an application specific integrated circuit ASIC, an application specific instruction set processor ASIP, a graphics processor GPU, a physical arithmetic processing unit PPU, a digital signal processor DSP, a field programmable gate array FPGA, a programmable logic device PLD, a controller, a microcontroller unit, a reduced instruction set computer RISC, a microprocessor, or the like, or any combination thereof.
Machine-readable storage medium 120 may store data and/or instructions. In some embodiments, the machine-readable storage medium 120 may store data or materials obtained from the drone 200. In some embodiments, the machine-readable storage medium 120 may store data and/or instructions that the drone control center 100 executes or uses to implement the exemplary methods described in this application. In some embodiments, the machine-readable storage medium 120 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include magnetic disks, optical disks, solid state disks, and the like. Exemplary removable memory may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read-write memory may include random access memory (RAM). Exemplary RAM may include dynamic RAM, double data rate synchronous dynamic RAM, static RAM, thyristor RAM, zero-capacitance RAM, and the like. Exemplary ROM may include mask ROM, programmable ROM, erasable programmable ROM, electrically erasable programmable ROM, compact disc ROM, digital versatile disc ROM, and the like.
Fig. 4 is a functional block diagram of the unmanned aerial vehicle intelligent vehicle identification system 110 shown in fig. 3. In this embodiment, the unmanned aerial vehicle intelligent vehicle identification system 110 may include a first identification module 1101, a second identification module 1102, a target determination module 1103, and a tracking monitoring module 1104.
The first identification module 1101 is configured to receive first vehicle description information of a target vehicle and send a vehicle identification instruction to each unmanned aerial vehicle in a set monitoring area according to the first vehicle description information, so that each unmanned aerial vehicle performs preliminary identification of the target vehicle in its corresponding area, where the first vehicle description information includes a first vehicle description index.
The second identification module 1102 is configured to receive a preliminary identification result fed back by each unmanned aerial vehicle in a preliminary identification process, and obtain second vehicle description information of the target vehicle according to the preliminary identification result, where the preliminary identification result includes at least one monitoring video frame whose matching degree with the first vehicle description information reaches a first preset matching degree;
the target determining module 1103 is configured to determine, according to the second vehicle description information, at least one unmanned aerial vehicle in the set monitoring area as a target unmanned aerial vehicle; and
and the tracking and monitoring module 1104 is configured to send a tracking and monitoring instruction to the target unmanned aerial vehicle according to the second vehicle description information, so that the target unmanned aerial vehicle performs real-time tracking, monitoring and positioning on the target vehicle.
In this embodiment, the second identifying module 1102 is specifically configured to:
acquiring, from the preliminary identification result fed back by each unmanned aerial vehicle, the monitoring video pictures whose matching degree with the first vehicle description information reaches the first preset matching degree;
calling a monitoring end vehicle identification model for each monitoring video picture, and extracting vehicle feature information under each feature description dimension from the monitoring video picture through convolution network layers, included in the monitoring end vehicle identification model, respectively corresponding to a plurality of feature description dimensions;
performing feature conversion on the vehicle feature information under each feature description dimension through a feature conversion layer included by the monitoring end vehicle identification model to obtain an index feature corresponding to the vehicle feature information under each feature description dimension;
calculating the matching degree between the monitoring video picture and the first vehicle description information according to the index features corresponding to the vehicle feature information under each feature description dimension and the first vehicle description index corresponding to the first vehicle description information, through a result output layer included in the monitoring end vehicle identification model, and determining the monitoring video picture as a target picture if the matching degree reaches a second preset matching degree, wherein the second preset matching degree is greater than the first preset matching degree;
acquiring a preset global feature description dimension sequence, wherein the global feature description dimension sequence comprises a plurality of target feature description dimensions for the target vehicle;
determining a missing feature description dimension for the target vehicle from the sequence of global feature description dimensions and a first vehicle description index of the first vehicle description information;
and acquiring the vehicle missing feature information under the missing feature description dimension from the target picture, and optimizing the first vehicle description information according to the vehicle missing feature information to obtain the second vehicle description information.
The target determining module 1103 is specifically configured to: determine, according to the target monitoring picture corresponding to the vehicle missing feature information in the second vehicle description information, the unmanned aerial vehicle that feeds back the target monitoring picture as the target unmanned aerial vehicle.
It should be noted that the first identification module 1101, the second identification module 1102, the target determination module 1103, and the tracking monitoring module 1104 may respectively perform steps S100 to S400 of the foregoing method embodiment; for a detailed description of these modules, reference may be made to the specific content of the corresponding steps, which is not repeated herein.
In summary, in the embodiment of the present invention, the first vehicle description information of the target vehicle, which lacks some features, is sent to each unmanned aerial vehicle; each unmanned aerial vehicle performs preliminary identification of the target vehicle based on the first vehicle description information and feeds the preliminary identification result back to the unmanned aerial vehicle control center; the unmanned aerial vehicle control center then performs secondary identification according to the preliminary identification result, so as to accurately identify the target vehicle and obtain the second vehicle description information of the target vehicle (with no missing features). In this way, the corresponding target unmanned aerial vehicle can be determined according to the second vehicle description information to perform real-time tracking, identification and positioning of the target vehicle. In addition, in this embodiment, vehicle identification of the target vehicle may be performed in an artificial intelligence manner. In detail, an airborne vehicle identification model may be deployed on the unmanned aerial vehicle to realize the preliminary identification of the target vehicle, and a monitoring-end vehicle identification model may be deployed at the unmanned aerial vehicle control center to perform accurate secondary identification of the target vehicle on the surveillance video pictures included in the preliminary identification results fed back by the unmanned aerial vehicles, so as to determine the target vehicle that needs to be tracked and identified. The on-board vehicle identification model may be a scaled-down version of the monitoring-end vehicle identification model, so as to be suitable for running on an unmanned aerial vehicle. For example, the on-board vehicle identification model may be obtained by performing model compression training on the basis of the original network model of the monitoring-end vehicle identification model. In this way, accurate identification and tracking monitoring of the target vehicle can be further realized by means of artificial intelligence models.
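The embodiment leaves the compression technique open. One plausible reading of "model compression training" is knowledge distillation of the monitoring-end model into a smaller on-board student, sketched below in PyTorch; the network shape, layer sizes, and MSE objective are assumptions, not the patented procedure.

import torch
import torch.nn as nn

class SmallStudent(nn.Module):
    # A deliberately small backbone suitable for on-board inference (illustrative).
    def __init__(self, feat_len=64, num_dims=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_dims * feat_len)
        self.num_dims, self.feat_len = num_dims, feat_len

    def forward(self, x):
        out = self.head(self.backbone(x))
        return out.view(-1, self.num_dims, self.feat_len)

def distill_step(student, teacher, images, optimizer):
    # The teacher (monitoring-end model) is assumed to output per-dimension
    # features of the same shape as the student.
    with torch.no_grad():
        target = teacher(images)
    loss = nn.functional.mse_loss(student(images), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()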
The embodiments described above are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of different configurations. Therefore, the detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention; the protection scope of the present invention shall be subject to the protection scope of the claims. Moreover, all other embodiments obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An unmanned aerial vehicle intelligent vehicle identification method based on city management, applied to an unmanned aerial vehicle control center, the method comprising the following steps:
receiving first vehicle description information of a target vehicle, and sending a vehicle identification instruction to each unmanned aerial vehicle in a set monitoring area according to the first vehicle description information to enable each unmanned aerial vehicle to perform primary identification on the target vehicle in a corresponding area, wherein the first vehicle description information comprises a first vehicle description index, and the first vehicle description index lacks an index feature corresponding to at least one target feature description dimension in a plurality of preset target feature description dimensions of the target vehicle;
receiving a primary identification result fed back by each unmanned aerial vehicle in a primary identification process, and obtaining second vehicle description information of the target vehicle according to the primary identification result, wherein the primary identification result comprises at least one monitoring video picture of which the matching degree with the first vehicle description information reaches a first preset matching degree;
determining at least one unmanned aerial vehicle in the set monitoring area as a target unmanned aerial vehicle according to the second vehicle description information;
and sending a tracking and monitoring instruction to the target unmanned aerial vehicle according to the second vehicle description information, so that the target unmanned aerial vehicle carries out real-time tracking, monitoring and positioning on the target vehicle.
2. The unmanned aerial vehicle intelligent vehicle identification method based on city management as claimed in claim 1, wherein the obtaining of the second vehicle description information of the target vehicle according to the preliminary identification result comprises:
acquiring, from the preliminary identification result fed back by each unmanned aerial vehicle, the monitoring video pictures whose matching degree with the first vehicle description information reaches the first preset matching degree;
calling a monitoring end vehicle identification model for each monitoring video picture, and extracting vehicle feature information under each feature description dimension from the monitoring video picture through convolution network layers, included in the monitoring end vehicle identification model, respectively corresponding to a plurality of feature description dimensions;
performing feature conversion on the vehicle feature information under each feature description dimension through a feature conversion layer included by the monitoring end vehicle identification model to obtain an index feature corresponding to the vehicle feature information under each feature description dimension;
calculating the matching degree between the monitoring video picture and the first vehicle description information according to the index features corresponding to the vehicle feature information under each feature description dimension and the first vehicle description index corresponding to the first vehicle description information, through a result output layer included in the monitoring end vehicle identification model, and determining the monitoring video picture as a target picture if the matching degree reaches a second preset matching degree, wherein the second preset matching degree is greater than the first preset matching degree;
acquiring a preset global feature description dimension sequence, wherein the global feature description dimension sequence comprises a plurality of target feature description dimensions for the target vehicle;
determining a missing feature description dimension for the target vehicle from the sequence of global feature description dimensions and a first vehicle description index of the first vehicle description information;
acquiring vehicle missing feature information under the missing feature description dimension from the target picture, and optimizing the first vehicle description information according to the vehicle missing feature information to obtain second vehicle description information;
the determining, according to the second vehicle description information, that at least one unmanned aerial vehicle in the set monitoring area is a target unmanned aerial vehicle includes:
and determining at least one unmanned aerial vehicle feeding back the at least one target monitoring picture as the target unmanned aerial vehicle according to the at least one target monitoring picture corresponding to the vehicle missing characteristic information in the second vehicle description information.
3. The unmanned aerial vehicle intelligent vehicle identification method based on city management as claimed in claim 2, wherein the method further comprises a step of performing model training on the monitoring-end vehicle identification model, the step comprising:
acquiring a training sample library, wherein the training sample library comprises a plurality of sample vehicle monitoring pictures carrying calibrated vehicle description indexes;
acquiring a predetermined neural network model, wherein the neural network model comprises a convolution network layer, a feature conversion layer and a result output layer;
for each sample vehicle monitoring picture, obtaining vehicle feature information of the sample vehicle monitoring picture under a plurality of target feature description dimensions through the convolution network layer;
performing feature conversion on the vehicle feature information under each target feature description dimension through the feature conversion layer to obtain an index feature corresponding to the vehicle feature information under each target feature description dimension;
obtaining a training vehicle description index according to the index features corresponding to the vehicle feature information under the target feature description dimension through the result output layer;
calculating to obtain a first loss function value according to the training vehicle description index and the calibration vehicle description index;
iteratively optimizing the model parameters of the neural network model according to the first loss function value until the first loss function value meets a first convergence condition, and obtaining a trained neural network model as the monitoring end vehicle recognition model; the first loss function value is obtained by calculating a first matching degree of each index feature in each training vehicle description index and each index feature corresponding to the calibration vehicle description index, and the first convergence condition includes that the first matching degree represented by the first loss function value reaches a first preset matching degree threshold value.
4. The unmanned aerial vehicle intelligent vehicle identification method based on city management as claimed in claim 2, wherein the method further comprises:
acquiring a training sample library, wherein the training sample library comprises a plurality of sample vehicle monitoring pictures carrying calibrated vehicle description indexes;
obtaining a predetermined neural network model, and carrying out model compression processing on the neural network model to obtain a compressed neural network model;
for each sample vehicle monitoring picture, obtaining vehicle feature information of the sample vehicle monitoring picture under a plurality of target feature description dimensions through the neural network model, and calculating to obtain a second loss function value according to the vehicle feature information under the target feature description dimensions and index features included in the calibrated vehicle description index;
iteratively optimizing the model parameters of the compressed neural network model according to the second loss function value until the second loss function value meets a second convergence condition, and obtaining a trained neural network model as an airborne vehicle identification model; the second loss function value is obtained by calculating a second matching degree of the vehicle feature information under each target feature description dimension and each corresponding index feature in the calibrated vehicle description index, the second convergence condition includes that the second matching degree represented by the second loss function value reaches a second preset matching degree threshold value, and the second preset matching degree threshold value is smaller than the first preset matching degree threshold value;
and issuing the airborne vehicle identification model to each unmanned aerial vehicle, so that the unmanned aerial vehicles identify the target vehicles of the vehicles in the set monitoring area according to the airborne vehicle identification model, and feeding back the preliminary identification result to the unmanned aerial vehicle control center.
5. The unmanned aerial vehicle intelligent vehicle identification method based on city management according to claim 3 or 4, wherein the obtaining of the training sample library comprises:
obtaining vehicle monitoring pictures in a set scene through a plurality of unmanned aerial vehicles to obtain a plurality of vehicle monitoring pictures;
storing each vehicle monitoring picture as a vehicle monitoring picture sample into a preset sample database;
extracting vehicle feature information of each vehicle monitoring picture sample in the sample database under a plurality of target feature description dimensions to obtain a feature extraction result corresponding to each vehicle monitoring picture sample;
according to the feature extraction result corresponding to each vehicle monitoring picture sample, sample filtering is carried out on the vehicle monitoring picture samples in the sample database to obtain a filtered sample database;
obtaining a calibration vehicle description index corresponding to each vehicle monitoring picture sample according to a feature extraction result corresponding to each vehicle monitoring picture sample in a filtered sample database, and performing associated storage on the vehicle description index and the vehicle monitoring picture sample in the sample database to obtain a training sample database;
wherein the performing of sample filtering on the vehicle monitoring picture samples in the sample database according to the feature extraction result corresponding to each vehicle monitoring picture sample to obtain the filtered sample database comprises:
determining whether feature missing exists in the feature extraction result corresponding to each vehicle monitoring picture sample;
if feature missing exists, deleting the vehicle monitoring picture sample from the sample database;
wherein the feature missing comprises that the feature extraction result corresponding to the vehicle monitoring picture sample lacks vehicle feature information under a predetermined target feature description dimension, or lacks vehicle feature information under a preset number of target feature description dimensions.
6. The unmanned aerial vehicle intelligent vehicle identification method based on city management as claimed in claim 5, wherein the obtaining of the training sample library further comprises:
copying a part of vehicle monitoring picture samples in the training sample library as samples to be processed;
performing feature fuzzy processing on the index feature corresponding to at least one target feature description dimension in the vehicle description index corresponding to the sample to be processed, wherein the feature fuzzy processing comprises replacing the corresponding index feature with a set fuzzy feature or deleting the corresponding index feature;
adding the to-be-processed sample after the characteristic fuzzy processing as an extended training sample into the training sample library, and performing sample disordering processing on the training sample library added with the extended training sample to obtain an optimized training sample library;
the method for processing the vehicle description indexes comprises the following steps of:
adding the sample to be processed into a pre-established sample sequence to obtain a sample sequence to be processed;
determining the number of samples required for feature fuzzy processing of each target feature description dimension;
acquiring a corresponding number of samples to be processed from the sample sequence to be processed according to the number of samples required to be subjected to feature fuzzy processing by the ith target feature description dimension, and performing feature fuzzy processing on the index feature corresponding to the ith target feature description dimension corresponding to the samples to be processed to obtain an ith extended sample sequence; wherein i is a natural number which is more than or equal to 1 and less than or equal to N;
adding the to-be-processed sample after the characteristic fuzzy processing into the training sample library as an extended training sample, and performing sample disordering processing on the training sample library added with the extended training sample to obtain an optimized training sample library, including:
and sequentially adding each obtained ith extended sample sequence into the training sample library, and performing sample disordering processing on the training sample library after the Nth extended sample sequence has been added into the training sample library.
7. The unmanned aerial vehicle intelligent vehicle identification method based on city management according to claim 6, wherein the performing of the feature fuzzy processing on the index feature corresponding to at least one target feature description dimension in the vehicle description index corresponding to the sample to be processed further comprises:
determining at least one preset dimension combination which is obtained by combining at least two target feature description dimensions and corresponds to the feature fuzzy processing;
and for each dimension combination, acquiring at least one corresponding sample to be processed from the samples to be processed, and performing multi-feature fuzzy processing on the index features, corresponding to the target feature description dimensions included in the dimension combination, in the acquired samples to be processed.
8. The unmanned aerial vehicle intelligent vehicle identification method based on city management as claimed in claim 1, wherein the first vehicle description information further includes first time-space domain information for the target vehicle, and the preliminary identification result further includes second time-space domain information corresponding to a surveillance video picture fed back by each unmanned aerial vehicle; the receiving of the preliminary identification result fed back by each unmanned aerial vehicle in the preliminary identification process and the obtaining of the second vehicle description information of the target vehicle according to the preliminary identification result include:
performing confidence decision on each unmanned aerial vehicle according to the first time-space domain information and the second time-space domain information to obtain confidence parameters between each unmanned aerial vehicle and the target vehicle;
filtering the monitoring video pictures fed back by the unmanned aerial vehicle with the corresponding confidence coefficient parameters smaller than a preset confidence coefficient threshold value to obtain a monitoring picture sequence to be decided;
obtaining second vehicle description information of the target vehicle according to each monitoring video picture in the monitoring picture sequence to be decided;
wherein the performing a confidence decision for each of the unmanned aerial vehicles according to the first time-space domain information and the second time-space domain information comprises:
acquiring first time information and first position information corresponding to the target vehicle according to the first time-space domain information;
acquiring second time information and second position information corresponding to the monitoring video pictures fed back by the unmanned aerial vehicles according to the second time-space domain information;
calling an electronic map corresponding to the set monitoring area, and determining a feasible path and corresponding prediction time for the target vehicle to reach a second position corresponding to the second position information from a first position corresponding to the first position information according to the electronic map;
and determining the confidence parameters corresponding to the target vehicle and each unmanned aerial vehicle according to the predicted time and the time interval between the first time information and the second time information.
9. An unmanned aerial vehicle intelligent vehicle identification system based on city management, applied to an unmanned aerial vehicle control center, wherein the system comprises:
the first identification module is used for receiving first vehicle description information of a target vehicle and sending a vehicle identification instruction to each unmanned aerial vehicle in a set monitoring area according to the first vehicle description information, so that each unmanned aerial vehicle performs preliminary identification of the target vehicle in a corresponding area, wherein the first vehicle description information comprises a first vehicle description index;
the second identification module is used for receiving a primary identification result fed back by each unmanned aerial vehicle in a primary identification process and obtaining second vehicle description information of the target vehicle according to the primary identification result, wherein the primary identification result comprises at least one monitoring video picture of which the matching degree with the first vehicle description information reaches a first preset matching degree;
the target determining module is used for determining at least one unmanned aerial vehicle in the set monitoring area as a target unmanned aerial vehicle according to the second vehicle description information; and
and the tracking and monitoring module is used for sending a tracking and monitoring instruction to the target unmanned aerial vehicle according to the second vehicle description information so that the target unmanned aerial vehicle can perform real-time tracking, monitoring and positioning on the target vehicle.
10. The unmanned aerial vehicle intelligent vehicle identification system based on city management of claim 9, wherein the second identification module is specifically configured to:
acquiring, from the preliminary identification result fed back by each unmanned aerial vehicle, the monitoring video pictures whose matching degree with the first vehicle description information reaches the first preset matching degree;
calling a monitoring end vehicle identification model for each monitoring video picture, and extracting vehicle feature information under each feature description dimension from the monitoring video picture through convolution network layers, included in the monitoring end vehicle identification model, respectively corresponding to a plurality of feature description dimensions;
performing feature conversion on the vehicle feature information under each feature description dimension through a feature conversion layer included by the monitoring end vehicle identification model to obtain an index feature corresponding to the vehicle feature information under each feature description dimension;
calculating the matching degree between the monitoring video picture and the first vehicle description information according to the index features corresponding to the vehicle feature information under each feature description dimension and the first vehicle description index corresponding to the first vehicle description information, through a result output layer included in the monitoring end vehicle identification model, and determining the monitoring video picture as a target picture if the matching degree reaches a second preset matching degree, wherein the second preset matching degree is greater than the first preset matching degree;
acquiring a preset global feature description dimension sequence, wherein the global feature description dimension sequence comprises a plurality of target feature description dimensions for the target vehicle;
determining a missing feature description dimension for the target vehicle from the sequence of global feature description dimensions and a first vehicle description index of the first vehicle description information;
acquiring vehicle missing feature information under the missing feature description dimension from the target picture, and optimizing the first vehicle description information according to the vehicle missing feature information to obtain second vehicle description information;
the target determination module is specifically configured to: determine, according to the target monitoring picture corresponding to the vehicle missing feature information in the second vehicle description information, the unmanned aerial vehicle that feeds back the target monitoring picture as the target unmanned aerial vehicle.
CN202111051756.5A 2021-09-08 2021-09-08 Unmanned aerial vehicle intelligent vehicle identification method and system based on city management Active CN113516106B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111051756.5A CN113516106B (en) 2021-09-08 2021-09-08 Unmanned aerial vehicle intelligent vehicle identification method and system based on city management


Publications (2)

Publication Number Publication Date
CN113516106A true CN113516106A (en) 2021-10-19
CN113516106B CN113516106B (en) 2021-12-10

Family

ID=78063030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111051756.5A Active CN113516106B (en) 2021-09-08 2021-09-08 Unmanned aerial vehicle intelligent vehicle identification method and system based on city management

Country Status (1)

Country Link
CN (1) CN113516106B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140347482A1 (en) * 2009-02-20 2014-11-27 Appareo Systems, Llc Optical image monitoring system and method for unmanned aerial vehicles
CN109445465A (en) * 2018-10-17 2019-03-08 深圳市道通智能航空技术有限公司 Method for tracing, system, unmanned plane and terminal based on unmanned plane
CN110874578A (en) * 2019-11-15 2020-03-10 北京航空航天大学青岛研究院 Unmanned aerial vehicle visual angle vehicle identification and tracking method based on reinforcement learning
CN112863186A (en) * 2021-01-18 2021-05-28 南京信息工程大学 Vehicle-mounted unmanned aerial vehicle-based escaping vehicle rapid identification and tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PENG Bo et al., "Automatic vehicle detection in UAV video based on improved Faster R-CNN", Journal of Southeast University (Natural Science Edition) *

Also Published As

Publication number Publication date
CN113516106B (en) 2021-12-10

Similar Documents

Publication Publication Date Title
US10636169B2 (en) Synthesizing training data for broad area geospatial object detection
US20230084869A1 (en) System for simplified generation of systems for broad area geospatial object detection
CN108764164B (en) Face detection method and system based on deformable convolution network
CN108764456B (en) Airborne target identification model construction platform, airborne target identification method and equipment
CN111723728A (en) Pedestrian searching method, system and device based on bidirectional interactive network
CN113869598B (en) Intelligent remote management method, system and cloud platform for unmanned aerial vehicle based on smart city
CN113298042B (en) Remote sensing image data processing method and device, storage medium and computer equipment
CN113516106B (en) Unmanned aerial vehicle intelligent vehicle identification method and system based on city management
CN113901037A (en) Data management method, device and storage medium
CN112115996B (en) Image data processing method, device, equipment and storage medium
CN111897864B (en) Expert database data extraction method and system based on Internet AI outbound
CN112784008B (en) Case similarity determining method and device, storage medium and terminal
CN113744280A (en) Image processing method, apparatus, device and medium
CN117475253A (en) Model training method and device, electronic equipment and storage medium
CN114550107B (en) Bridge linkage intelligent inspection method and system based on unmanned aerial vehicle cluster and cloud platform
Neupane et al. A literature review of computer vision techniques in wildlife monitoring
CN112182413B (en) Intelligent recommendation method and server based on big teaching data
Cruz et al. Detection and segmentation of Ecuadorian deforested tropical areas based on color mean and deviation
CN113412493A (en) Inference engine-based computing resource allocation method and device and computer equipment
CN113537602B (en) Vehicle behavior prediction method, device, equipment and medium
CN116737814B (en) Rapid integration method and system based on multi-source heterogeneous big data fusion
CN113837863B (en) Business prediction model creation method and device and computer readable storage medium
CN114660605B (en) SAR imaging processing method and device for machine learning and readable storage medium
CN114625624B (en) Data processing method and system combined with artificial intelligence and cloud platform
CN117011616B (en) Image content auditing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant