CN113688805A - Unmanned aerial vehicle-based unlicensed muck vehicle identification method and system

Unmanned aerial vehicle-based unlicensed muck vehicle identification method and system

Info

Publication number
CN113688805A
CN113688805A
Authority
CN
China
Prior art keywords
vehicle
model
target
license plate
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111242342.0A
Other languages
Chinese (zh)
Other versions
CN113688805B (en)
Inventor
杨翰翔
杨德润
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lianhe Intelligent Technology Co ltd
Original Assignee
Shenzhen Lianhe Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lianhe Intelligent Technology Co ltd filed Critical Shenzhen Lianhe Intelligent Technology Co ltd
Priority to CN202111242342.0A priority Critical patent/CN113688805B/en
Publication of CN113688805A publication Critical patent/CN113688805A/en
Application granted granted Critical
Publication of CN113688805B publication Critical patent/CN113688805B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/14 Relay systems
    • H04B7/15 Active relay systems
    • H04B7/185 Space-based or airborne stations; Stations for satellite systems
    • H04B7/18502 Airborne stations
    • H04B7/18506 Communications with or from aircraft, i.e. aeronautical mobile service

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides an unmanned aerial vehicle-based method and system for identifying unlicensed muck vehicles, and relates to the technical field of road monitoring. The method constructs a three-dimensional model of a target vehicle from vehicle images shot at multiple angles and identifies the type of the target vehicle based on that three-dimensional model, which ensures higher accuracy of type identification than identifying the target vehicle in a two-dimensional image. In addition, when the target vehicle is a muck vehicle, license plate detection is performed on the vehicle images in multiple position directions, which avoids missing a license plate. If no license plate is detected, the vehicle image and the vehicle location are sent to a communication terminal, so that a patrol officer holding the communication terminal can handle the scene in time, avoiding serious traffic accidents caused by the unlicensed muck vehicle and ensuring the safety of other traffic participants.

Description

Unmanned aerial vehicle-based unlicensed muck vehicle identification method and system
Technical Field
The application relates to the technical field of road monitoring, and in particular to a method and a system for identifying an unlicensed muck vehicle based on an unmanned aerial vehicle.
Background
A muck truck, also called a soil-hauling truck or slag transport truck, is a truck used to transport building materials such as sand and stone. Because of its large volume and high cab, a muck truck has many blind spots, and accidents are also easily caused by the inner wheel difference (off-tracking) when it turns.
To evade electronic road monitoring, muck truck drivers often drive without license plates or conceal their license plates, which poses a great safety hazard to other traffic participants (such as cars and pedestrians).
Disclosure of Invention
In order to overcome at least the above defects in the prior art, the application aims to provide a method and a system for identifying an unlicensed muck vehicle based on an unmanned aerial vehicle. First, vehicle images of a target vehicle shot by the unmanned aerial vehicle in multiple position directions are acquired; then, a three-dimensional model of the target vehicle is constructed based on the vehicle images in the multiple position directions; next, the three-dimensional model of the target vehicle is input into a trained vehicle type recognition model for recognition; then, if the vehicle type recognition model recognizes the target vehicle as a muck vehicle, whether license plate information satisfying license plate character rules exists in the vehicle images in the multiple position directions is detected; finally, if no license plate information satisfying the license plate character rules is detected, the acquired vehicle images and the shooting location of the unmanned aerial vehicle are sent to a communication terminal. In this scheme, the three-dimensional model of the target vehicle is constructed from vehicle images shot at multiple angles, and the type of the target vehicle is identified based on that three-dimensional model, which ensures higher accuracy of type identification than identifying the target vehicle in a two-dimensional image. In addition, when the target vehicle is a muck vehicle, license plate detection is performed on the vehicle images in multiple position directions, which avoids missing a license plate. If no license plate is detected, the vehicle image and the vehicle location are sent to the communication terminal, so that a patrol officer holding the communication terminal can handle the scene in time, avoiding serious traffic accidents caused by the unlicensed muck vehicle and ensuring the safety of traffic participants.
In a first aspect, the application provides a method for identifying an unlicensed muck vehicle based on an unmanned aerial vehicle, which is applied to a control center in communication connection with the unmanned aerial vehicle and a communication terminal respectively, and the method comprises the following steps:
acquiring vehicle images of a target vehicle shot by the unmanned aerial vehicle in multiple position directions, wherein the vehicle images comprise a vehicle head image, a vehicle tail image and images on two sides of a vehicle body;
constructing a three-dimensional model of the target vehicle based on the vehicle images of the plurality of position directions;
inputting the three-dimensional model of the target vehicle into a trained vehicle type recognition model for recognition;
if the vehicle type recognition model recognizes that the type of the target vehicle is a muck vehicle, detecting whether license plate information meeting license plate character rules exists in the vehicle images in the plurality of position directions;
and if the license plate information meeting the license plate character rule is not detected, sending the acquired vehicle image and the shooting location of the unmanned aerial vehicle to a communication terminal.
In one possible implementation, the step of constructing a three-dimensional model of the target vehicle based on the vehicle images of the plurality of position directions includes:
identifying two-dimensional position coordinates of different parts of the target vehicle in the vehicle images in the plurality of position directions and shooting angles corresponding to the vehicle images in the plurality of position directions;
acquiring a three-dimensional position coordinate set corresponding to the vehicle images in the plurality of position directions, and acquiring three-dimensional position coordinates corresponding to each position point in a target vehicle according to the corresponding relation between the three-dimensional position coordinate set and the corresponding position point of the target vehicle in the vehicle images in the plurality of position directions;
obtaining a plurality of target vehicle orientation maps based on the vehicle images in the plurality of position directions, and performing feature recognition processing on the vehicle images in the plurality of position directions based on a feature recognition layer in a vehicle detection model to obtain target vehicle position feature information corresponding to each target vehicle orientation map;
identifying position coordinates corresponding to the target vehicle position feature information, and taking areas corresponding to the position coordinates in the vehicle images in the plurality of position directions as vehicle bearing areas;
and splicing the positions of the same three-dimensional position coordinates in the vehicle orientation area based on the vehicle orientation area and the corresponding shooting angle to obtain the three-dimensional model of the target vehicle.
In one possible implementation manner, the step of inputting the three-dimensional model of the target vehicle into a trained vehicle type recognition model for recognition includes:
acquiring a three-dimensional model and a vehicle sample model of a target vehicle to be subjected to vehicle type identification;
calling a three-dimensional model identifier model in a vehicle type identification model to respectively extract model parameters of the three-dimensional model of the target vehicle and the vehicle sample model, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing;
calling a vehicle type matching sub-model in the vehicle type identification model to extract characteristic parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result, comparing the characteristic parameters extracted from the two target model areas, and identifying whether the three-dimensional model of the target vehicle matches the vehicle sample model according to the comparison result, wherein the characteristic parameters comprise a head-to-body ratio, a body-length-to-body-height ratio, a dump-box-to-body ratio and a dump-box-height-to-body-height ratio;
if the three-dimensional model of the target vehicle is not matched with the vehicle sample model, identifying the three-dimensional model of the target vehicle and the model category corresponding to the vehicle sample model, and selecting a deep learning sub-model in a vehicle type identification model according to the model category to perform type identification processing on the three-dimensional model of the target vehicle and the vehicle sample model;
wherein the model structure of the deep learning submodel is more complex than the model structures of the three-dimensional model identifier model and the vehicle type matching submodel;
performing size normalization processing on the model area corresponding to the first vehicle type result and the model area corresponding to the second vehicle type result; taking a model area corresponding to the first vehicle type result after normalization processing as a first target model area, and taking a model area corresponding to the second vehicle type result after normalization processing as a second target model area; alternatively, the set partial region in the model region corresponding to the first vehicle type result after the normalization processing is set as the first target model region, and the set partial region in the model region corresponding to the second vehicle type result after the normalization processing is set as the second target model region.
In one possible implementation, the three-dimensional model identifier model includes at least one parameter extraction layer, at least one parameter screening layer, and at least one parameter classification layer; calling a three-dimensional model identification submodel to respectively extract model parameters of the three-dimensional model and the vehicle sample model of the target vehicle, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing, wherein the steps comprise:
respectively extracting model parameters of the three-dimensional model of the target vehicle and the vehicle sample model by using a parameter extraction layer in a three-dimensional model identification submodel, and inputting the extracted model parameters into a parameter screening layer in the three-dimensional model identification submodel;
respectively performing parameter screening processing on model parameters of the three-dimensional model of the target vehicle and the vehicle sample model by using a parameter screening layer in the three-dimensional model identification submodel, and inputting the parameters subjected to the parameter screening processing into a parameter classification layer in the three-dimensional model identification submodel;
classifying by using a parameter classification layer in the three-dimensional model identification submodel according to the parameters after the parameter screening processing of the three-dimensional model of the target vehicle and the corresponding parameter of the vehicle sample model to obtain a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model;
the vehicle type matching submodel comprises at least one parameter extraction layer, at least one parameter screening layer and at least two parameter classification layers, the step of calling the vehicle type matching submodel to extract the characteristic parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result, comparing the extracted characteristic parameters of the two target model areas, and identifying whether the three-dimensional model of the target vehicle is matched with the vehicle sample model according to the comparison result comprises the following steps:
respectively extracting parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result by adopting a parameter extraction layer in a vehicle type matching sub-model, and inputting the extracted parameters into a parameter screening layer in the vehicle type matching sub-model;
adopting a parameter screening layer in the vehicle type matching sub-model to screen parameters extracted by the parameter extraction layer, and inputting the parameters subjected to parameter screening processing into a parameter classification layer in the vehicle type matching sub-model;
and comparing the parameters subjected to parameter screening processing by adopting a parameter classification layer in the vehicle type matching sub-model, and identifying whether the three-dimensional model of the target vehicle is matched with the vehicle sample model according to a comparison result.
In a possible implementation manner, the step of detecting whether license plate information meeting license plate character rules exists in the vehicle images in the plurality of position directions includes:
identifying and obtaining license plate image areas from the vehicle images in the plurality of position directions;
extracting characters in the license plate image area through edge detection;
detecting whether the arrangement sequence of the extracted characters meets the license plate character rule or not;
if the arrangement sequence of the characters meets the license plate character rule, identifying and determining corresponding target characters through a character identification model, and combining the target characters according to the sequence of the target characters in the license plate image after identification to obtain license plate information of the target vehicle;
wherein, the step of identifying and determining the corresponding target character through the character recognition model comprises the following steps:
counting the number of characters of the character result recognized by the character recognition model;
when a recognition character with the largest statistical number exists in a recognition result of the character recognition model for one character, taking the recognition character as the target character;
when a plurality of recognition characters with the largest statistical number exist in the recognition result of the character recognition model for one character recognition, the confidence probability of each recognition character with the largest number is calculated respectively, and the recognition character with the largest confidence probability is taken as the target character.
In a possible implementation manner, the step of sending the acquired vehicle image and the shooting location of the unmanned aerial vehicle to the communication terminal if the license plate information meeting the license plate character rule is not detected includes:
acquiring current state information of all communication terminals within a preset distance range from the target vehicle, wherein the position of the target vehicle is determined by the shooting position of an unmanned aerial vehicle shooting the target vehicle, and the current state information of the communication terminals comprises whether the current communication terminals have events being processed or not and predicted event processing completion time when the events being processed exist;
when the license plate information meeting the license plate character rule is not detected, generating a first event request to be processed;
pre-allocating the first event request to be processed to all communication terminals within a preset distance range from the target vehicle, calculating the time of each communication terminal for predicting the first event request to be processed based on the current state information of the communication terminals, and sequencing the communication terminals according to the time for predicting the first event request to be processed to obtain a case processing timeliness sequence of the communication terminals;
sending the first event to be processed request to a plurality of communication terminals with preset ranks in the case processing timeliness sequence of the communication terminals;
according to the receipt confirmation information fed back by the communication terminals, taking, from among the communication terminals that confirmed receipt, the one ranked highest in the case processing timeliness sequence as the target communication terminal;
and packaging the acquired vehicle image and the position of the target vehicle into a first event to be processed and sending the first event to the target communication terminal.
In one possible implementation manner, when the license plate information satisfying the license plate character rule is detected, the method further includes:
acquiring vehicle registration information corresponding to the license plate information of the target vehicle from a vehicle management station database;
analyzing the vehicle registration information to determine a registered vehicle type corresponding to the license plate information of the target vehicle;
comparing the registered vehicle type with the target vehicle type;
when the registered vehicle type is inconsistent with the target vehicle type, generating a second event request to be processed, and acquiring current state information of all communication terminals within a preset distance range from the target vehicle, wherein the position of the target vehicle is determined by the shooting position of an unmanned aerial vehicle shooting the target vehicle, and the current state information of the communication terminals comprises whether the current communication terminals have events being processed or not and predicted event processing completion time when the events being processed exist;
pre-allocating the second event request to be processed to all communication terminals within a preset distance range from the target vehicle, calculating the time of each communication terminal for predicting the second event request to be processed based on the current state information of the communication terminals, and sequencing the communication terminals according to the time for predicting the second event request to be processed to obtain a case processing timeliness sequence of the communication terminals;
sending the second event request to be processed to a plurality of communication terminals with preset ranks in the case processing timeliness sequence of the communication terminals;
according to the receipt confirmation information fed back by the communication terminals, taking, from among the communication terminals that confirmed receipt, the one ranked highest in the case processing timeliness sequence as the target communication terminal;
and packaging the acquired vehicle image and the position of the target vehicle into a second event to be processed and sending the second event to the target communication terminal.
In one possible implementation manner, when the license plate information satisfying the license plate character rule is detected, the method further includes:
acquiring the time of the target vehicle appearing on a target road section, wherein the time of the target vehicle appearing on the target road section is determined by the shooting time of the unmanned aerial vehicle;
and comparing the time of the target vehicle appearing on the target road section with the preset passable time of the target road section, and if the target vehicle runs on the target road section beyond the preset passable time of the target road section, sending the vehicle image shot by the unmanned aerial vehicle and the shooting place of the unmanned aerial vehicle to a communication terminal.
In a second aspect, an unlicensed muck vehicle identification system based on an unmanned aerial vehicle is provided, which is applied to a control center in communication connection with the unmanned aerial vehicle and a communication terminal respectively, and the system comprises:
the unmanned aerial vehicle comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring vehicle images of a target vehicle shot by the unmanned aerial vehicle in multiple position directions, and the vehicle images comprise a vehicle head image, a vehicle tail image and images on two sides of a vehicle body;
the construction module is used for constructing and obtaining a three-dimensional model of the target vehicle based on the vehicle images of the plurality of position directions;
the identification module is used for inputting the three-dimensional model of the target vehicle into a trained vehicle type identification model for identification;
the detection module is used for detecting whether license plate information meeting license plate character rules exists in the vehicle images in the plurality of position directions if the vehicle type recognition model recognizes that the type of the target vehicle is a muck vehicle;
and the sending module is used for sending the acquired vehicle image and the shooting place of the unmanned aerial vehicle to the communication terminal if the license plate information meeting the license plate character rule is not detected.
In a possible implementation manner, the identification module is specifically configured to:
acquiring a three-dimensional model and a vehicle sample model of a target vehicle to be subjected to vehicle type identification;
calling a three-dimensional model identifier model in a vehicle type identification model to respectively extract model parameters of the three-dimensional model of the target vehicle and the vehicle sample model, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing;
calling a vehicle type matching sub-model in the vehicle type identification model to extract characteristic parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result, comparing the characteristic parameters extracted from the two target model areas, and identifying whether the three-dimensional model of the target vehicle matches the vehicle sample model according to the comparison result, wherein the characteristic parameters comprise a head-to-body ratio, a body-length-to-body-height ratio, a dump-box-to-body ratio and a dump-box-height-to-body-height ratio;
if the three-dimensional model of the target vehicle is not matched with the vehicle sample model, identifying the three-dimensional model of the target vehicle and the model category corresponding to the vehicle sample model, and selecting a deep learning sub-model in a vehicle type identification model according to the model category to perform type identification processing on the three-dimensional model of the target vehicle and the vehicle sample model;
wherein the model structure of the deep learning submodel is more complex than the model structures of the three-dimensional model identifier model and the vehicle type matching submodel;
performing size normalization processing on the model area corresponding to the first vehicle type result and the model area corresponding to the second vehicle type result; taking a model area corresponding to the first vehicle type result after normalization processing as a first target model area, and taking a model area corresponding to the second vehicle type result after normalization processing as a second target model area; alternatively, the set partial region in the model region corresponding to the first vehicle type result after the normalization processing is set as the first target model region, and the set partial region in the model region corresponding to the second vehicle type result after the normalization processing is set as the second target model region.
In a third aspect, an embodiment of the present application provides a control center, where the control center includes a processor, a computer-readable storage medium, and a communication unit, where the computer-readable storage medium, the communication unit, and the processor are connected through a bus interface, the communication unit is configured to be in communication connection with a communication terminal and an unmanned aerial vehicle, the computer-readable storage medium is configured to store a program, an instruction, or a code, and the processor is configured to execute the program, the instruction, or the code in the computer-readable storage medium, so as to execute the unmanned aerial vehicle-based unlicensed muck vehicle identification method in any one of the first aspects.
Based on any one of the above aspects, vehicle images of a target vehicle shot by the unmanned aerial vehicle in multiple position directions are first acquired; then, a three-dimensional model of the target vehicle is constructed based on the vehicle images in the multiple position directions; next, the three-dimensional model of the target vehicle is input into a trained vehicle type recognition model for recognition; then, if the vehicle type recognition model recognizes the target vehicle as a muck vehicle, whether license plate information satisfying license plate character rules exists in the vehicle images in the multiple position directions is detected; finally, if no license plate information satisfying the license plate character rules is detected, the acquired vehicle images and the shooting location of the unmanned aerial vehicle are sent to a communication terminal. In this scheme, the three-dimensional model of the target vehicle is constructed from vehicle images shot at multiple angles, and the type of the target vehicle is identified based on that three-dimensional model, which ensures higher accuracy of type identification than identifying the target vehicle in a two-dimensional image. In addition, when the target vehicle is a muck vehicle, license plate detection is performed on vehicle images in multiple position directions, which avoids missing a license plate. If no license plate is detected, the vehicle image and the vehicle location are sent to the communication terminal, so that a patrol officer holding the communication terminal can handle the scene in time, avoiding serious traffic accidents caused by the unlicensed muck vehicle and ensuring the safety of traffic participants.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic block diagram of an application scenario of an unlicensed muck vehicle identification method based on an unmanned aerial vehicle according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a method for identifying an unlicensed muck vehicle based on an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a sub-step of step S103 in FIG. 2;
Fig. 4 is a functional module diagram of an unlicensed muck vehicle identification system based on an unmanned aerial vehicle according to an embodiment of the present application.
Detailed Description
The present application will now be described in detail with reference to the drawings, and the specific operations in the method embodiments may also be applied to the apparatus embodiments or the system embodiments.
Fig. 1 is a block diagram illustrating an application scenario of the unmanned aerial vehicle-based unlicensed muck vehicle identification method according to the embodiment of the present application. The application scenario may include an unmanned aerial vehicle 10, a control center 20 and a communication terminal 30 that are in communication connection with each other. The communication terminal 30 may be a device (for example, a smartphone) for receiving information sent by the control center 20, and the unmanned aerial vehicle 10 may be of any type; the embodiment of the present application does not limit the specific type or model, and any unmanned aerial vehicle 10 with image shooting, data transmission and positioning functions is applicable to the present application.
The control center 20 may be implemented on a cloud server. The control center 20 may include a processor 210, a computer-readable storage medium 220, a bus 230, and a communication unit 240.
In a specific implementation process, the at least one processor 210 executes computer-executable instructions stored in the computer-readable storage medium 220, so that the processor 210 may perform the steps performed by the control center 20 in the embodiment of the present application, where the processor 210, the computer-readable storage medium 220, and the communication unit 240 are connected through the bus 230, and the processor 210 may be configured to control transceiving actions of the communication unit 240, for example, information transceiving actions between the unmanned aerial vehicle 10 and the communication terminal 30, and the control center 20.
The computer-readable storage medium 220 may comprise high-speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus 230 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
In order to solve the technical problem in the foregoing background art, the flow schematic diagram of the method for identifying the unlicensed muck vehicle based on the unmanned aerial vehicle according to the embodiment of the present application shown in fig. 2 is combined below to describe in detail the method for identifying the unlicensed muck vehicle based on the unmanned aerial vehicle according to the embodiment of the present application.
In step S101, vehicle images of a plurality of position directions of the target vehicle captured by the drone 10 are acquired.
In the embodiment of the application, the vehicle images may include a vehicle head image, a vehicle tail image and images of the two sides of the vehicle body. Specifically, the vehicle images may be obtained as follows: the unmanned aerial vehicle 10 obtains images of the target vehicle at multiple angles by cruising and shooting, and the vehicle images in the various position directions are then selected from these multi-angle images. Taking the vehicle head image as an example, an image in which the ratio of the vehicle head to the whole vehicle reaches a preset ratio (for example, 50%) is selected from the multi-angle images as the vehicle head image; the vehicle images in the other position directions can be obtained in the same way, as illustrated by the sketch below.
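For illustration only, the following Python sketch (all names and numbers are hypothetical; the patent does not specify an implementation) selects, from a set of cruise-shot frames, the frame in which the detected vehicle-head bounding box covers at least the preset proportion of the whole-vehicle bounding box:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    image_id: str
    head_box_area: float      # area of the detected vehicle-head bounding box (pixels^2)
    vehicle_box_area: float   # area of the whole-vehicle bounding box (pixels^2)

def select_head_image(frames: List[Frame], preset_ratio: float = 0.5) -> Optional[Frame]:
    """Return the frame whose head-to-whole-vehicle area ratio is highest,
    provided it reaches the preset ratio (e.g. 50%); otherwise return None."""
    best: Optional[Frame] = None
    best_ratio = 0.0
    for frame in frames:
        if frame.vehicle_box_area <= 0:
            continue
        ratio = frame.head_box_area / frame.vehicle_box_area
        if ratio >= preset_ratio and ratio > best_ratio:
            best, best_ratio = frame, ratio
    return best

# Example: three cruise-shot frames; only the second satisfies the 50% rule.
frames = [
    Frame("f1", head_box_area=1.2e4, vehicle_box_area=9.0e4),
    Frame("f2", head_box_area=5.5e4, vehicle_box_area=9.5e4),
    Frame("f3", head_box_area=3.0e4, vehicle_box_area=8.8e4),
]
print(select_head_image(frames).image_id)  # -> "f2"
```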
Obtaining vehicle images in multiple position directions serves two purposes. On the one hand, it makes it convenient to build a three-dimensional vehicle model from the two-dimensional vehicle images in multiple position directions, so that the type of the target vehicle can be identified based on the three-dimensional vehicle model. Compared with identifying the vehicle type from a single two-dimensional image, the constructed three-dimensional vehicle model captures more of the vehicle's shape features, and those features are not lost because of the shooting angle of a two-dimensional image, so the accuracy of vehicle type identification can be improved. On the other hand, for a large truck such as a muck truck, the license plate information may be located at the head, the tail or the two sides of the carriage; obtaining vehicle images in multiple position directions avoids missing the license plate information, which would cause misjudgment and increase the workload of patrol officers.
And S102, constructing and obtaining a three-dimensional model of the target vehicle based on the vehicle images in the plurality of position directions.
In the embodiment of the application, the three-dimensional model of the target vehicle can be obtained by splicing the vehicle images in a plurality of position directions, and the three-dimensional model is constructed from the two-dimensional image in the following specific manner.
First, two-dimensional position coordinates of different portions of the target vehicle in vehicle images in a plurality of position directions and capturing angles corresponding to the vehicle images in the plurality of position directions are recognized.
Then, a three-dimensional position coordinate set corresponding to the vehicle images in the plurality of position directions is acquired, and three-dimensional position coordinates corresponding to each position point in the target vehicle are acquired according to the corresponding relation between the three-dimensional position coordinate set and the corresponding position point of the target vehicle in the vehicle images in the plurality of position directions. Specifically, the same target position of the target vehicle corresponds to the same three-dimensional position coordinates in different vehicle images.
Then, a plurality of target vehicle orientation maps are obtained based on the vehicle images in the plurality of position directions, and feature recognition processing is performed on the vehicle images in the plurality of position directions based on a feature recognition layer in the vehicle detection model to obtain target vehicle position feature information corresponding to each target vehicle orientation map, where the target vehicle position feature information may include specific positions of the vehicle, such as positions of a head mirror, a head windshield, wheels, and the like.
Then, the position coordinates corresponding to the target vehicle position feature information are identified, and the area corresponding to the position coordinates in the vehicle images in the plurality of position directions is set as the vehicle bearing area.
And finally, based on the vehicle orientation area and the corresponding shooting angle, splicing the positions of the same three-dimensional position coordinates in the vehicle orientation area to obtain a three-dimensional model of the target vehicle.
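The following is a minimal Python sketch of the stitching idea, assuming each vehicle orientation area has already been mapped to per-point three-dimensional coordinates; the data layout and the colour averaging are illustrative assumptions rather than the patent's actual procedure:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Coord3D = Tuple[float, float, float]

def stitch_vehicle_model(
    view_points: Dict[str, Dict[Coord3D, Tuple[int, int, int]]]
) -> Dict[Coord3D, Tuple[int, int, int]]:
    """Merge per-view observations into a single 3D point set.

    view_points maps a view name (e.g. "head", "tail", "left", "right") to a
    mapping from 3D position coordinate to the RGB value observed for that
    coordinate inside the view's vehicle orientation area. Points that share
    the same 3D coordinate in different views are stitched together by
    averaging their observed colours.
    """
    samples: Dict[Coord3D, List[Tuple[int, int, int]]] = defaultdict(list)
    for view in view_points.values():
        for coord, rgb in view.items():
            samples[coord].append(rgb)

    model: Dict[Coord3D, Tuple[int, int, int]] = {}
    for coord, rgbs in samples.items():
        n = len(rgbs)
        model[coord] = (
            sum(c[0] for c in rgbs) // n,
            sum(c[1] for c in rgbs) // n,
            sum(c[2] for c in rgbs) // n,
        )
    return model

# Two views observing an overlapping corner point of the dump box.
views = {
    "left": {(1.0, 0.5, 2.0): (120, 120, 120), (1.0, 0.5, 2.5): (118, 119, 121)},
    "tail": {(1.0, 0.5, 2.0): (124, 122, 120)},
}
print(len(stitch_vehicle_model(views)))  # -> 2 distinct 3D points
```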
And step S103, inputting the three-dimensional model of the target vehicle into the trained vehicle type recognition model for recognition.
In this step, the trained vehicle type recognition model can recognize various types of vehicles, for example cars, buses and trucks. It can be understood that, in the embodiment of the present application, the vehicle type recognition model can be trained in a supervised manner so that it can distinguish different types of muck trucks (for example, dump-truck-type muck trucks or "bumblebee"-type muck trucks).
And step S104, if the type of the target vehicle identified by the vehicle type identification model is a muck vehicle, detecting whether license plate information meeting license plate character rules exists in the vehicle images in multiple position directions.
The license plate information of the muck car can be located on the car head, the car tail and the carriages on the two sides of the car body, and the detection of the car images in the position directions can ensure that the license plate information cannot be missed.
And step S105, if the license plate information meeting the license plate character rule is not detected, sending the acquired vehicle image and the shooting location of the unmanned aerial vehicle to the communication terminal.
The image of the muck vehicle for which no license plate information is detected and the place where it appeared are sent to the communication terminal, so that a patrol officer can deal with the muck vehicle in time and prevent traffic accidents caused by drivers deliberately violating traffic rules because the vehicle carries no license plate information.
According to the above scheme, vehicle images of a target vehicle shot by the unmanned aerial vehicle in multiple position directions are first acquired; then, a three-dimensional model of the target vehicle is constructed based on the vehicle images in the multiple position directions; next, the three-dimensional model of the target vehicle is input into a trained vehicle type recognition model for recognition; then, if the vehicle type recognition model recognizes the target vehicle as a muck vehicle, whether license plate information satisfying license plate character rules exists in the vehicle images in the multiple position directions is detected; finally, if no license plate information satisfying the license plate character rules is detected, the acquired vehicle images and the shooting location of the unmanned aerial vehicle are sent to a communication terminal. In this scheme, the three-dimensional model of the target vehicle is constructed from vehicle images shot at multiple angles, and the type of the target vehicle is identified based on that three-dimensional model, which ensures higher accuracy of type identification than identifying the target vehicle in a two-dimensional image. In addition, when the target vehicle is a muck vehicle, license plate detection is performed on vehicle images in multiple position directions, which avoids missing a license plate. If no license plate is detected, the vehicle image and the vehicle location are sent to the communication terminal, so that a patrol officer holding the communication terminal can handle the scene in time, avoiding serious traffic accidents caused by the unlicensed muck vehicle and ensuring the safety of traffic participants.
In this embodiment, the vehicle type identification model may include a three-dimensional model identification submodel, a vehicle type matching submodel, and a deep learning submodel, where a model structure of the deep learning submodel is complex compared with model structures of the three-dimensional model identification submodel and the vehicle type matching submodel. Referring to fig. 3, step S103 can be implemented by the following sub-steps.
And a substep S1031 of obtaining a three-dimensional model and a vehicle sample model of the target vehicle to be subjected to vehicle type identification.
The vehicle sample model may include, among other things, three-dimensional models of a plurality of different types of dirt vehicles.
And a substep S1032 of calling a three-dimensional model identification submodel to respectively extract model parameters of the three-dimensional model and the vehicle sample model of the target vehicle, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing.
The three-dimensional model identification submodel comprises at least one parameter extraction layer, at least one parameter screening layer and at least one parameter classification layer, and the substep can be realized in the following mode.
Firstly, a parameter extraction layer in a three-dimensional model identification submodel is adopted to respectively extract model parameters of the three-dimensional model of the target vehicle and the vehicle sample model, and the extracted model parameters are input into a parameter screening layer in the three-dimensional model identification submodel.
And then, respectively carrying out parameter screening processing on the model parameters of the three-dimensional model of the target vehicle and the vehicle sample model by adopting a parameter screening layer in the three-dimensional model identification submodel, and inputting the parameters after the parameter screening processing into a parameter classification layer in the three-dimensional model identification submodel.
And finally, classifying by using a parameter classification layer in the three-dimensional model identification submodel according to the three-dimensional model of the target vehicle and the parameters after the parameter screening processing corresponding to the vehicle sample model to obtain a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model.
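As one hypothetical realisation of this extraction-screening-classification layout, the sketch below uses PyTorch; the layer sizes, the flattened 3D-model input format and the dropout-based "screening" are assumptions made only for illustration, not the patent's actual architecture:

```python
import torch
import torch.nn as nn

class ThreeDModelIdentifier(nn.Module):
    """A minimal sketch of the submodel layout described above: a parameter
    extraction layer, a parameter screening layer and a parameter
    classification layer. All sizes are illustrative assumptions."""

    def __init__(self, num_input_params: int = 512, num_vehicle_types: int = 8):
        super().__init__()
        # Parameter extraction layer: extracts model parameters from the
        # flattened 3D-model representation.
        self.extract = nn.Sequential(nn.Linear(num_input_params, 256), nn.ReLU())
        # Parameter screening layer: keeps the informative parameters
        # (realised here as a learned bottleneck with dropout).
        self.screen = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Dropout(0.2))
        # Parameter classification layer: outputs a vehicle-type result.
        self.classify = nn.Linear(64, num_vehicle_types)

    def forward(self, flattened_model: torch.Tensor) -> torch.Tensor:
        params = self.extract(flattened_model)
        screened = self.screen(params)
        return self.classify(screened)

identifier = ThreeDModelIdentifier()
target_model = torch.randn(1, 512)   # flattened target-vehicle 3D model (illustrative)
sample_model = torch.randn(1, 512)   # flattened vehicle sample model (illustrative)
first_result = identifier(target_model).argmax(dim=1)   # first vehicle type result
second_result = identifier(sample_model).argmax(dim=1)  # second vehicle type result
print(first_result.item(), second_result.item())
```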
And a substep S1033 of calling a vehicle type matching submodel to extract characteristic parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result, comparing the extracted characteristic parameters of the first target model area and the second target model area, and identifying whether the three-dimensional model of the target vehicle is matched with the vehicle sample model according to the comparison result.
The characteristic parameters include the ratio of the vehicle head to the vehicle body, the ratio of the vehicle body length to the vehicle body height, the ratio of the dump box to the vehicle body, and the ratio of the dump box height to the vehicle body height. In this sub-step, the first target model region and the second target model region may be determined as follows: size normalization is performed on the model region corresponding to the first vehicle type result and the model region corresponding to the second vehicle type result; the normalized model region corresponding to the first vehicle type result is taken as the first target model region, and the normalized model region corresponding to the second vehicle type result is taken as the second target model region; alternatively, a set partial region of the normalized model region corresponding to the first vehicle type result is taken as the first target model region, and a set partial region of the normalized model region corresponding to the second vehicle type result is taken as the second target model region.
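A minimal sketch of how these characteristic parameters might be computed and compared is given below; the region dimensions, field names and matching tolerance are assumptions, since the patent only names the ratios:

```python
from dataclasses import dataclass

@dataclass
class ModelRegion:
    """Dimensions measured on a (size-normalised) target model region.
    Field names are illustrative; the patent only names the ratios."""
    head_length: float
    body_length: float
    body_height: float
    dump_box_length: float
    dump_box_height: float

def feature_parameters(r: ModelRegion) -> dict:
    # The four characteristic parameters listed above.
    return {
        "head_to_body": r.head_length / r.body_length,
        "body_length_to_height": r.body_length / r.body_height,
        "dump_box_to_body": r.dump_box_length / r.body_length,
        "dump_box_height_to_body_height": r.dump_box_height / r.body_height,
    }

def regions_match(a: ModelRegion, b: ModelRegion, tolerance: float = 0.1) -> bool:
    """Treat the two regions as matching when every characteristic
    parameter differs by less than the (assumed) relative tolerance."""
    fa, fb = feature_parameters(a), feature_parameters(b)
    return all(abs(fa[k] - fb[k]) / max(fb[k], 1e-9) < tolerance for k in fa)

target = ModelRegion(2.1, 7.8, 3.1, 5.3, 1.6)
sample = ModelRegion(2.0, 7.6, 3.0, 5.2, 1.5)
print(regions_match(target, sample))  # True when all four ratios are close
```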
In this sub-step, the vehicle type matching sub-model includes at least one parameter extraction layer, at least one parameter screening layer, and at least two parameter classification layers, and the sub-step S1033 may be implemented as follows.
Firstly, parameter extraction is respectively carried out on a first target model area corresponding to a first vehicle type result and a second target model area corresponding to a second vehicle type result by adopting a parameter extraction layer in a vehicle type matching sub-model, and the extracted parameters are input into a parameter screening layer in the vehicle type matching sub-model.
And then, adopting a parameter screening layer in the vehicle type matching sub-model to screen the parameters extracted by the parameter extraction layer, and inputting the parameters subjected to parameter screening processing into a parameter classification layer in the vehicle type matching sub-model.
And finally, comparing the parameters subjected to parameter screening by using a parameter classification layer in the vehicle type matching sub-model, and identifying whether the three-dimensional model of the target vehicle is matched with the vehicle sample model according to a comparison result.
And S1034, if the three-dimensional model of the target vehicle is not matched with the vehicle sample model, identifying the three-dimensional model of the target vehicle and the model category corresponding to the vehicle sample model, and selecting a deep learning sub-model in a vehicle type identification model according to the model category to perform type identification processing on the three-dimensional model of the target vehicle and the vehicle sample model. Wherein the model categories correspond to different types of the slag car.
In the embodiment of the present application, step S104 may be implemented in the following manner.
Firstly, a license plate image area is identified from the vehicle images in the plurality of position directions.
And then, extracting characters in the license plate image area through edge detection.
Then, whether the arrangement sequence of the extracted characters meets the license plate character rule or not is detected.
The license plate character rule is generally "province abbreviation + city letter + five-character string". Taking a license plate of the form 黑A·xHxY as an example (x denoting a masked character), 黑 (Hei) is the abbreviation for Heilongjiang Province, and 黑A denotes Harbin, Heilongjiang.
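As an illustration only, a simplified regular-expression check of this character rule might look as follows; the character sets are assumptions, and real plate rules contain further restrictions:

```python
import re

# Simplified pattern for "province abbreviation + city letter + five-character
# string". The province-abbreviation set and the trailing character set are
# assumptions; real plates exclude letters such as I and O and allow
# special-purpose suffixes.
PLATE_RULE = re.compile(
    r"^[京津沪渝冀豫云辽黑湘皖鲁新苏浙赣鄂桂甘晋蒙陕吉闽贵粤青藏川宁琼]"  # province abbreviation
    r"[A-Z]"                                                              # city letter
    r"[A-Z0-9]{5}$"                                                       # five-character string
)

def satisfies_plate_rule(characters: str) -> bool:
    return PLATE_RULE.fullmatch(characters) is not None

print(satisfies_plate_rule("黑A12345"))   # True
print(satisfies_plate_rule("12345黑A"))   # False: wrong arrangement order
```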
And finally, if the arrangement sequence of the characters meets the license plate character rule, identifying and determining corresponding target characters through a character identification model, and combining the target characters according to the sequence of the target characters in the license plate image after identification to obtain the license plate information of the target vehicle.
Specifically, for each character position, the occurrences of each character recognized by the character recognition model are counted. If a single recognized character has the largest count, that character is taken as the target character. If several recognized characters tie for the largest count, the confidence probability of each of these characters is calculated, and the one with the largest confidence probability is taken as the target character.
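The following Python sketch illustrates this voting-with-confidence-tiebreak rule for a single plate position; the input format and the source of the confidence values are assumptions:

```python
from collections import Counter
from typing import Dict, List

def pick_target_character(recognitions: List[str],
                          confidences: Dict[str, float]) -> str:
    """Choose the target character for one plate position.

    recognitions: the characters recognised for this position across the
    vehicle images in the several position directions.
    confidences: the confidence probability of each candidate character
    (assumed to be provided by the character recognition model).
    """
    counts = Counter(recognitions)
    top_count = max(counts.values())
    top_chars = [c for c, n in counts.items() if n == top_count]
    if len(top_chars) == 1:
        # A single character has the largest count: take it directly.
        return top_chars[0]
    # Several characters tie for the largest count: fall back to confidence.
    return max(top_chars, key=lambda c: confidences.get(c, 0.0))

# The same plate position recognised in four images: "8" twice, "B" twice.
print(pick_target_character(["8", "B", "8", "B"], {"8": 0.91, "B": 0.82}))  # -> "8"
```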
In the embodiment of the present application, step S105 may be implemented in the following manner.
Firstly, when the license plate information meeting the license plate character rule is not detected, a first event request to be processed is generated.
The first event to be processed request comprises processing event information, such as intercepting and checking a muck vehicle without a license plate.
Then, pre-allocating a first event request to be processed to all communication terminals within a preset distance range (for example, within 5 km) from the target vehicle, calculating the time of each communication terminal for predicting processing the first event request based on the current state information of the communication terminals, and sequencing the communication terminals according to the time for predicting processing the first event request to be processed to obtain a case processing timeliness sequence of the communication terminals.
And then, sending the first event to be processed request to a plurality of communication terminals with preset ranks in the case processing timeliness sequence of the communication terminals.
And then, according to the receipt confirmation information fed back by the notified communication terminals, the communication terminal ranked highest in the case processing timeliness sequence among those that confirmed receipt is taken as the target communication terminal.
And finally, packaging the acquired vehicle image and the position of the target vehicle into a first event to be processed and sending the first event to the target communication terminal.
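A compact sketch of this dispatch flow is given below; the distance threshold, travel-speed estimate, top-k value and confirmation handling are all illustrative assumptions rather than values from the patent:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TerminalState:
    terminal_id: str
    distance_km: float              # distance from the target vehicle
    busy_until_min: float           # 0 if no event is currently being processed
    travel_speed_kmh: float = 40.0  # assumed patrol travel speed

    def expected_handling_time(self) -> float:
        """Predicted time (minutes) until this terminal could handle the event:
        remaining time of the event being processed plus travel time."""
        return self.busy_until_min + self.distance_km / self.travel_speed_kmh * 60.0

def dispatch_event(terminals: List[TerminalState],
                   max_distance_km: float = 5.0,
                   top_k: int = 3,
                   confirmed: Optional[set] = None) -> Optional[str]:
    """Pre-allocate to terminals within the preset distance, rank them by
    predicted handling time, notify the top-k, and pick the highest-ranked
    terminal among those that confirmed receipt."""
    nearby = [t for t in terminals if t.distance_km <= max_distance_km]
    ranked = sorted(nearby, key=TerminalState.expected_handling_time)
    notified = ranked[:top_k]
    confirmed = confirmed or {t.terminal_id for t in notified}  # assume all confirm
    for t in notified:                     # ranked order = case processing timeliness sequence
        if t.terminal_id in confirmed:
            return t.terminal_id
    return None

terminals = [
    TerminalState("patrol-1", distance_km=1.2, busy_until_min=20.0),
    TerminalState("patrol-2", distance_km=3.5, busy_until_min=0.0),
    TerminalState("patrol-3", distance_km=6.0, busy_until_min=0.0),  # outside 5 km
]
print(dispatch_event(terminals))  # -> "patrol-2"
```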
In the embodiment of the application, through the above process, the event of intercepting and checking the unlicensed muck vehicle can be sent to a patrol officer (communication terminal) who is able to handle it promptly, so that the violation can be dealt with in time.
In this embodiment of the present application, when detecting license plate information that satisfies a license plate character rule, the method further includes:
acquiring vehicle registration information corresponding to the license plate information of the target vehicle from a vehicle management station database;
analyzing the vehicle registration information to determine a registered vehicle type corresponding to the license plate information of the target vehicle;
comparing the registered vehicle type with the target vehicle type;
when the registered vehicle type is inconsistent with the target vehicle type, generating a second event request to be processed, and acquiring current state information of all communication terminals within a preset distance range from the target vehicle, wherein the position of the target vehicle is determined by the shooting position of an unmanned aerial vehicle shooting the target vehicle, and the current state information of the communication terminals comprises whether the current communication terminals have events being processed or not and predicted event processing completion time when the events being processed exist;
pre-allocating the second event request to be processed to all communication terminals within a preset distance range from the target vehicle, calculating the time of each communication terminal for predicting the second event request to be processed based on the current state information of the communication terminals, and sequencing the communication terminals according to the time for predicting the second event request to be processed to obtain a case processing timeliness sequence of the communication terminals;
sending the second event request to be processed to a plurality of communication terminals with preset ranks in the case processing timeliness sequence of the communication terminals;
according to the receipt confirmation information fed back by the communication terminals, taking, from among the communication terminals that confirmed receipt, the one ranked highest in the case processing timeliness sequence as the target communication terminal;
and packaging the acquired vehicle image and the position of the target vehicle into a second event to be processed and sending the second event to the target communication terminal.
Through the above processing, the violation in which the license plate information of a muck vehicle is inconsistent with its registered vehicle type can be checked, preventing muck vehicles from evading traffic penalties with fake plates and disregarding traffic rules, which would create potential safety hazards for road traffic.
Generally, large cities restrict the hours during which muck trucks may be on the road, and a muck truck may only enter a target road section within a specified time period. In order to detect muck trucks travelling outside the permitted hours, the unlicensed muck vehicle identification method provided by the embodiment of the application may further include the following steps:
acquiring the time of the target vehicle appearing on a target road section, wherein the time of the target vehicle appearing on the target road section is determined by the shooting time of the unmanned aerial vehicle;
and comparing the time at which the target vehicle appears on the target road section with the preset passable time of that road section, and, if the target vehicle is travelling on the target road section outside the preset passable time, sending the vehicle image shot by the unmanned aerial vehicle and the shooting place of the unmanned aerial vehicle to a communication terminal.
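A minimal sketch of the passable-time comparison follows; the 22:00-06:00 window and the notify callback standing in for the message to the communication terminal are illustrative assumptions.

from datetime import datetime, time

# Illustrative passable window for the target road section (values are assumptions).
PASSABLE_START = time(22, 0)   # 22:00
PASSABLE_END = time(6, 0)      # 06:00 the next morning


def within_passable_time(shot_at: datetime) -> bool:
    """True if the UAV shooting time falls inside the preset passable window."""
    t = shot_at.time()
    if PASSABLE_START <= PASSABLE_END:
        return PASSABLE_START <= t <= PASSABLE_END
    # Window crosses midnight (e.g. 22:00-06:00).
    return t >= PASSABLE_START or t <= PASSABLE_END


def check_road_section(shot_at: datetime, vehicle_image, location, notify) -> bool:
    """Send the vehicle image and shooting place to a terminal when the vehicle is on
    the target road section outside the passable time; returns True on a violation."""
    if within_passable_time(shot_at):
        return False
    notify(vehicle_image, location)
    return True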
Referring to fig. 4, fig. 4 is a schematic diagram of the functional modules of an unmanned aerial vehicle-based unlicensed muck vehicle identification system 300 according to an embodiment of the present disclosure. In this embodiment, the functional modules of the system 300 may be divided according to the above method embodiments, that is, each functional module corresponds to a step of the method embodiments. The system 300 may include an acquisition module 310, a construction module 320, an identification module 330, a detection module 340, and a sending module 350; the functions of these modules are described in detail below.
An acquisition module 310, configured to acquire vehicle images of a target vehicle photographed by the unmanned aerial vehicle in a plurality of position directions, where the vehicle images include a vehicle head image, a vehicle tail image, and images of the two sides of the vehicle body.
In this embodiment of the application, the vehicle images may include a vehicle head image, a vehicle tail image, and images of the two sides of the vehicle body. Specifically, the vehicle images may be obtained as follows: the unmanned aerial vehicle 10 captures images of the target vehicle from a plurality of angles while cruising, and the vehicle image for each position direction is then selected from these multi-angle images. Taking the vehicle head image as an example, an image in which the ratio of the vehicle head to the whole vehicle reaches a preset ratio (for example, 50%) is selected from the multi-angle images as the vehicle head image; the images for the other position directions can be obtained in the same way. A simple selection sketch is given below.
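Purely as an illustration of the selection rule above, the following sketch picks the vehicle head image from the cruise shots; the head_box/vehicle_box detector fields and the 0.5 threshold are assumptions taken from the 50% example.

from typing import Iterable, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def box_area(box: Box) -> float:
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)


def select_head_image(shots: Iterable[dict], min_ratio: float = 0.5) -> Optional[dict]:
    """Pick the cruise shot whose vehicle-head box covers at least min_ratio of the
    whole-vehicle box; each shot is assumed to carry detector output for both boxes."""
    best, best_ratio = None, 0.0
    for shot in shots:
        head, vehicle = shot.get("head_box"), shot.get("vehicle_box")
        if head is None or vehicle is None:
            continue
        ratio = box_area(head) / max(box_area(vehicle), 1e-6)
        if ratio >= min_ratio and ratio > best_ratio:
            best, best_ratio = shot, ratio
    return best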
By having the acquisition module 310 obtain vehicle images in a plurality of position directions, on the one hand a three-dimensional vehicle model can be constructed from the two-dimensional images, so that the type of the target vehicle is identified from the three-dimensional model. Compared with identifying the vehicle type from a single two-dimensional image, the constructed three-dimensional model captures more of the vehicle's shape features, and those features are not lost because of the shooting angle of any one two-dimensional image, so the accuracy of vehicle type identification can be improved. On the other hand, for a large truck such as a muck vehicle the license plate information may be located at the head, at the tail, or on either side of the carriage; acquiring vehicle images in multiple position directions avoids missed detection of the license plate information, which would otherwise cause misjudgments and increase the workload of the patrol police.
A construction module 320, configured to construct a three-dimensional model of the target vehicle based on the vehicle images in the plurality of position directions.
In this embodiment of the application, the construction module 320 may obtain the three-dimensional model of the target vehicle by stitching together the vehicle images in the plurality of position directions; the three-dimensional model is constructed from the two-dimensional images in the following manner (a skeleton of this pipeline is sketched after the steps below).
First, the two-dimensional position coordinates of different parts of the target vehicle in the vehicle images of the plurality of position directions, and the shooting angles corresponding to those vehicle images, are recognized.
Then, a three-dimensional position coordinate set corresponding to the vehicle images in the plurality of position directions is acquired, and three-dimensional position coordinates corresponding to each position point in the target vehicle are acquired according to the corresponding relation between the three-dimensional position coordinate set and the corresponding position point of the target vehicle in the vehicle images in the plurality of position directions. Specifically, the same target position of the target vehicle corresponds to the same three-dimensional position coordinates in different vehicle images.
Then, a plurality of target vehicle orientation maps are obtained based on the vehicle images in the plurality of position directions, and feature recognition processing is performed on the vehicle images in the plurality of position directions based on a feature recognition layer in the vehicle detection model to obtain target vehicle position feature information corresponding to each target vehicle orientation map, where the target vehicle position feature information may include specific positions of the vehicle, such as positions of a head mirror, a head windshield, wheels, and the like.
Then, the position coordinates corresponding to the target vehicle position feature information are identified, and the areas corresponding to those position coordinates in the vehicle images of the plurality of position directions are taken as the vehicle orientation areas.
And finally, based on the vehicle orientation area and the corresponding shooting angle, splicing the positions of the same three-dimensional position coordinates in the vehicle orientation area to obtain a three-dimensional model of the target vehicle.
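The construction steps above can be summarized by the skeleton below; every helper function is a stand-in for the corresponding step (part detection, 2D-to-3D mapping, feature recognition, orientation-area cropping), not the actual reconstruction algorithm.

from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]


# Stand-in implementations so the skeleton runs; the real versions would come from
# the vehicle detection model and the camera geometry of the UAV.
def detect_part_coordinates(image) -> Dict[str, Tuple[int, int]]:
    return {"front_left_wheel": (120, 340)}


def map_to_3d(coords_2d: Dict[str, Tuple[int, int]], angle: float) -> Dict[str, Point3D]:
    return {part: (float(x), float(y), float(angle)) for part, (x, y) in coords_2d.items()}


def recognise_position_features(image) -> List[str]:
    return ["front_left_wheel"]


def crop_orientation_area(image, features: List[str]):
    return image


def build_vehicle_model(views: List[dict]) -> Dict[Point3D, list]:
    """Skeleton of the 2D-to-3D construction; each view carries an image and its shooting angle."""
    model: Dict[Point3D, list] = {}
    for view in views:
        coords_2d = detect_part_coordinates(view["image"])                  # 2D coordinates of vehicle parts
        coords_3d = map_to_3d(coords_2d, view["angle"])                     # shared 3D coordinates per part
        features = recognise_position_features(view["image"])               # mirrors, windshield, wheels, ...
        orientation_area = crop_orientation_area(view["image"], features)   # vehicle orientation area
        for part, point in coords_3d.items():
            # Stitch patches that share the same 3D coordinate across different views.
            model.setdefault(point, []).append((part, orientation_area, view["angle"]))
    return model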
And the identification module 330 is configured to input the three-dimensional model of the target vehicle into the trained vehicle type identification model for identification.
The trained vehicle type identification model can identify various types of vehicles, such as cars, buses, and trucks. It can be understood that, in this embodiment of the application, the vehicle type identification model can be trained in a supervised manner so that it can also distinguish different types of muck vehicles (such as tipping-bucket muck vehicles or bumblebee-type muck vehicles).
The detecting module 340 is configured to detect whether license plate information meeting license plate character rules exists in the vehicle images in the multiple position directions if the vehicle type recognition model recognizes that the type of the target vehicle is a muck vehicle.
The license plate information of a muck vehicle may be located on the vehicle head, the vehicle tail, or the carriage sides of the vehicle body, so performing detection on the vehicle images in all of these position directions ensures that license plate information is not missed.
And the sending module 350 is configured to send the acquired vehicle image and the shooting location of the unmanned aerial vehicle to the communication terminal if the license plate information meeting the license plate character rule is not detected.
The image of the muck vehicle for which no license plate information was detected, together with the place where it appeared, is sent to the communication terminal so that a patrol officer can deal with the vehicle in time and prevent traffic accidents caused by unlicensed muck vehicles deliberately violating traffic rules. The overall detect-and-dispatch flow is sketched below.
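A compact sketch of the detection and sending modules working together might look as follows; the PLATE_RULE pattern, find_plate_text, and the notify callback are stand-ins for the real license plate detector, character-rule check, and terminal messaging, not the patented components.

import re
from typing import List, Optional

# Illustrative character rule: one CJK province character, one letter, then five
# letters or digits; the real rule set is not specified here.
PLATE_RULE = re.compile(r"^[\u4e00-\u9fa5][A-Z][A-Z0-9]{5}$")


def plate_rule_ok(text: str) -> bool:
    return bool(PLATE_RULE.match(text))


def find_plate_text(image) -> Optional[str]:
    # Placeholder for license plate region detection plus character recognition on one view.
    return None


def handle_muck_vehicle(views: List, location, notify) -> Optional[str]:
    """Check every view (head, tail, both sides of the body); only when no view yields
    a plate satisfying the character rule are the images and location sent out."""
    for image in views:
        text = find_plate_text(image)
        if text and plate_rule_ok(text):
            return text           # a valid plate was found, nothing to dispatch
    notify(views, location)       # unlicensed: send images and shooting place to a terminal
    return None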
In a possible implementation manner of the embodiment of the present application, the identification module 330 is specifically configured to:
acquiring a three-dimensional model and a vehicle sample model of a target vehicle to be subjected to vehicle type identification;
calling a three-dimensional model identifier model in a vehicle type identification model to respectively extract model parameters of the three-dimensional model of the target vehicle and the vehicle sample model, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing;
calling a vehicle type matching sub-model in the vehicle type identification model to extract characteristic parameters of a first target model area corresponding to the first vehicle type result and of a second target model area corresponding to the second vehicle type result, comparing the characteristic parameters extracted from the two target model areas, and identifying whether the three-dimensional model of the target vehicle matches the vehicle sample model according to the comparison result, wherein the characteristic parameters include a vehicle head-to-vehicle body ratio, a vehicle body length-to-height ratio, a tipping wagon box-to-vehicle body ratio and a tipping wagon height-to-vehicle body ratio;
if the three-dimensional model of the target vehicle is not matched with the vehicle sample model, identifying the three-dimensional model of the target vehicle and the model category corresponding to the vehicle sample model, and selecting a deep learning sub-model in a vehicle type identification model according to the model category to perform type identification processing on the three-dimensional model of the target vehicle and the vehicle sample model;
wherein the model structure of the deep learning submodel is more complex than the model structures of the three-dimensional model identifier model and the vehicle type matching submodel;
performing size normalization processing on the model area corresponding to the first vehicle type result and the model area corresponding to the second vehicle type result; taking the normalized model area corresponding to the first vehicle type result as the first target model area and the normalized model area corresponding to the second vehicle type result as the second target model area; alternatively, taking a set partial region of the normalized model area corresponding to the first vehicle type result as the first target model area and a set partial region of the normalized model area corresponding to the second vehicle type result as the second target model area. A sketch of this hierarchical recognition flow is given below.
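As a rough illustration of this hierarchical flow (coarse classification, characteristic-ratio matching, then a heavier deep-learning fallback), the sketch below uses stand-in sub-models and an assumed matching tolerance; none of the thresholds or helper names come from the patent itself.

from dataclasses import dataclass


@dataclass
class Ratios:
    head_to_body: float
    length_to_height: float
    box_to_body: float
    box_height_to_body: float


# Stand-in sub-models so the sketch runs; the real sub-models are those described above.
def coarse_classify(model) -> str:
    return "dump_truck"


def extract_ratios(model, vehicle_type: str) -> Ratios:
    return Ratios(0.3, 2.5, 0.6, 0.4)


def deep_identify(target_model, sample_model, category) -> str:
    return "muck_truck"


def ratios_match(a: Ratios, b: Ratios, tol: float = 0.1) -> bool:
    """Match when every characteristic ratio differs by at most tol (illustrative tolerance)."""
    pairs = zip(vars(a).values(), vars(b).values())
    return all(abs(x - y) <= tol * max(abs(y), 1e-6) for x, y in pairs)


def identify_vehicle_type(target_model, sample_model) -> str:
    # Stage 1: coarse classification of the target model and the sample model.
    target_type = coarse_classify(target_model)
    sample_type = coarse_classify(sample_model)

    # Stage 2: compare the characteristic ratios of the two target model areas.
    if ratios_match(extract_ratios(target_model, target_type),
                    extract_ratios(sample_model, sample_type)):
        return sample_type

    # Stage 3: fall back to the heavier deep-learning sub-model chosen by model category.
    return deep_identify(target_model, sample_model, category=(target_type, sample_type))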
It should be noted that the division of the above system into modules is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity, or may be physically separate. All of these modules may be implemented as software (e.g., open source software) invoked by a processing element, or entirely in hardware, or some modules may be implemented as software invoked by the processing element while others are implemented in hardware. For example, the recognition module 330 may be a separate processing element, may be integrated into a chip of the system, or may be stored in a memory of the system in the form of program code whose function is called and executed by a processing element of the system; the other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal-processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
The purpose of the application is to provide an unmanned aerial vehicle-based unlicensed muck vehicle identification method and system. The method comprises: first, acquiring vehicle images of a target vehicle shot by the unmanned aerial vehicle in a plurality of position directions; then, constructing a three-dimensional model of the target vehicle based on the vehicle images in the plurality of position directions; then, inputting the three-dimensional model of the target vehicle into a trained vehicle type recognition model for recognition; then, if the vehicle type recognition model recognizes that the type of the target vehicle is a muck vehicle, detecting whether license plate information satisfying the license plate character rules exists in the vehicle images in the plurality of position directions; and finally, if no license plate information satisfying the license plate character rules is detected, sending the acquired vehicle image and the shooting place of the unmanned aerial vehicle to a communication terminal. In this scheme, the three-dimensional model of the target vehicle is constructed from vehicle images shot at multiple angles, and the type of the target vehicle is identified from that three-dimensional model, which gives better accuracy than identifying the target vehicle from a two-dimensional image. In addition, when the target vehicle is a muck vehicle, license plate detection is performed on the vehicle images in all of the position directions, so missed detection of the license plate is avoided. When no license plate is detected, the vehicle image and the vehicle's location are sent to the communication terminal, so that a patrol officer holding the communication terminal can conveniently handle the situation on site in time, serious traffic accidents caused by the unlicensed muck vehicle are avoided, and the traffic safety of traffic participants is ensured.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Finally, it should be understood that the examples in this specification are only intended to illustrate the principles of the examples in this specification. Other variations are also possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. An unlicensed muck vehicle identification method based on an unmanned aerial vehicle is characterized in that the method is applied to a control center which is respectively in communication connection with the unmanned aerial vehicle and a communication terminal, and the method comprises the following steps:
acquiring vehicle images of a target vehicle shot by the unmanned aerial vehicle in multiple position directions, wherein the vehicle images comprise a vehicle head image, a vehicle tail image and images on two sides of a vehicle body;
constructing a three-dimensional model of the target vehicle based on the vehicle images of the plurality of position directions;
inputting the three-dimensional model of the target vehicle into a trained vehicle type recognition model for recognition;
if the vehicle type recognition model recognizes that the type of the target vehicle is a muck vehicle, detecting whether license plate information meeting license plate character rules exists in the vehicle images in the plurality of position directions;
and if the license plate information meeting the license plate character rule is not detected, sending the acquired vehicle image and the shooting location of the unmanned aerial vehicle to a communication terminal.
2. The method of claim 1, wherein the step of constructing a three-dimensional model of the target vehicle based on the vehicle images of the plurality of location directions comprises:
identifying two-dimensional position coordinates of different parts of the target vehicle in the vehicle images in the plurality of position directions and shooting angles corresponding to the vehicle images in the plurality of position directions;
acquiring a three-dimensional position coordinate set corresponding to the vehicle images in the plurality of position directions, and acquiring three-dimensional position coordinates corresponding to each position point in a target vehicle according to the corresponding relation between the three-dimensional position coordinate set and the corresponding position point of the target vehicle in the vehicle images in the plurality of position directions;
obtaining a plurality of target vehicle orientation maps based on the vehicle images in the plurality of position directions, and performing feature recognition processing on the vehicle images in the plurality of position directions based on a feature recognition layer in a vehicle detection model to obtain target vehicle position feature information corresponding to each target vehicle orientation map;
identifying position coordinates corresponding to the target vehicle position feature information, and taking areas corresponding to the position coordinates in the vehicle images in the plurality of position directions as vehicle orientation areas;
and splicing the positions of the same three-dimensional position coordinates in the vehicle orientation area based on the vehicle orientation area and the corresponding shooting angle to obtain the three-dimensional model of the target vehicle.
3. The method of claim 1, wherein the step of inputting the three-dimensional model of the target vehicle into a trained vehicle type recognition model for recognition comprises:
acquiring a three-dimensional model and a vehicle sample model of a target vehicle to be subjected to vehicle type identification;
calling a three-dimensional model identifier model in a vehicle type identification model to respectively extract model parameters of the three-dimensional model of the target vehicle and the vehicle sample model, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing;
calling a vehicle type matching sub-model in a vehicle type identification model to extract characteristic parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result, comparing the characteristic parameters extracted from the two target model areas, and identifying whether a three-dimensional model of the target vehicle is matched with the vehicle sample model according to the comparison result, wherein the characteristic parameters comprise a vehicle head-vehicle body ratio, a vehicle body length-vehicle body height ratio, a tipping wagon box-vehicle body ratio and a tipping wagon height-vehicle body ratio;
if the three-dimensional model of the target vehicle is not matched with the vehicle sample model, identifying the three-dimensional model of the target vehicle and the model category corresponding to the vehicle sample model, and selecting a deep learning sub-model in a vehicle type identification model according to the model category to perform type identification processing on the three-dimensional model of the target vehicle and the vehicle sample model;
wherein the model structure of the deep learning submodel is more complex than the model structures of the three-dimensional model identifier model and the vehicle type matching submodel;
performing size normalization processing on the model area corresponding to the first vehicle type result and the model area corresponding to the second vehicle type result; taking a model area corresponding to the first vehicle type result after normalization processing as a first target model area, and taking a model area corresponding to the second vehicle type result after normalization processing as a second target model area; alternatively, the set partial region in the model region corresponding to the first vehicle type result after the normalization processing is set as the first target model region, and the set partial region in the model region corresponding to the second vehicle type result after the normalization processing is set as the second target model region.
4. The method of claim 3, wherein the three-dimensional model identification submodel comprises at least one parameter extraction layer, at least one parameter screening layer, and at least one parameter classification layer; calling a three-dimensional model identification submodel to respectively extract model parameters of the three-dimensional model and the vehicle sample model of the target vehicle, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing, wherein the steps comprise:
respectively extracting model parameters of the three-dimensional model of the target vehicle and the vehicle sample model by using a parameter extraction layer in a three-dimensional model identification submodel, and inputting the extracted model parameters into a parameter screening layer in the three-dimensional model identification submodel;
respectively performing parameter screening processing on model parameters of the three-dimensional model of the target vehicle and the vehicle sample model by using a parameter screening layer in the three-dimensional model identification submodel, and inputting the parameters subjected to the parameter screening processing into a parameter classification layer in the three-dimensional model identification submodel;
classifying by using a parameter classification layer in the three-dimensional model identification submodel according to the parameters after the parameter screening processing of the three-dimensional model of the target vehicle and the corresponding parameter of the vehicle sample model to obtain a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model;
the vehicle type matching submodel comprises at least one parameter extraction layer, at least one parameter screening layer and at least two parameter classification layers, the step of calling the vehicle type matching submodel to extract the characteristic parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result, comparing the extracted characteristic parameters of the two target model areas, and identifying whether the three-dimensional model of the target vehicle is matched with the vehicle sample model according to the comparison result comprises the following steps:
respectively extracting parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result by adopting a parameter extraction layer in a vehicle type matching sub-model, and inputting the extracted parameters into a parameter screening layer in the vehicle type matching sub-model;
adopting a parameter screening layer in the vehicle type matching sub-model to screen parameters extracted by the parameter extraction layer, and inputting the parameters subjected to parameter screening processing into a parameter classification layer in the vehicle type matching sub-model;
and comparing the parameters subjected to parameter screening processing by adopting a parameter classification layer in the vehicle type matching sub-model, and identifying whether the three-dimensional model of the target vehicle is matched with the vehicle sample model according to a comparison result.
5. The method of claim 1, wherein the step of detecting whether license plate information satisfying license plate character rules exists in the images of the vehicles in the plurality of position directions comprises:
identifying and obtaining license plate image areas from the vehicle images in the plurality of position directions;
extracting characters in the license plate image area through edge detection;
detecting whether the arrangement sequence of the extracted characters meets the license plate character rule or not;
if the arrangement sequence of the characters meets the license plate character rule, identifying and determining corresponding target characters through a character identification model, and combining the target characters according to the sequence of the target characters in the license plate image after identification to obtain license plate information of the target vehicle;
wherein, the step of identifying and determining the corresponding target character through the character recognition model comprises the following steps:
counting the number of characters of the character result recognized by the character recognition model;
when a recognition character with the largest statistical number exists in a recognition result of the character recognition model for one character, taking the recognition character as the target character;
when a plurality of recognition characters with the largest statistical number exist in the recognition result of the character recognition model for one character recognition, the confidence probability of each recognition character with the largest number is calculated respectively, and the recognition character with the largest confidence probability is taken as the target character.
6. The method for identifying the unlicensed muck vehicle according to claim 1, wherein the step of sending the acquired vehicle image and the shooting location of the unmanned aerial vehicle to a communication terminal if the license plate information meeting the license plate character rule is not detected comprises:
acquiring current state information of all communication terminals within a preset distance range from the target vehicle, wherein the position of the target vehicle is determined by the shooting position of an unmanned aerial vehicle shooting the target vehicle, and the current state information of the communication terminals comprises whether the current communication terminals have events being processed or not and predicted event processing completion time when the events being processed exist;
when the license plate information meeting the license plate character rule is not detected, generating a first event request to be processed;
pre-allocating the first event request to be processed to all communication terminals within a preset distance range from the target vehicle, calculating the time of each communication terminal for predicting the first event request to be processed based on the current state information of the communication terminals, and sequencing the communication terminals according to the time for predicting the first event request to be processed to obtain a case processing timeliness sequence of the communication terminals;
sending the first event to be processed request to a plurality of communication terminals with preset ranks in the case processing timeliness sequence of the communication terminals;
according to the receiving confirmation information fed back by the communication terminals, the communication terminal which is the most front communication terminal in the case processing timeliness sequence of the communication terminals in the receiving communication terminals is used as a target communication terminal;
and packaging the acquired vehicle image and the position of the target vehicle into a first event to be processed and sending the first event to the target communication terminal.
7. The method of identifying a license-free muck vehicle of claim 1, wherein upon detecting license plate information that satisfies license plate character rules, the method further comprises:
acquiring vehicle registration information corresponding to the license plate information of the target vehicle from a vehicle management station database;
analyzing the vehicle registration information to determine a registered vehicle type corresponding to the license plate information of the target vehicle;
comparing the registered vehicle type with the target vehicle type;
when the registered vehicle type is inconsistent with the target vehicle type, generating a second event request to be processed, and acquiring current state information of all communication terminals within a preset distance range from the target vehicle, wherein the position of the target vehicle is determined by the shooting position of an unmanned aerial vehicle shooting the target vehicle, and the current state information of the communication terminals comprises whether the current communication terminals have events being processed or not and predicted event processing completion time when the events being processed exist;
pre-allocating the second event request to be processed to all communication terminals within a preset distance range from the target vehicle, calculating the time of each communication terminal for predicting the second event request to be processed based on the current state information of the communication terminals, and sequencing the communication terminals according to the time for predicting the second event request to be processed to obtain a case processing timeliness sequence of the communication terminals;
sending the second event request to be processed to a plurality of communication terminals with preset ranks in the case processing timeliness sequence of the communication terminals;
according to the receiving confirmation information fed back by the communication terminals, the communication terminal which is the most front communication terminal in the case processing timeliness sequence of the communication terminals in the receiving communication terminals is used as a target communication terminal;
and packaging the acquired vehicle image and the position of the target vehicle into a second event to be processed and sending the second event to the target communication terminal.
8. The method of identifying a license-free muck vehicle according to any one of claims 1-7, wherein upon detecting license plate information satisfying the license plate character rules, the method further comprises:
acquiring the time of the target vehicle appearing on a target road section, wherein the time of the target vehicle appearing on the target road section is determined by the shooting time of the unmanned aerial vehicle;
and comparing the time of the target vehicle appearing on the target road section with the preset passable time of the target road section, and if the target vehicle runs on the target road section beyond the preset passable time of the target road section, sending the vehicle image shot by the unmanned aerial vehicle and the shooting place of the unmanned aerial vehicle to a communication terminal.
9. An unmanned aerial vehicle-based unlicensed muck vehicle identification system, characterized in that the system is applied to a control center which is respectively in communication connection with the unmanned aerial vehicle and a communication terminal, and the system comprises:
the unmanned aerial vehicle comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring vehicle images of a target vehicle shot by the unmanned aerial vehicle in multiple position directions, and the vehicle images comprise a vehicle head image, a vehicle tail image and images on two sides of a vehicle body;
the construction module is used for constructing and obtaining a three-dimensional model of the target vehicle based on the vehicle images of the plurality of position directions;
the identification module is used for inputting the three-dimensional model of the target vehicle into a trained vehicle type identification model for identification;
the detection module is used for detecting whether license plate information meeting license plate character rules exists in the vehicle images in the plurality of position directions if the vehicle type recognition model recognizes that the type of the target vehicle is a muck vehicle;
and the sending module is used for sending the acquired vehicle image and the shooting place of the unmanned aerial vehicle to the communication terminal if the license plate information meeting the license plate character rule is not detected.
10. The unmanned aerial vehicle-based unlicensed muck vehicle identification system of claim 9, wherein the identification module is specifically configured to:
acquiring a three-dimensional model and a vehicle sample model of a target vehicle to be subjected to vehicle type identification;
calling a three-dimensional model identifier model in a vehicle type identification model to respectively extract model parameters of the three-dimensional model of the target vehicle and the vehicle sample model, performing parameter screening processing on the extracted model parameters, and obtaining a first vehicle type result of the three-dimensional model of the target vehicle and a second vehicle type result of the vehicle sample model according to the parameters after the parameter screening processing;
calling a vehicle type matching sub-model in a vehicle type identification model to extract characteristic parameters of a first target model area corresponding to the first vehicle type result and a second target model area corresponding to the second vehicle type result, comparing the characteristic parameters extracted from the two target model areas, and identifying whether a three-dimensional model of the target vehicle is matched with the vehicle sample model according to the comparison result, wherein the characteristic parameters comprise a vehicle head-vehicle body ratio, a vehicle body length-vehicle body height ratio, a tipping wagon box-vehicle body ratio and a tipping wagon height-vehicle body ratio;
if the three-dimensional model of the target vehicle is not matched with the vehicle sample model, identifying the three-dimensional model of the target vehicle and the model category corresponding to the vehicle sample model, and selecting a deep learning sub-model in a vehicle type identification model according to the model category to perform type identification processing on the three-dimensional model of the target vehicle and the vehicle sample model;
wherein the model structure of the deep learning submodel is more complex than the model structures of the three-dimensional model identifier model and the vehicle type matching submodel;
performing size normalization processing on the model area corresponding to the first vehicle type result and the model area corresponding to the second vehicle type result; taking a model area corresponding to the first vehicle type result after normalization processing as a first target model area, and taking a model area corresponding to the second vehicle type result after normalization processing as a second target model area; alternatively, the set partial region in the model region corresponding to the first vehicle type result after the normalization processing is set as the first target model region, and the set partial region in the model region corresponding to the second vehicle type result after the normalization processing is set as the second target model region.
CN202111242342.0A 2021-10-25 2021-10-25 Unmanned aerial vehicle-based unlicensed muck vehicle identification method and system Active CN113688805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111242342.0A CN113688805B (en) 2021-10-25 2021-10-25 Unmanned aerial vehicle-based unlicensed muck vehicle identification method and system

Publications (2)

Publication Number Publication Date
CN113688805A true CN113688805A (en) 2021-11-23
CN113688805B CN113688805B (en) 2022-02-15

Family

ID=78587838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111242342.0A Active CN113688805B (en) 2021-10-25 2021-10-25 Unmanned aerial vehicle-based unlicensed muck vehicle identification method and system

Country Status (1)

Country Link
CN (1) CN113688805B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446949A (en) * 2016-09-26 2017-02-22 成都通甲优博科技有限责任公司 Vehicle model identification method and apparatus
CN110176022A (en) * 2019-05-23 2019-08-27 广西交通科学研究院有限公司 A kind of tunnel overall view monitoring system and method based on video detection
CN110909692A (en) * 2019-11-27 2020-03-24 北京格灵深瞳信息技术有限公司 Abnormal license plate recognition method and device, computer storage medium and electronic equipment
CN111401162A (en) * 2020-03-05 2020-07-10 上海眼控科技股份有限公司 Illegal auditing method for muck vehicle, electronic device, computer equipment and storage medium
CN113055543A (en) * 2021-03-31 2021-06-29 上海市东方医院(同济大学附属东方医院) Construction method of digital twin command sand table of mobile hospital

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863688A (en) * 2022-07-06 2022-08-05 深圳联和智慧科技有限公司 Intelligent positioning method and system for muck vehicle based on unmanned aerial vehicle
CN114898236A (en) * 2022-07-14 2022-08-12 深圳联和智慧科技有限公司 Muck vehicle monitoring method and system based on unmanned aerial vehicle and cloud platform
CN115083167A (en) * 2022-08-22 2022-09-20 深圳市城市公共安全技术研究院有限公司 Early warning method, system, terminal device and medium for vehicle leakage accident
CN115457777A (en) * 2022-09-06 2022-12-09 北京商海文天科技发展有限公司 Specific vehicle traceability analysis method
CN115457777B (en) * 2022-09-06 2023-09-19 北京商海文天科技发展有限公司 Specific vehicle traceability analysis method
CN115937907A (en) * 2023-03-15 2023-04-07 深圳市亲邻科技有限公司 Community pet identification method, device, medium and equipment

Also Published As

Publication number Publication date
CN113688805B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN113688805B (en) Unmanned aerial vehicle-based unlicensed muck vehicle identification method and system
WO2020042984A1 (en) Vehicle behavior detection method and apparatus
CN106127107A (en) The model recognizing method that multi-channel video information based on license board information and vehicle's contour merges
CN107862340A (en) A kind of model recognizing method and device
CN110555347B (en) Vehicle target identification method and device with dangerous cargo-carrying behavior and electronic equipment
KR102073929B1 (en) Vehicle Emergency Log system using blockchain network
CN110232827B (en) Free flow toll collection vehicle type identification method, device and system
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN110929589A (en) Method, device, computer device and storage medium for vehicle feature recognition
CN111860219B (en) High-speed channel occupation judging method and device and electronic equipment
CN115810134A (en) Image acquisition quality inspection method, system and device for preventing car insurance from cheating
CN114240816A (en) Road environment sensing method and device, storage medium, electronic equipment and vehicle
CN113297939A (en) Obstacle detection method, system, terminal device and storage medium
CN116721396A (en) Lane line detection method, device and storage medium
CN109360137B (en) Vehicle accident assessment method, computer readable storage medium and server
CN111161542B (en) Vehicle identification method and device
CN115797690A (en) Dense target detection and identification method, device, equipment and storage medium
CN113326831B (en) Method and device for screening traffic violation data, electronic equipment and storage medium
CN113723258B (en) Dangerous goods vehicle image recognition method and related equipment thereof
CN112686136B (en) Object detection method, device and system
CN114724107A (en) Image detection method, device, equipment and medium
CN114972731A (en) Traffic light detection and identification method and device, moving tool and storage medium
CN114693722A (en) Vehicle driving behavior detection method, detection device and detection equipment
CN111191603B (en) Method and device for identifying people in vehicle, terminal equipment and medium
CN113095311A (en) License plate number recognition method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant