CN114387533A - Method and device for identifying road violation, electronic equipment and storage medium

Info

Publication number
CN114387533A
Authority
CN
China
Prior art keywords
image
road
lane
pod
unmanned aerial
Prior art date
Legal status
Pending
Application number
CN202210035827.0A
Other languages
Chinese (zh)
Inventor
林凡雨
崔书刚
Current Assignee
Beijing Yuandu Internet Technology Co ltd
Original Assignee
Beijing Yuandu Internet Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yuandu Internet Technology Co ltd filed Critical Beijing Yuandu Internet Technology Co ltd
Priority to CN202210035827.0A priority Critical patent/CN114387533A/en
Publication of CN114387533A publication Critical patent/CN114387533A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The application provides a method and a device for identifying road violations, an electronic device, and a storage medium. The method comprises the following steps: acquiring a first image captured while an unmanned aerial vehicle cruises a road; processing the first image to form road data and information of an object located on the road; matching violation constraint information according to the information of the object and the road data, and obtaining a matching result; when the object is determined to be in violation according to the matching result, controlling a pod to track the object and capture a second image; and acquiring the identification of the object according to the second image.

Description

Method and device for identifying road violation, electronic equipment and storage medium
Technical Field
The present application relates to the field of intelligent traffic monitoring technologies, and in particular, to a method and an apparatus for identifying road violation, an electronic device, and a storage medium.
Background
In the prior art, evidence of road violations is generally obtained by installing fixed cameras on both sides of a road, and sometimes by having personnel watch the road and photograph violating vehicles manually. With fixed roadside cameras, pictures of passing vehicles on the road are captured and analyzed for violations to judge whether a vehicle has broken the rules; this is currently the most effective and convenient method of road violation forensics. Whether evidence is collected by installed cameras or by human observers, these methods encourage users to drive safely according to the road rules and also serve as a warning to those who do not comply with them.
In the course of implementing the prior art, the inventors found the following:
The method of collecting evidence against vehicle owners who violate road rules by installing cameras on both sides of the road is limited to what the cameras can capture, typically a single lane or vehicles travelling in one direction. Violating driving behavior that occurs outside the shooting range, on road sections the cameras cannot cover, is difficult to document.
Therefore, it is necessary to provide a technical solution that can perform road violation forensics for all lane areas.
Disclosure of Invention
The embodiments of the present application provide a technical solution for road violation identification, which solves the prior-art problem that evidence collection for road violations, whether by installed cameras or by human resources, covers only a limited range.
Specifically, the method for identifying road violations comprises the following steps: acquiring a first image captured while an unmanned aerial vehicle cruises a road; processing the first image to form road data and information of an object located on the road; matching violation constraint information according to the information of the object and the road data, and obtaining a matching result;
when the object is determined to be in violation according to the matching result, controlling a pod to track the object and capture a second image; and acquiring the identification of the object according to the second image.
The embodiment of the present application further provides a device for identifying road violations, comprising: an acquisition module for acquiring a first image captured while the unmanned aerial vehicle cruises a road; a processing module for processing the first image to form road data and information of an object located on the road; a matching module for matching violation constraint information according to the information of the object and the road data and obtaining a matching result; a control module for controlling the pod to track the object and capture a second image when the object is determined to be in violation according to the matching result; and an identification module for acquiring the identification of the object according to the second image.
An embodiment of the present application further provides an electronic device, comprising: a memory storing computer readable instructions; and a processor that reads the computer readable instructions stored in the memory to execute any implementation of the method for identifying road violations.
Embodiments of the present application also provide a storage medium having computer readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform any implementation of the method for road violation identification.
The technical solution provided by the embodiments of the present application has at least the following beneficial effects: through this road violation identification scheme, violation evidence can be collected for vehicles in the entire road area captured by the unmanned aerial vehicle; when a vehicle exhibits violating driving behavior, the unmanned aerial vehicle tracks and photographs it through the camera of the pod, and thereby collects evidence of the violating vehicle's information.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a structural block diagram of a road violation identification system provided in an embodiment of the present application;
fig. 2 is a flowchart of a method for identifying road violations according to an embodiment of the present disclosure;
fig. 3 is a schematic view of an image captured by an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 4 is a schematic view of road data corresponding to the schematic view shown in FIG. 3;
fig. 5 is a schematic structural diagram of a device for identifying a road violation according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Embodiments of the present invention provide a method for identifying road violations, which may be performed by any device with computing capability, such as a terminal or a server. In the embodiments of the invention, an unmanned aerial vehicle is taken as the executing entity for explanation. The unmanned aerial vehicle carries a processing module for processing and computing images, typically a TX2 module, as well as a pod in which a camera is mounted. Fig. 1 is a structural block diagram of the unmanned aerial vehicle according to the embodiment of the present application. While the unmanned aerial vehicle cruises a road, the pod captures video and sends it to the TX2 module, and the TX2 module takes a first image from the video captured during the cruise. The TX2 module processes the first image to form road data and information of an object located on the road, matches violation constraint information according to the object and the road data, and obtains a matching result. When the object is determined to be in violation according to the matching result, the TX2 module obtains the pixel coordinates of the object and a second zoom factor and sends them to the pod. The pod adjusts its attitude angle according to the pixel coordinates so that the object is at the center of the video, zooms at a constant rate from the first zoom factor to the second zoom factor, then captures a second image and sends it to the TX2 module, and the TX2 module acquires the identification of the object from the second image.
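For concreteness, the following is a minimal, non-limiting sketch of the processing loop just described, in Python. Every name in it (the callables standing in for the pod and TX2 interfaces, the dictionary keys) is an illustrative assumption, not part of the claimed implementation:

```python
def identify_violations(capture_frame, process_image, match_constraints,
                        point_pod_at, zoom_pod_to, identify_object):
    """One pass of the loop in Fig. 1. Every parameter is a caller-supplied
    callable standing in for the pod/TX2 interfaces, which the patent does
    not specify; this is a structural sketch, not the actual firmware."""
    first_image = capture_frame()                        # S210: frame from the cruise video
    road_data, objects = process_image(first_image)      # S220: lanes + objects on the road
    identifications = []
    for obj in objects:
        match = match_constraints(obj, road_data)        # S230: violation matching
        if match["violated"]:
            point_pod_at(obj["pixel_coords"])            # center the object in the video
            zoom_pod_to(match["second_zoom"])            # constant-rate zoom to 2nd factor
            second_image = capture_frame()
            identifications.append(identify_object(second_image))  # e.g. plate number
    return identifications
```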
Referring to fig. 2, a flowchart of a method for identifying a road violation according to an embodiment of the present application includes at least the following steps:
s210, acquiring a first image captured when the unmanned aerial vehicle cruises the road.
Specifically, when the unmanned aerial vehicle executes a cruise task along a cruise path, it captures video of the cruised road, from which the first image is obtained. The first image may be one frame of the video captured while the unmanned aerial vehicle cruises the road, or one of several frames extracted periodically.
It can be understood that, to cruise a road, the unmanned aerial vehicle must first take off successfully from its starting point and fly to an overhead area from which images of the road to be cruised can be acquired. The unmanned aerial vehicle can then carry out its cruising and shooting work above the road to be cruised, the road to be cruised being the target road of the cruise. When the cruise task is finished, the unmanned aerial vehicle returns and lands at a designated recovery or stopping point. However, the endurance of the drone is limited, which directly bounds its total flight time or total flight mileage, and hence its road cruising mileage; the road cruising mileage in turn directly determines the size of the area over which road violations can be identified. Therefore, the cruising work of the unmanned aerial vehicle needs to be planned in advance to ensure that it works efficiently within its limited endurance. The cruising work comprises all flight and cruise activity from takeoff to return and landing, and the plan made in advance is the cruise strategy preset for the unmanned aerial vehicle. In this way, the unmanned aerial vehicle can execute a cruise task of longer mileage without compromising its normal return flight. Once the cruise strategy is set, the unmanned aerial vehicle cruises the road according to it.
In the embodiment of the invention, the cruise strategy comprises at least one of a preset cruise route, a cruise speed, and a cruise direction.
In the embodiment of the invention, the cruise route is the flight path of the unmanned aerial vehicle during the cruise, and at least comprises a start point and an end point of the road cruise. By formulating the cruise route, the phenomenon of multiple unmanned aerial vehicles repeatedly cruising the same road at the same moment can be effectively avoided, and road cruising efficiency is improved.
It should be noted that when the cruise route is a total cruise path covering a plurality of roads, it should at least include the start point and end point of each road. In this case, the cruise route may further include the flight paths along which the unmanned aerial vehicle transits between adjacent road segments. With such a route, the unmanned aerial vehicle transfers to the next road segment along the planned transit path, which avoids detours, keeps the unmanned aerial vehicle on the optimal cruise route, reduces transfer time, and improves cruising efficiency.
In the embodiment of the invention, the cruise speed is the flying speed of the unmanned aerial vehicle while cruising a road. If the flying speed is too fast, flight stability suffers and the fuselage shakes easily; the shooting angle then changes continuously, a stable video cannot be acquired, monitoring of the whole road area is hindered, and the accuracy of the road violation identification result is affected. If the flying speed is too slow, the limited endurance of the unmanned aerial vehicle reduces its road cruising mileage and cruising efficiency, which is unfavorable for cruising long stretches of road. Therefore, the cruise speed must be designed to guarantee shooting stability while also taking cruising efficiency into account.
In the embodiment of the invention, the cruising direction of the unmanned aerial vehicle can also be planned. When the unmanned aerial vehicle cruises a road, its cruising direction directly influences how much of the road appears in the acquired images. If the cruising direction is kept aligned with the direction in which the road extends, more road data can be obtained from the captured images. If there is an angle between the cruising direction and the direction of the road, less road data can be obtained, because at such shooting angles parts of the road area in the image are covered by obstructions, i.e., there are visual blind areas, which is unfavorable for comprehensive road monitoring. By planning the cruising direction, blind areas in the captured images are avoided, comprehensive monitoring of the road is realized, and the efficiency of identifying road violations is improved. In the embodiment of the invention, the planned cruising direction is kept consistent with one of the road's directions of travel. For example, if the target road is a one-way road, the cruising direction is kept consistent with the traffic direction; if the target road carries two-way traffic, the set cruising direction is kept consistent with one of the two traffic directions.
The first image captured while the unmanned aerial vehicle cruises the road can be obtained in real time; alternatively, one frame of the video captured by the unmanned aerial vehicle can be taken every n seconds and used as the first image. It can be understood that the specific time interval at which the first image is taken obviously does not limit the scope of the present application.
According to the embodiment of the invention, to obtain the first image from the video captured while the unmanned aerial vehicle cruises the road, the ground pitch angle of the pod can be controlled to a target pitch angle according to the flight height of the unmanned aerial vehicle, and the zoom factor of the pod can be controlled to a first zoom factor; the target pitch angle and the first zoom factor are then held while the first image is acquired from the cruise video. The pod of the unmanned aerial vehicle carries a camera oriented along, or away from, the heading of the unmanned aerial vehicle.
It will be appreciated that the sharpness requirements for the first image may be preset in the drone. The first image can be set to different resolutions according to the user's requirements, so as to satisfy the needs of acquiring road data and road target information without occupying excessive storage on the unmanned aerial vehicle.
It can be appreciated that the drone captures images primarily through a camera fixedly mounted in the drone's pod. The camera initially faces a direction parallel to the tail or the head of the unmanned aerial vehicle by default, i.e., along the heading of the unmanned aerial vehicle or away from it. By adjusting the orientation of the pod, the capture angle of the camera changes accordingly. The pod should form a certain included angle with the ground to ensure that the unmanned aerial vehicle can capture road details on the ground from the air.
Specifically, the ground pitch angle of the pod is controlled to a target pitch angle according to the flying height of the unmanned aerial vehicle. The ground pitch angle of the pod is the angle between the direction of the camera in the pod and the horizontal. Because the flight height at which the unmanned aerial vehicle cruises is not fixed, the ground pitch angle of the pod can be controlled according to the current flight height while the unmanned aerial vehicle follows its cruise route. In particular embodiments provided herein, the target pitch angle ranges from -20° to -40°.
It will be appreciated that if the pod pitch angle is small, images of the road at greater distances can be acquired. However, due to dust particles in the air, atmospheric refraction, road surface reflection and the like, the road information in the first image then carries large errors: for example, lane distribution lines in distant areas cannot be accurately identified, and the accuracy of road violation identification based on such images is low. If the pod pitch angle is large, only road images of the nearby area can be acquired; within the limited endurance mileage of the unmanned aerial vehicle, the total cruising mileage is then reduced and cruising efficiency suffers. Therefore, the target pitch angle is set in the range of -20° to -40°, i.e., a depression of 20° to 40°. When the unmanned aerial vehicle cruises the road, the pitch angle of the pod is some value within the target pitch angle range; the specific value chosen within that range does not limit the scope of the present application.
It should be noted that the roll angle and the heading angle of the pod generally do not need to be adjusted, i.e., they keep their initial values. The zoom factor of the pod is controlled to a first zoom factor according to the flight height of the unmanned aerial vehicle. The zoom factor of the pod is the zoom factor of the camera in the pod, where the first zoom factor is between 2x and 10x. Because the flight height of the cruise is not fixed, the zoom factor of the pod can be controlled according to the current flight height. The larger the zoom factor, the farther the scene that can be captured. If a low zoom factor is always kept while the unmanned aerial vehicle patrols the road, a road image covering as large a range as possible is obtained, but the proportion of road regions in the image becomes small, which lowers road recognition accuracy. If a high zoom factor is always kept, the viewing angle narrows and the camera's field of view is restricted. Therefore, when setting the first zoom factor, the flight height of the unmanned aerial vehicle and the size of the pod pitch angle must be fully considered, so that the proportion of road area in the obtained image is moderate and the image contains at least one complete road. After the target pitch angle and the first zoom factor of the pod have been set according to the flying height, the unmanned aerial vehicle keeps the target pitch angle and the first zoom factor and acquires the first image from the video captured during the road cruise. In a preferred embodiment provided by the present application, the TX2 module of the drone determines the pod pitch angle and the first zoom factor based on the drone's altitude.
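A minimal sketch of how a TX2 module might derive the two pod settings from altitude is given below; the linear mapping, the altitude bounds and the direction of the relationship are illustrative assumptions, since the patent only fixes the ranges (-20° to -40°, 2x to 10x):

```python
def pod_settings_for_altitude(altitude_m, min_alt=50.0, max_alt=300.0):
    """Map flight altitude to a target pitch angle within [-40, -20] degrees
    and a first zoom factor within [2, 10]. The linear interpolation, the
    altitude bounds and the assumption that higher flight needs a steeper
    depression and larger zoom are all illustrative, not claimed values."""
    t = min(max((altitude_m - min_alt) / (max_alt - min_alt), 0.0), 1.0)
    pitch_deg = -20.0 - 20.0 * t   # -20 deg at min_alt, -40 deg at max_alt
    zoom = 2.0 + 8.0 * t           # 2x at min_alt, 10x at max_alt
    return pitch_deg, zoom

print(pod_settings_for_altitude(175.0))   # (-30.0, 6.0)
```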
S220, processing the first image to form road data and information of an object located on the road.
In a preferred embodiment of the present application, when the first image is processed to form the road data, the lane lines and the vertices of the lane lines in the first image may be identified, and the road data may be generated from the lane lines, the vertices of the lane lines, and the road distribution use data corresponding to the first image. The road data at least includes the lane distribution information of the road and the information of each lane line constituting the road.
It is understood that the road data refers to information related to the distribution of the road and includes at least the lane distribution and the information of each lane line constituting the road. The lane distribution covers, for example, emergency lanes, passing lanes and bus lanes on a highway, and the layout of motor vehicle lanes, non-motor vehicle lanes and sidewalks on an urban road. The information of each lane line constituting the road may include the coordinates of the lane line and whether it is a dashed or a solid line. Note that the road data may further include the geographical location of the road in the first image, the extending direction of the road in the first image, and so on.
In the embodiment of the present invention, the lane lines and the vertices of the lane lines in the first image, that is, the dashed/solid status of each lane line and the coordinates of its vertices, may be determined by recognition algorithms such as lane line detection based on color masks and line detection, or lane line detection based on a sliding window. The invention is not limited thereto; the lane lines may, for example, also be marked manually.
For example, the lane lines and the vertices of each lane line may be as follows:
line1 vertex coordinates [ (x1, y1), (x2, y2), solid line ];
line2 vertex coordinates [ (x3, y3), (x4, y4), solid line ].
It should be noted that, in practice, part of a lane line may be a solid line and part a dashed line; in that case the determined dashed/solid status and vertices of the lane line may include the coordinates of the junction between the dashed part and the solid part, for example:
line1 vertex coordinates [ (x1, y1), (x2, y2), dashed/solid (xn, yn) ];
where (xn, yn) may be the junction of the dashed part and the solid part, and dashed/solid indicates that the part near (x1, y1) is dashed and the part near (x2, y2) is solid.
It should also be noted that when determining the lane lines and the vertices of the lane lines in the first image, lanes may also be pre-assigned as follows:
a first lane:
line1 vertex coordinates [ (x1, y1), (x2, y2), solid line ];
line2 vertex coordinates [ (x3, y3), (x4, y4), solid line ].
It is to be noted that the position coordinates of the vertices are the position coordinates, i.e., pixel coordinates, of the boundary points of the lane lines in the image, not the start or end points of the actual road. Each lane is defined by two lane lines, so the road data of each lane is formed from the boundary points of two adjacent lane lines. In this way, each lane in the first image has unambiguous coordinates. Thus, from the first image captured by the unmanned aerial vehicle, the lane lines, the vertices of each lane line and the dashed/solid status of each lane line can be obtained.
In the embodiment of the invention, after the first image captured during the road cruise is obtained, the road distribution use data corresponding to the first image can be obtained from existing map data, according to the cruise route in the preset cruise strategy and/or the actual road identified from the captured image. For example, if the first image captures a link on a certain expressway, the road distribution use data of that link can be acquired from the map data. The invention is not limited to this: the road data may also be obtained by presetting road distribution use data and matching it against the acquired lane lines and lane line vertices.
In the embodiment of the present invention, the road distribution use data may include the layout and usage rules of each road. For example, as shown in fig. 3, on a certain expressway the innermost lane (generally the leftmost) is the highest speed lane and the outermost lane (generally the rightmost) is the emergency lane. The road usage rules may include the speed limits of the various lanes, the usage rules of bus-only lanes and so on, for example that from 7-9 and 17-19 on weekdays the innermost lane is a bus-only lane.
In the embodiment of the invention, after the lane lines, the lane line vertices and the road distribution use data are obtained, they are matched to generate the road data. The matching is carried out between the position coordinates of the lane line vertices and the layout of each road in the road distribution use data, so as to determine the road data, which at least comprises the lane distribution information of the road and the information of each lane constituting the road.
For example, the image shown in fig. 3 is processed to form the road data shown in fig. 4. In fig. 4, each lane line has boundary points, i.e., vertices, at its uppermost and lowermost positions in the image (only line1 and line2 are marked in fig. 4; the vertices of the other lane lines are not shown). The lane lines and their vertices are determined from the image, for example Line1, [(x1, y1), (x2, y2), solid line], where (x1, y1) is the position of Line1's lowest boundary point in the image and (x2, y2) the position of its highest boundary point; Line2, [(x3, y3), (x4, y4), solid line], is obtained likewise. The road distribution use data shows that, from right to left, the road consists of an emergency lane, a lowest speed lane, a passing lane and a highest speed lane. Matching the vertex coordinates of Line1 and Line2 shown in fig. 4 against the road distribution use data corresponding to the image yields the road data schematically shown in fig. 4, in which the lane bounded by Line1 and Line2 is the emergency lane. Similarly, the lane distribution information can be obtained from the lane lines, the lane line vertices and the corresponding road distribution use data of fig. 3. For example, the vertex position coordinates of the emergency lane are represented as Line1: (x1, y1), (x2, y2), solid line; Line2: (x3, y3), (x4, y4), solid line. Namely:
emergency lane:
line1: [ (x1, y1), (x2, y2), solid line ];
line2: [ (x3, y3), (x4, y4), solid line ].
Since two adjacent lanes share the lane line between them, the vertex position coordinates of the lowest speed lane adjacent to the emergency lane also include the position information of lane Line2.
It will be appreciated that if the pre-assigned lanes, lane lines and vertices of each lane line are obtained as follows:
a first lane:
line1 vertex coordinates [ (x1, y1), (x2, y2), solid line ];
line2 vertex coordinates [ (x3, y3), (x4, y4), solid line ].
Matching the data with road distribution use data to obtain road data as follows:
emergency lane:
line1: (x1, y1), (x2, y2), solid line;
line2: (x3, y3), (x4, y4), solid line.
That is, the first lane is identified as the emergency lane.
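A minimal sketch of this matching step follows; the data layout and the right-to-left pairing rule are illustrative assumptions, with placeholder pixel coordinates standing in for (x1, y1)...(x4, y4):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LaneLine:
    vertices: List[Tuple[int, int]]   # boundary points in pixel coordinates
    style: str                        # "solid", "dashed", or "dashed/solid"

# Detected lanes ordered right to left in the image; placeholder pixel
# values stand in for (x1, y1)...(x4, y4) from the example above.
detected_lanes = [
    [LaneLine([(120, 710), (310, 40)], "solid"),    # line1
     LaneLine([(260, 715), (395, 42)], "solid")],   # line2
    # ... further lanes, each sharing its middle lane line with its neighbor
]

# Road distribution use data for this link, right to left (e.g. from map data).
usage_right_to_left = ["emergency lane", "lowest speed lane",
                       "passing lane", "highest speed lane"]

# Pair each detected lane with a use by right-to-left position.
road_data = dict(zip(usage_right_to_left, detected_lanes))
# road_data["emergency lane"] -> [line1, line2], as in the example above
```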
In an embodiment of the present invention, processing the first image to form the information of the object located on the road specifically includes: processing the image to form the category attribute information and the position coordinates of the object located on the road. That is, the information of the object may include the category attribute and the position coordinates of the object, for example that the object on the road is a bus, together with the position coordinates of that bus.
It should be noted that different types of motor vehicles are subject to different traffic regulations; for example, the regulations applicable to cars, trucks and buses differ. Road violation identification for motor vehicles is therefore more complex than for pedestrians and non-motor vehicles. Accordingly, the road data should include at least the lane distribution information of the road and the information of each lane constituting the road, and the information of the object needs to include the category attribute and the position coordinates of the object located on the road.
In the embodiment of the present invention, the position coordinates of the object are the pixel coordinates of the object in the image, and the category attribute of the object distinguishes types such as pedestrians, non-motor vehicles and motor vehicles. Since, as noted, different types of motor vehicles are subject to different traffic laws and regulations, the category attribute of each object located on the road in the image must be determined when identifying road violations.
Specifically, the position coordinates of the object in the first image may be obtained by performing Bounding Box regression on the image. The position coordinates of the object correspond to the label (x, y, w, h), where x and y are the coordinates of the center point of the quadrangle framing the object, and w and h are the width and height of that quadrangle. Then, the suspected categories of the object in the first image and the confidence of each suspected category are determined by a trained classification detection model, the categories including at least cars, trucks, heavy machinery vehicles, buses and the like. Finally, the category with the maximum confidence among the suspected categories is selected as the category attribute of the object located on the road in the first image. In this way, the category attribute of the object in the first image is obtained.
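The selection of the category attribute can be sketched as follows; the bounding box values, category names and scores are placeholders, and obtaining them from the actual detection model is assumed:

```python
def category_attribute(class_confidences):
    """Pick the suspected category with maximum confidence, as described
    above. The category names and scores passed in below are placeholders."""
    return max(class_confidences, key=class_confidences.get)

# (x, y, w, h) from Bounding Box regression: (x, y) is the center of the
# quadrangle framing the object, (w, h) its width and height.
bbox = (412, 288, 96, 54)
info = {
    "category": category_attribute({"car": 0.08, "truck": 0.87, "bus": 0.05}),
    "position": bbox[:2],     # pixel coordinates of the object's center point
}
# info == {"category": "truck", "position": (412, 288)}
```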
In the embodiment of the invention, the classification detection model can be obtained by negative-feedback optimization training of a neural network. This training requires a public image set that contains at least a number of car image elements, truck image elements, heavy machinery vehicle image elements, bus image elements and other image elements of different category attributes, so that the resulting model can identify at least objects of category attributes such as small and medium cars, trucks, heavy machinery vehicles and buses. The training set may additionally include pedestrian image elements, non-motor vehicle image elements, special task vehicle elements and so on; correspondingly, the model can then also identify objects of category attributes such as pedestrians, non-motor vehicles and special task vehicles in the image. A special task vehicle is a fire truck, an ambulance, a police car or another vehicle executing a task. It is to be understood that the specific attribute categories the classification detection model can identify clearly do not limit the scope of the present application.
S230, matching violation constraint information according to the information of the object and the road data, and obtaining a matching result.
Specifically, after the first image is processed, the generated objects, such as the vehicles on the road and the pedestrians on the road, together with the road information are matched against the violation constraint information. Violation constraint information describes a vehicle on the road violating a driving rule or a pedestrian on the road violating a travel rule. In a specific practical application scenario the violation constraint information may be expressed as: a rule against large vehicles occupying the passing lane, a rule against vehicles occupying the emergency lane, a rule against vehicles occupying the bus-only lane, a rule against vehicle ultra-high or ultra-low speed, and a rule against pedestrians travelling in motor vehicle lanes. Its counterpart is compliance information, i.e., the information that a vehicle travelling on the road complies with the driving rules or a pedestrian complies with the travel rules. It is understood that the specific form in which objects are matched against violation constraint information obviously does not limit the scope of protection of the present application.
It should be noted that when the object is determined to be a special vehicle executing its task, such as a fire truck, an ambulance or a police car, the step of matching violation constraint information according to the information of the object and the road data is not performed; alternatively, the matching step is still performed for such vehicles but is set to return an unmatched result.
In the embodiment of the invention, the violation constraint information is information on road behavior that violates the road traffic regulations, and can comprise at least one of: constraint information on large vehicles occupying the passing lane, constraint information on vehicles occupying the emergency lane, constraint information on vehicles occupying the bus-only lane, solid-line-pressing constraint information, and vehicle ultra-high-speed and ultra-low-speed constraint information. Large vehicles may include trucks, heavy machinery vehicles and the like.
In the embodiment of the invention, when the violation constraint information is formulated, the road violations of motor vehicles need to be fully considered; matching the violation constraint information against the category attribute, the position coordinates and the road data of the object located on the road ensures the accuracy of identifying motor vehicle road violations.
Different types of motor vehicles are subject to different road rules, and different lanes likewise carry different road rules. For example, the rules applicable to cars, trucks and buses differ, and the rules applicable to the emergency lane differ from those of the first lane adjacent to it. The lane in which the object is located therefore needs to be determined, so that the violation constraint information of that lane can be selected, and the information of the road recorded in the first image and the object data can be matched against the violation constraint information of the lane where the object is located to obtain a matching result.
In the embodiment of the invention, the lane where the object is located can be determined according to the information of the object and the road data, the violation constraint information matched according to that lane, and the matching result obtained.
From the road data in the first image and the information of the object, the lane where each object is located can be determined, giving the road behavior of every object on the road corresponding to the first image. Matching the road behavior in the lane where the object is located against the violation constraint information yields a matching result stating whether the object's road behavior is consistent with the violation constraint information. If road behavior satisfying some violation constraint information exists, a matching result matched to the corresponding violation constraint information is obtained. If no road behavior satisfying the violation constraint information exists, a matching result inconsistent with the violation constraint information is obtained, i.e., the first image contains no road behavior violating the road traffic regulations.
In the embodiment of the present invention, the lane in which the object is located is determined from the information of the object and the road data by an algorithm that decides whether a point lies within a polygon, such as the Crossing Number algorithm or the Winding Number algorithm.
In the embodiment of the invention, the area enclosed by the lane lines of a lane and the boundary of the first image is taken as a polygon, the center point of the object is taken as the query point, and the lane in which the object is located is determined by testing whether the center point lies within the polygon. For example, when the Crossing Number algorithm is used, the number of intersections between a ray from the object's center and the edges of each polygon defined by the road data is computed; when the count for a certain polygon is odd, the object lies within that polygon, i.e., within the lane the polygon represents. Similarly, when the Winding Number algorithm is used, the winding number of each polygon defined by the road data around the object's center is computed; when the winding number of a certain polygon is non-zero, the object lies within that polygon, i.e., within the lane the polygon represents.
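For illustration, a minimal Crossing Number (even-odd) test is sketched below; the lane polygons are assumed to be supplied as lists of pixel vertices built from the lane line vertices and the image boundary, as described above:

```python
def point_in_polygon(point, polygon):
    """Crossing Number (even-odd) test: cast a horizontal ray from `point`
    and count how many polygon edges it crosses; an odd count means the
    point is inside. `polygon` is a list of (x, y) pixel vertices enclosing
    a lane, built from its two lane lines and the image boundary."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (ax, ay), (bx, by) = polygon[i], polygon[(i + 1) % n]
        if (ay > py) != (by > py):                         # edge straddles the ray
            x_cross = ax + (py - ay) * (bx - ax) / (by - ay)
            if px < x_cross:                               # crossing lies to the right
                inside = not inside
    return inside

def lane_of(obj_center, lane_polygons):
    """Return the name of the lane whose polygon contains the object's
    center point, or None if the point lies in no lane."""
    for lane_name, polygon in lane_polygons.items():
        if point_in_polygon(obj_center, polygon):
            return lane_name
    return None
```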
It should be noted that, besides the center point of the object, each vertex of the quadrangle framing the object may be used as a query point in the point-in-polygon algorithm. In that case, the object can be determined to be in a certain lane only if all vertices of its quadrangle lie within the same polygon. The quadrangle framing the object is the quadrangle obtained from the unmanned aerial vehicle image by Bounding Box regression to identify the road object, i.e., a quadrilateral for framing objects, used to represent an object on the road.
In the embodiment of the present invention, the coordinates of the object's center point are generally used as the query point, because this is more accurate than testing each vertex of the object's quadrangle. In general, the area covered by the framing quadrangle is considerably larger than the actual extent of the object; a vertex of the quadrangle may therefore lie outside the polygon of a lane even though the framed object actually lies within it.
It should be noted that testing whether a point lies within a polygon typically yields one of three results: the point is inside the polygon, outside the polygon, or on the polygon, the last case including the specific edge of the polygon on which the point lies.
According to the embodiment of the invention, when the violation constraint information is the vehicle ultra-high-speed and ultra-low-speed constraint information, the moving speed of the object can be calculated from the information of the object and the road data formed from several consecutive frame images; the lane where the object is located is determined from the information of the object and the road data, the moving speed of the object is matched against the speed limit interval of that lane, and the matching result is obtained.
In the embodiment of the present invention, ultra-high speed is understood as exceeding the maximum speed limit prescribed for the corresponding lane, and ultra-low speed as falling below the minimum prescribed speed. Matching the information of the object in the first image and the road data against the ultra-high-speed and ultra-low-speed violation constraint information of the lane where the object is located yields the matching result for the vehicle's ultra-high-speed and ultra-low-speed constraint information.
It will be appreciated that a vehicle moves within its lane, and the moving speed of an object generally cannot be determined from a single captured image. Therefore, when the violation constraint information is the vehicle ultra-high-speed and ultra-low-speed constraint information, several consecutive frames captured by the unmanned aerial vehicle must be acquired before matching, so that the positions of the object in the image at different times can be determined. From the consecutive frames, the pixel displacement of each target vehicle is determined through multi-target tracking. Then, based on the drone's speed, the camera intrinsics, the camera attitude and the zoom factor, the pixel displacement can be converted into the actual speed of the target vehicle. After the lane where the object is located has been determined, the moving speed of the object is matched against the speed limit interval of that lane to obtain the matching result for the ultra-high-speed or ultra-low-speed violation constraint information: when the moving speed exceeds the lane's maximum speed limit, a match with the vehicle ultra-high-speed violation constraint information is obtained; when it falls below the lane's minimum speed limit, a match with the vehicle ultra-low-speed violation constraint information is obtained.
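A simplified sketch of the speed check follows. Converting pixel displacement to metres actually depends on the drone's speed, camera intrinsics, camera attitude and zoom factor, as stated above; here that whole conversion is collapsed into an assumed metres-per-pixel scale, and adding the drone's own speed as a signed scalar further assumes the cruise direction is parallel to the lane:

```python
def object_speed_kmh(track, dt_s, metres_per_pixel, drone_speed_ms=0.0):
    """Estimate a tracked vehicle's ground speed over several consecutive
    frames. `track` is the object's center point in pixel coordinates per
    frame, `dt_s` the frame interval. `metres_per_pixel` is an assumed
    single scale collapsing the altitude/intrinsics/attitude/zoom
    conversion; adding the drone's own speed as a signed scalar assumes
    drone and vehicle move parallel to the lane."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    pixel_dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    ground_ms = pixel_dist * metres_per_pixel / (dt_s * (len(track) - 1))
    return (ground_ms + drone_speed_ms) * 3.6

def match_speed_constraint(speed_kmh, lane_limits_kmh):
    """Compare the estimated speed with the lane's (min, max) speed limits."""
    low, high = lane_limits_kmh
    if speed_kmh > high:
        return "vehicle ultra-high-speed violation"
    if speed_kmh < low:
        return "vehicle ultra-low-speed violation"
    return "no match"
```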
It should be noted that, in the embodiment of the present invention, the number of consecutive frames actually depends on several factors: the operating performance of the device performing the road violation identification, the frame rate of the video stream captured by the unmanned aerial vehicle, and the average number of frames a vehicle remains in the image.
For example, suppose the device for road violation identification processes one frame every 100 ms, the original video stream is 30 Hz, and a vehicle remains in the image for 90 frames; the vehicle is then visible in the image for about 3 seconds, so at most 3000/100 = 30 frames of continuous determination may be required. Two factors, however, must be considered: recognition is unstable while a vehicle is just entering or leaving the image edge, and after recognition the vehicle must still be captured and its evidence stored. The number of frames should therefore be halved, making 15 frames a reasonable parameter.
In the embodiment of the invention, after the lane where the object is located has been determined, it can be matched against the large-vehicle-occupying-passing-lane constraint information, the vehicle-occupying-emergency-lane constraint information, the vehicle-occupying-bus-lane constraint information and the solid-line-pressing constraint information among the violation constraint information, so as to obtain the matching result.
In the embodiment of the invention, a large vehicle occupying the passing lane means that the large vehicle drives continuously in the passing lane for longer than a preset time. The large-vehicle-occupying-passing-lane constraint information states that the vehicle type is a large vehicle, the lane where the vehicle is located is the passing lane, and the driving time in the passing lane reaches the preset time. When matching against this constraint information, if the object is a large vehicle, the lane where it is located is determined to be the passing lane, and its time occupying the passing lane reaches the preset time, a match with the large-vehicle-occupying-passing-lane constraint information is determined. It is noted that the time for which a large vehicle occupies the passing lane may be determined from several consecutive frame images; these are frames adjacent to the first image and may include the first image.
In the embodiment of the invention, a vehicle occupying the emergency lane means a vehicle other than a special vehicle (such as a police car or an ambulance) occupying the emergency lane. Occupying the emergency lane may be understood as a road object driving in the emergency lane in violation. In an emergency, a vehicle may drive or park in the emergency lane; but when a road object drives in the emergency lane with no special reason, this can be regarded as road behavior occupying the emergency lane. The vehicle-occupying-emergency-lane constraint information is constraint information on non-special vehicles driving in the emergency lane. When matching against it, if the object is a car, the lane where the object is located is determined to be the emergency lane, and the object is driving in the emergency lane, a match with the vehicle-occupying-emergency-lane constraint information is determined. It should be noted that whether the object is driving or stopped in the emergency lane may be determined from several consecutive frame images.
In the embodiment of the invention, pressing the solid line is the driving behavior in which a tire presses the solid part of a lane line while the vehicle is running; the solid line here is the solid part of a lane line in the road. It should be noted that the solid lines in the embodiment of the present invention do not include the solid line of a one-way lane-change-permission line, which consists of a parallel dashed line and solid line; a vehicle driving over a one-way lane-change-permission line is permitted to press it briefly. The solid-line-pressing constraint information is constraint information on non-special vehicles pressing a solid line. When matching an object against it, if the object is a car and is determined to be on a certain lane line, i.e., the query point lies on the polygon edge formed by that lane line, then the car matches the solid-line-pressing constraint information if the lane line is a solid line, and does not match it if the lane line is a dashed line.
In the embodiment of the invention, the vehicle-occupying-bus-lane constraint information is constraint information on non-buses driving in the bus-only lane. When matching against it, if the object is a car, the lane where the car is located is determined to be the bus-only lane, and the time falls within the bus lane's restricted hours, a match with the vehicle-occupying-bus-lane constraint information is determined.
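These lane-occupancy rules can be sketched together as follows; the 150-frame threshold and the bus-lane hours are illustrative assumptions taken from the examples above, not claimed values:

```python
from datetime import datetime

SPECIAL_VEHICLES = {"fire truck", "ambulance", "police car"}
LARGE_VEHICLES = {"truck", "heavy machinery vehicle"}

def match_lane_occupancy(category, lane, frames_in_lane, now=None):
    """Minimal sketch of the lane-occupancy checks above. `frames_in_lane`
    counts consecutive frames in which the object stayed in `lane`; the
    150-frame threshold and the bus-lane hours are illustrative assumptions."""
    now = now or datetime.now()
    if category in SPECIAL_VEHICLES:
        return "no match"            # special task vehicles are never matched
    if (category in LARGE_VEHICLES and lane == "passing lane"
            and frames_in_lane > 150):
        return "large vehicle occupying the passing lane"
    if lane == "emergency lane":
        return "vehicle occupying the emergency lane"
    if (lane == "bus lane" and category != "bus"
            and now.weekday() < 5 and now.hour in (7, 8, 17, 18)):
        return "vehicle occupying the bus-only lane"
    return "no match"
```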
According to the embodiment of the present invention, matching the violation constraint information according to the information of the object and the road data and obtaining the matching result may further comprise: determining, from the information of the object and the road data, the lane where the object is located, the lane line closest to the object, and the distance between the object and that lane line; and matching the violation constraint information according to the lane where the object is located, the closest lane line and the distance to it, and obtaining the matching result.
It is understood that when a vehicle is close to a lane line, its actual driving situation within the lane cannot be determined accurately. For example, when the vehicle is a short distance from the lane line, the center point of the vehicle lies within the lane, but in reality the vehicle may be driving on the lane line. If the violation constraint information is matched only according to the lane where the road object is located, some road behaviors escape detection, reducing the accuracy and practicality of road violation detection. By determining the lane where the object is located together with the closest lane line and the distance to it before matching the violation constraint information, the matching follows the actual road behavior of the road object, the accuracy of the matching result is improved, and missed matches are prevented.
In the embodiment of the present invention, before or after the lane where the object is located is determined by the point-in-polygon algorithm from the information of the object and the road data, the lane line closest to the object and the distance between the object and that lane line may further be determined from the position information of the object and the position information of the lane lines of the lane where the object is located. The position coordinates of the object may be the coordinates of its center point, i.e., the x and y of the position label (x, y, w, h) obtained by Bounding Box regression on the first image. The position information of the lane lines of the lane where the object is located refers to the vertex position coordinates of the lane lines constituting that lane.
For example, suppose the center point of an object has coordinates (x, y), and the detected lane is bounded by two lane lines: line1, with vertex coordinates [(x1, y1), (x2, y2)] and style solid, and line2, with vertex coordinates [(x3, y3), (x4, y4)] and style solid. The distances from the object to line1 and line2 can each be obtained as the point-to-line distance; the line with the smaller distance is the lane line closest to the object, and that minimum distance is the distance between the object and its closest lane line. A minimal sketch of this computation follows.
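In the sketch, the function names, the tuple layout of a lane line, and the example coordinates are assumptions for illustration, not taken from the original implementation.

    import math

    def point_to_line_distance(px, py, x1, y1, x2, y2):
        # Distance from point (px, py) to the infinite line through (x1, y1)
        # and (x2, y2): |(y2-y1)*px - (x2-x1)*py + x2*y1 - y2*x1| / line length.
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1)
        return num / den

    def nearest_lane_line(center, lane_lines):
        # lane_lines: list of ((x1, y1), (x2, y2), style), style "solid" or "dashed".
        px, py = center
        return min(
            ((line, point_to_line_distance(px, py, *line[0], *line[1]))
             for line in lane_lines),
            key=lambda pair: pair[1],
        )

    # Example: a lane bounded by two solid lines; the object center is (110, 250).
    line1 = ((100.0, 0.0), (100.0, 500.0), "solid")
    line2 = ((160.0, 0.0), (160.0, 500.0), "solid")
    closest, distance = nearest_lane_line((110.0, 250.0), [line1, line2])
    # closest is line1 and distance is 10.0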
According to the lane where the object is located, the lane line closest to the object, and the distance between them, the violation constraint information — such as the constraint information on a large vehicle occupying an overtaking lane, on occupying an emergency lane, on occupying a bus lane, on ultra-high and ultra-low speed, and on pressing a solid line — can be matched to obtain a matching result.
According to the embodiment of the invention, when the violation constraint information is the solid-line-pressing constraint information, if the lane line closest to the object is a solid line and the distance between the object and that lane line is within a first preset range, the matching result is obtained as a successful match.
It is to be understood that the first preset range can be understood as the allowed separation distance, preset in the solid-line-pressing constraint information, between the road object and its nearest lane line. If the distance between the road object (its center point) and the closest lane line is within the first preset range, the road object is in a solid-line-pressing driving state, and a matching result of a successful match with the solid-line-pressing constraint information is obtained. If the distance is outside the first preset range, the road object is far enough from the lane line to be in a non-pressing driving state, and a matching result of no match (a failed match) is obtained. Note that a road object outside the first preset range, although not pressing a solid line, may still violate road rules other than the solid-line-pressing rule.
It should be noted that, when matching the solid-line-pressing constraint information against the road behavior of a road object, it is first necessary to determine whether the lane line in question is a solid line, using the information of each lane line in the road data formed by processing the image. A lane line may include a solid portion and a broken portion; an object may change lanes across the broken portion without violating road traffic regulations.
It can be understood that the position coordinate of the object in the image is taken as the coordinate of the object's center point, namely x and y in the coordinates (x, y, w, h) obtained by performing Bounding Box regression on the image, where w and h are respectively the width and height of the quadrangle in which the object is located. This quadrangle, obtained by Bounding Box processing of the unmanned aerial vehicle image, identifies the area where the road object is located; that is, it is the quadrangle framing the object.
In an embodiment of the present invention, the first preset range may be derived from the width in the obtained coordinates of the object, that is, w in (x, y, w, h); for example, the first preset range may be set so that the lateral distance between the object's center point and the lane line does not exceed half the width of the framing quadrangle, w/2.
In the embodiment of the present invention, the first preset range may also be obtained according to the category attribute of the road object. Since vehicles of different types have different widths, a correspondence between the first preset range and each vehicle type can be preset, and after the type of the object is obtained, the first preset range corresponding to that type is looked up from the correspondence. For example, if the type of the object is a bus, the corresponding first preset range L01 is obtained from the correspondence; if the type of the object is a car, the corresponding first preset range L02 is obtained, where L01 is greater than L02.
In the embodiment of the present invention, the first preset range may also be manually set.
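Combining the three ways of obtaining the first preset range described above, the solid-line matching step might be sketched as follows; this is a minimal illustration, and the per-type threshold values and function names are hypothetical.

    # Hypothetical per-type thresholds (pixels); per the text, the bus value
    # exceeds the car value.
    CATEGORY_RANGE = {"bus": 40.0, "car": 25.0}

    def first_preset_range(w, category=None):
        # Prefer a preset per-category value; otherwise fall back to half the
        # bounding-box width w/2, per the embodiment above.
        if category in CATEGORY_RANGE:
            return CATEGORY_RANGE[category]
        return w / 2.0

    def matches_solid_line(nearest_style, nearest_dist, w, category=None):
        # Successful match only when the nearest lane line is solid and the
        # object center lies within the allowed separation distance.
        return nearest_style == "solid" and nearest_dist <= first_preset_range(w, category)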
According to the embodiment of the invention, when the violation constraint information is the constraint information on occupying an emergency lane, if the object is located outside the lanes, the lane line closest to the object is the inner lane line of the emergency lane, and the distance between the object and that inner lane line is within a second preset range, the matching result is obtained as a successful match.
It can be understood that, after determining the lane where the object is located, the lane line closest to the object, and the distance between them, a matching result against the emergency-lane-occupation constraint information may also be obtained directly from the lane where the object is located: if that lane is determined to be the emergency lane, a matching result of a successful match with the emergency-lane-occupation constraint information is obtained.
In practical application, the outer side of an emergency lane (the side away from the driving lanes) may have no obvious boundary line, or its boundary line may be blocked by objects such as soundproof walls or railings, so that the outer boundary information of the emergency lane cannot be obtained from the image captured by the unmanned aerial vehicle; that is, it cannot be directly determined whether a vehicle is traveling in the emergency lane. Therefore, when the boundary line of the emergency lane in the image is blocked and the lane extent cannot be determined, the inner lane line of the emergency lane can be used as the criterion when matching the emergency-lane-occupation constraint information: if the object is located outside the lanes, the distance between the vehicle and the inner lane line of the emergency lane is calculated, and whether that distance is within the second preset range is judged, so as to determine whether the vehicle's actual road behavior matches the emergency-lane-occupation constraint information.
The second preset range may be understood as a preset decision interval for occupying the emergency lane. Its value can be set according to the actual situation, for example, to the width of the emergency lane. If the object is located outside the lanes and the distance between the vehicle in the image and the inner lane line of the emergency lane is within the second preset range, the vehicle is traveling in the emergency lane; a matching result against the emergency-lane-occupation constraint information is then obtained, and the object can be determined to be in violation, namely, judged to be occupying the emergency lane.
Specifically, by an algorithm that tests whether a point lies within a polygon, the specific lane in which the object is located in the first image can be determined, and thus whether the object is located outside the lanes. When the object is outside the lanes, the distance between the vehicle in the image and the inner lane line of the emergency lane is calculated and compared with the second preset range to judge whether the object occupies the emergency lane. A ray-casting sketch of this test is given below.
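The standard ray-casting test casts a horizontal ray from the point and counts how many polygon edges it crosses; an odd count means the point is inside. The Python code below is illustrative only, not the patented implementation.

    def point_in_polygon(px, py, polygon):
        # polygon: list of (x, y) vertices in order (e.g., lane-line vertices).
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # Consider only edges that straddle the horizontal line y = py.
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    # Example: lane polygon built from the vertices of its two lane lines.
    lane = [(100, 0), (160, 0), (160, 500), (100, 500)]
    # point_in_polygon(110, 250, lane) -> True (the object center is in this lane)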
It should be noted that, from the lane distribution information and the information of each lane line in the road data, it can be determined whether the outermost lane is an emergency lane. If it is, the object is located outside the lanes (i.e., the vehicle is in no lane), and the calculated distance between the vehicle and the inner lane line of the emergency lane is within the second preset range, then the object matches the emergency-lane-occupation constraint information.
S240: when the object is determined to be in violation according to the matching result, controlling the pod to track the object and take a second image.
Specifically, when the object is determined to be in violation according to the matching result, the pod is controlled to track the object and take a second image. A vehicle matches the violation constraint information when, for example, its driving speed or the lane it travels in matches that constraint information; a pedestrian matches it when, for example, traveling on a road not intended for pedestrians.
It should be noted that controlling the pod to track the object and take a second image may be done by adjusting the attitude angle and zoom factor of the pod, and may also involve adjusting the flight attitude of the unmanned aerial vehicle together with the zoom factor of the pod. In a practical application scenario of the present application, it is preferable to track the object and capture the second image by adjusting only the attitude angle and zoom factor of the pod, without changing the flight attitude of the unmanned aerial vehicle while it cruises the road. Of course, a combination of the flight attitude of the unmanned aerial vehicle, the attitude angle of the pod, and the camera zoom factor may also be adjusted to track the object and take the second image.
In a practical application scenario of the application, the unmanned aerial vehicle patrols the road according to its cruise strategy (its original path), and when a vehicle in violation appears in the acquired first image, the unmanned aerial vehicle adjusts the attitude angle of the pod and the zoom value of the camera to shoot a second image of that vehicle. If the vehicle slows down, the pod adjusts its attitude angle and zoom factor so that the vehicle always remains within the field of view; if the vehicle changes its trajectory to turn around or turn, the pod rotates its attitude angle to continue shooting. It is to be understood that the specific embodiment of the drone tracking an offending object described here does not limit the scope of protection of the present application.
It is understood that, since the second image is the information to be recognized in road violation identification, its clarity should be much higher than that of the first image. It can also be appreciated that, while controlling the pod to track the object and take the second image, the drone still cruises the road following its cruise strategy (its original path), so that the drone has similar cruise power consumption each time it performs a cruise mission, which facilitates management of the drone.
Further, in the specific embodiment provided in the present application, controlling the pod to track the object and take a second image specifically includes: controlling the pod to lock the object so that the object is at the center of the video; controlling the zoom factor of the pod to zoom from a first zoom factor to a second zoom factor; and controlling the unmanned aerial vehicle to shoot a second image at the second zoom factor; wherein the pod of the unmanned aerial vehicle is provided with a camera oriented along or away from the heading of the unmanned aerial vehicle.
When the pod is controlled to lock the object so that the object is at the center of the video, the pixel coordinates of the object can be acquired in real time and sent to the pod, and the pod adjusts its attitude angle according to those coordinates to keep the object at the video center.
Specifically, when an object is determined to be in violation, the TX2 module acquires the pixel coordinate of the object for the first time; then, while the unmanned aerial vehicle continues along its original path, the camera in the pod captures images of the offending object in real time, the TX2 module calculates the pixel coordinate of the object in each captured video frame in real time and sends it to the pod, and the pod adjusts its attitude angle so that the offending object stays at the center of the video.
When the TX2 module matches an object as in violation, it sends to the pod the position coordinates (x, y, w, h) of the road object obtained through Bounding Box regression at the time of the determination; of these, the coordinates (x, y) are mainly used. The pod can steer the center of the lens arranged within it by adjusting its attitude; after receiving the position coordinates sent by the TX2 module, the pod adjusts its attitude accordingly so as to move the lens center toward that position.
It should be noted that the pod can adjust the lens center (picture center) of the video it captures by controlling its attitude angle. After receiving, in real time, the pixel coordinate of the object sent by the TX2 module, the pod calculates the coordinate difference between the lens center in the current attitude and that pixel coordinate, and then adjusts its attitude angle according to the difference so as to bring the lens center to the pixel coordinate. A sketch of this correction follows.
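The sketch converts the pixel difference between the picture center and the object into attitude-angle increments; the frame size, the pixels-per-degree mapping, the sign conventions, and the pod call are all assumptions.

    FRAME_CENTER = (960, 540)     # assumed 1920x1080 video frame
    PIXELS_PER_DEGREE = 50.0      # hypothetical pixel-to-gimbal-degree mapping

    def attitude_correction(pixel_xy, center=FRAME_CENTER):
        # Coordinate difference between the object and the current picture center.
        dx = pixel_xy[0] - center[0]   # positive: object is right of center
        dy = pixel_xy[1] - center[1]   # positive: object is below center
        d_heading = dx / PIXELS_PER_DEGREE
        d_pitch = -dy / PIXELS_PER_DEGREE  # image y grows downward (assumed)
        return d_heading, d_pitch

    # Per frame: d_heading, d_pitch = attitude_correction((1010, 620))
    # pod.rotate(heading=d_heading, pitch=d_pitch)   # hypothetical pod call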
In the embodiment of the invention, when the TX2 module judges that an offending object exists in the first image, it determines the pixel coordinate of the object in each frame in real time and sends each determined coordinate to the pod; the pod keeps the object at the center of the video by adjusting its attitude angle (i.e., steering the picture center onto the object's pixel coordinate). The attitude angle to be adjusted includes not only the pitch angle but also the heading angle: because the first zoom factor used when acquiring the first image is small, a very comprehensive road image can be shot without requiring the object to be at the picture center, so only the pitch angle needs to be held at the target pitch angle; when shooting the second image, however, the object must be at the picture center, which cannot be achieved by adjusting the pitch angle alone. For example, if the picture center is at (100, 100) and the TX2 module determines the pixel coordinate of object A in the first frame to be (101, 102), that coordinate is sent to the pod, which adjusts its attitude angle to move the picture center toward object A so that the object sits at the center. The pod then continuously sends its captured images to the TX2 module, which determines the pixel coordinate of object A in each frame and sends it back, so that the pod keeps adjusting its attitude angle to hold object A at the picture center.
It should be noted that, in fact, by the time the pod receives the pixel coordinate returned by the TX2 module for a first image it sent, the image currently captured by the pod lags behind that first image. The pod adjusts the picture center of the current image toward the pixel coordinate of the object determined in the earlier image and sends the current image to the TX2 module, so that the TX2 module controls the pod to lock the object in real time and the object remains at the center of the video captured by the pod.
In addition, when the first image and the second image are acquired, the roll angle of the pod remains constant, held at its value at the initial time.
It will be appreciated that while the drone controls the pod to lock the object at the center of the video, the drone continues along its original path: the object is kept at the video center by controlling the pod alone. Alternatively, the drone may lock the object by adjusting its own flight attitude together with the attitude of the pod. Note that adjusting the flight attitude of the drone means adjusting its cruise strategy; in a preferred embodiment provided by the present application, locking the object at the video center is therefore preferably implemented by the drone controlling the pod to adjust the pod's attitude angle.
In the embodiment of the invention, when the pod is controlled to lock the object so that the object is in the center of the video, a second zoom multiple can be obtained and sent to the pod, so that the pod can be zoomed from the first zoom multiple to the second zoom multiple at a constant speed.
In the embodiment of the present invention, the second zoom factor ranges from 20 to 60 times. Within this range, a second image with a clear view and a moderate proportion occupied by the offending object can be obtained.
It should be noted that the second zoom factor obtained by the TX2 module may be set by jointly considering the flight height of the drone and its distance from the offending object; for example, when the flight height is greater and the distance is farther, the second zoom factor should be set larger. When the second zoom factor is a preset fixed value, the TX2 module sends that value to the pod upon detecting an object violation, so that the pod zooms from the first zoom factor to the second at a constant speed. In addition, a correspondence between object size (i.e., (w, h) in the position coordinates (x, y, w, h) obtained through Bounding Box regression when the object is determined to be in violation), image size, and zoom factor can be preset, so that the zoom factor is determined from the measured object size. One such mapping is sketched below.
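The following sketch derives the second zoom factor from flight height and object distance and clamps it into the 20-60x range given above; the scaling constant k and the function name are purely assumptions.

    def second_zoom_factor(flight_height_m, distance_m, k=0.4):
        # Greater height or distance yields a larger zoom, clamped to 20-60x.
        zoom = k * (flight_height_m + distance_m)
        return max(20.0, min(60.0, zoom))

    # second_zoom_factor(50, 80) -> 52.0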
In the embodiment of the invention, the device may be provided with such a preset correspondence (given in the original filing as embedded figure BDA0003461218130000171).
It should be noted that the TX2 module sends the second zoom factor to the pod only once; unlike the pixel coordinates of the object, it does not need to be sent in real time.
It should be noted that the pod is controlled to zoom from the first zoom factor to the second at a constant speed so as to avoid losing the track through a sudden change of zoom, and to improve the stability and accuracy of the pod as it adjusts its attitude angle to keep the object at the center of the video. The constant-speed zooming of the camera is performed while the unmanned aerial vehicle cruises along its path, so both the cruise path, speed, and direction of the drone and the zooming process must be accommodated: when the drone's cruise speed is high, the constant-speed zoom to the preset second zoom factor is set faster; conversely, it is set slower. Setting the constant-speed zoom slower or faster likewise accounts for the risk of the offending object leaving the camera's field of view while it is being shot.
It should be noted that in the process of controlling the pod to zoom from the first zoom factor to the second zoom factor at a constant speed, the zooming action is executed by the pod, and the unmanned aerial vehicle only needs to continue cruising according to the cruising path.
It should also be noted that, in the process of zooming the pod from the first zoom factor to the second at a constant speed, the rate of the constant-speed zoom is set according to the practical application scenario; for example, when the flying height of the unmanned aerial vehicle is 50 meters above the ground, the zoom value of the constant-speed zoom may be set to 1-3 times the first zoom factor. It can be understood that, in the process of zooming the camera at a constant speed from the first zoom factor to the preset second zoom factor, the specific value chosen obviously does not limit the scope of protection of the present application.
In the embodiment of the invention, the pod preferably first adjusts its attitude angle according to the pixel coordinates so that the object is at the center of the video, and then adjusts the zoom factor, to prevent losing the track during zooming. That is, during zooming the target object is kept at the center of the video, and when the second zoom factor is reached, the second image is captured. The first zoom factor is less than the second zoom factor. A sketch of this ramped zoom follows.
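The sketch assumes a hypothetical pod API (set_zoom, recenter_on_target, capture); the step size and time interval are illustrative.

    import time

    def ramp_zoom_and_capture(pod, first_zoom, second_zoom, step=1.0, dt=0.1):
        zoom = first_zoom
        while zoom < second_zoom:
            zoom = min(zoom + step, second_zoom)  # uniform increments, no sudden jump
            pod.set_zoom(zoom)                    # hypothetical pod API
            pod.recenter_on_target()              # keep the object at the video center
            time.sleep(dt)
        return pod.capture()                      # the second image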
It should be noted that after the second image is taken, the unmanned aerial vehicle controls the attitude angle of the pod back to the target pitch angle and adjusts the zoom factor of the pod from the second zoom factor back to the first, so that the drone can continue cruising along its original flight path.
S250: and acquiring the identification of the object according to the second image.
It is understood that the second image taken by the drone targets the offending object, so the relevant information of that object can be determined from it: for example, the object's category attribute, its feature information, the road it is on, its location, and so on. In the embodiment of the invention, what is mainly acquired is the identifier of the object, i.e., information that specifically distinguishes it, such as the license plate number of a vehicle or the face image of a pedestrian.
It is noted that, although the identifier of the offending object could also be determined from the first image taken by the drone, the first image is shot at a lower zoom factor than the second, and the shooting angle of the offending object in the second image is better; the identifier is therefore more accurate and easier to obtain from the second image, which is why the second image is used.
Further, in a preferred embodiment provided by the present application, acquiring the identifier of the object according to the second image specifically includes: detecting the object image from the second image according to a multi-class detection model; detecting a license plate image from the object image according to a license plate detection model; and acquiring the identifier of the object from the license plate image using an Optical Character Recognition (OCR) algorithm.
Specifically, when the drone tracks the offending object, the object is continuously moving, so the track may be lost; the second image may therefore fail to contain the offending object, the object may occupy too small a proportion of the image, or the image may contain several objects, so that recognizing the license plate directly from the second image would fail or be inaccurate. For this reason, to obtain the identifier of the offending object from the second image, an object image is first detected from the second image by the multi-class detection model and used for recognition, which increases the likelihood of recognizing the license plate and improves accuracy. The multi-class detection model, used to detect the object image from the second image, can be obtained through negative feedback optimization of a neural network, which requires a public image set for training containing at least a number of image elements, annotated with classification results, that contain offending objects. The multi-class detection model so trained can identify whether the second image contains an offending object, whose category attribute may be a motor vehicle, a non-motor vehicle, a pedestrian, and so on.
After the object image is obtained, due to the drone's shooting angle, some object images may not include a license plate image, or the license plate may occupy only a small proportion of the object image, so the license plate image needs to be detected from the object image. In the embodiment of the invention, the license plate image can be detected from the object image by a license plate detection model, which further increases the likelihood of recognizing the license plate and improves accuracy. Detecting the offending object's identifier via the license plate detection model mainly means locating the license plate region within the object image, yielding the image information of the license plate region in the second image. The license plate detection model can be obtained through negative feedback optimization of a neural network, which requires a public image set for training containing at least a number of image elements annotated with license plate positions. The model so trained can detect the license plate position in a second image containing an offending object. The annotated image elements may be motor vehicle image elements with annotated license plate positions, non-motor vehicle image elements with annotated license plate positions, and so on.
It can be understood that, to the skilled person, license plate detection models are mostly used to detect license plates in parking lots, whereas here the license plate detection model detects the plate of a target object traveling on a road. The license plate detection model is therefore obtained by training an initial license plate detection model on a number of license plate images captured by an unmanned aerial vehicle in flight; the license plate position is then identified by the trained model.
It should also be noted that the license plate detection model is embodied as an algorithm model comprising two parts: one part is the network structure, specifically the matrix computations in the convolutional and pooling layers; the other part is a weight file, which can be understood as an array consisting of many floating-point numbers. In a preferred embodiment provided herein, the computation of the algorithm model is accelerated by a Graphics Processing Unit (GPU). Because a GPU can process many computations in parallel, it has an outstanding computational advantage over a conventional processor, and the time consumed by the algorithm model used for license plate detection can be reduced from the original 300 ms to 10 ms.
After the license plate region is detected in the second image, the license plate information is recognized. Specifically, OCR is performed on the image of the license plate region to obtain the license plate number. The OCR algorithm can be obtained through negative feedback optimization of a neural network, which requires a public image set for training containing at least a number of license plate image elements annotated with license plate information. The OCR algorithm so trained can recognize the license plate information in the second image. The annotated license plate image elements may come from motor vehicles, non-motor vehicles, and so on.
It can also be understood that, in the embodiment of the application, the license plate detection model mainly adopts an object detection technique based on a convolutional neural network, while the OCR algorithm uses an image classification technique. After the license plate detection model detects the plate, it produces the 4 corner points of the plate (upper-left, lower-left, upper-right, and lower-right). The OCR algorithm outputs a one-dimensional vector corresponding to the digits, letters, or Chinese characters in the sequence of license plate characters. A hedged sketch of the whole recognition chain follows.
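In the sketch, the detector, plate_detector, and ocr objects stand in for the trained models described above; all method names are assumptions, not an actual API.

    def identify_object(second_image, detector, plate_detector, ocr):
        # Stage 1: multi-class detection finds candidate objects (vehicles etc.).
        for obj_box in detector.detect(second_image):
            obj_img = second_image.crop(obj_box)
            # Stage 2: plate detector returns the 4 plate corner points, or None.
            corners = plate_detector.detect(obj_img)
            if corners is None:
                continue
            plate_img = obj_img.crop_quad(corners)  # hypothetical perspective crop
            # Stage 3: OCR maps the plate crop to a character sequence.
            return ocr.recognize(plate_img)
        return None  # no plate recognized in this second image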
Further, in a specific embodiment provided in the present application, acquiring the identifier of the object according to the second image specifically includes: periodically shooting a plurality of second images under a second zoom multiple; detecting an object image from each second image according to the multi-class detection model; detecting a license plate image from each object image according to a license plate detection model; recognizing the license plate identifier of each license plate image by using an Optical Character Recognition (OCR) algorithm; calculating the repetition rate of each license plate identifier according to the license plate identifiers of the plurality of second images; comparing the repetition rate with a set threshold; and determining the license plate identifier with the repetition rate exceeding a set threshold and the highest repetition rate as the identifier of the object.
Specifically, at the second zoom factor, a plurality of second images are acquired, that is, second images are taken periodically. Since the second images are used to detect the identifying information of the offending vehicle, as many of them as practical should be taken to confirm that information.
It should be noted that the plurality of second images may be several consecutive frames or may be taken at intervals. For continuous multi-frame shooting, the setting depends on the actual shooting performance of the unmanned aerial vehicle. For example, if the road recognition algorithm on the drone needs 100 ms to process one frame and the original video stream is 30 Hz, the vehicle may remain in the image for 90 source frames, i.e., the vehicle is visible for about 3 seconds. In an actual scene, considering the sufficiency of violation evidence, the number of frames is usually set to 15, that is, the vehicle is presented over 6 seconds of images.
It can be understood that periodic shooting of the plurality of second images may be started when a single image has been taken but the specific information of the vehicle could not be obtained through the multi-class detection model and the license plate detection model. The plurality of second images may be shot periodically, e.g., one image per second; the multi-class detection model, the license plate detection model, and the Optical Character Recognition (OCR) algorithm are applied to each image, the license plate identifier with the high repetition rate among the generated results is selected, and the specific information of the vehicle is thereby produced.
It is understood that determining the license plate identifier whose repetition rate exceeds the set threshold and is the highest as the identifier of the object means selecting, after a number of pictures are taken, the identifier whose repetition rate exceeds a predetermined threshold. For example, if 6 images are taken periodically and a license plate string is repeated 3 or more times, that string is selected. It will be understood that the period of the image taking, the value of the repetition rate, the predetermined threshold, and the method of selecting the identifier obviously do not limit the scope of protection of the present application. A sketch of this vote follows.
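The vote counts each recognized plate string across the periodically captured second images and accepts the most frequent one only if its repetition rate clears the threshold; the 0.5 threshold and the example plate strings below are illustrative only.

    from collections import Counter

    def vote_plate_id(plate_ids, threshold=0.5):
        # plate_ids: plate strings recognized from the several second images
        # (None entries stand for images where recognition failed).
        counts = Counter(pid for pid in plate_ids if pid)
        if not counts:
            return None
        best, n = counts.most_common(1)[0]
        # Accept the most frequent plate only if its rate clears the threshold.
        return best if n / len(plate_ids) >= threshold else None

    # vote_plate_id(["京A12345", "京A12345", "京A12346", "京A12345", None, "京A12345"])
    # -> "京A12345" (repetition rate 4/6, above 0.5)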
Further, in the specific implementation provided in the present application, the first image may also be saved as evidence.
It is understood that, from the first image taken by the drone, the road-related information in the image and the information of objects located in the road can be determined. The road-related information is information about road distribution, for example, the distribution of motor lanes, non-motor lanes, and sidewalks, the distribution of lanes within the motor lanes, or the specific position coordinates of each road in the image. The object information concerns objects located in a road in the image, for example pedestrians on sidewalks, non-motor vehicles on non-motor lanes, motor vehicles on motor lanes, or the category attributes of the objects in each road. Offending objects are identified by matching the road-related information and the object information against the violation constraint information; since misjudgment is possible, the first image should be saved as evidence, so that possibly misjudged road behaviors can be re-examined against the saved evidence, which increases the accuracy of road recognition.
It should be noted that the second image may also be saved as evidence, further preserving the offending object and its license plate information, so that the evidence is more complete and clear.
Referring to fig. 5, the present application discloses a device 20 for identifying road violations, which can be used to execute the method for identifying road violations in the above embodiments of the present application. The apparatus 20 comprises:
the acquisition module 21 is used for acquiring a first image captured when the unmanned aerial vehicle is cruising on a road;
a processing module 22 for processing the first image to form road data and information of an object located on the road;
the matching module 23 is configured to match violation constraint information according to the information of the object and the road data, and obtain a matching result;
the control module 24 is used for controlling the pod to track the object and shoot a second image when the object violation is determined according to the matching result;
and the identification module 25 is configured to obtain an identifier of the object according to the second image.
According to the embodiment of the invention, the obtaining module 21 is configured to control a ground pitch angle of a pod of the unmanned aerial vehicle to be a target pitch angle and control a zoom multiple of the pod of the unmanned aerial vehicle to be a first zoom multiple according to the flight height of the unmanned aerial vehicle; keeping the target pitch angle and the first zoom multiple, and acquiring a first image in a video shot by the unmanned aerial vehicle when cruising on a road; the pod of the unmanned aerial vehicle is provided with a camera along or deviating from the heading of the unmanned aerial vehicle.
According to the embodiment of the invention, the obtaining module 21 is configured to obtain an image captured when the unmanned aerial vehicle cruises the road according to a preset cruise strategy; the preset cruise strategy comprises at least one of preset cruise routes, cruise speeds and cruise directions.
According to an embodiment of the invention, the target pitch angle is in the range-20 ° to-40 °.
According to an embodiment of the present invention, the processing module 22 is configured to determine lane lines and vertices of each lane line in the image; generating road data according to the lane lines, the vertexes of the lane lines and the road distribution use data corresponding to the image; the road data at least includes lane distribution information in the road and information of each lane line constituting the road.
According to an embodiment of the invention, the processing module 22 is configured to process the image to form a category attribute and a position coordinate of the object located on the road.
According to the embodiment of the invention, the violation constraint information comprises at least one of: constraint information on a large vehicle occupying an overtaking lane, constraint information on a vehicle occupying an emergency lane, constraint information on a vehicle occupying a bus lane, solid-line-pressing constraint information, and vehicle ultra-high-speed and ultra-low-speed constraint information.
According to the embodiment of the present invention, the matching module 23 is configured to determine the lane where the object is located according to the information of the object and the road data; and matching violation constraint information according to the lane where the object is located, and obtaining a matching result.
According to the embodiment of the present invention, when the violation constraint information is vehicle ultra-high speed and ultra-low speed constraint information, the matching module 23 is configured to calculate the moving speed of the object according to information of the object formed by a plurality of continuous frames of images and road data; determining a lane where the object is located according to the information of the object and the road data; and matching the moving speed of the object with the speed limit value interval of the lane to obtain a matching result.
According to the embodiment of the present invention, the matching module 23 is configured to determine the lane where the object is located, the lane line closest to the object, and the distance between the object and the lane line closest to the object according to the information of the object and the road data; and matching violation constraint information according to the lane where the object is located, the lane line closest to the object and the distance between the object and the lane line closest to the object, and obtaining a matching result.
According to the embodiment of the present invention, when the violation constraint information is the solid-line-pressing constraint information, the matching module 23 is configured to obtain a matching result of a successful match if the lane line closest to the object is a solid line and the distance between the object and that lane line is within a first preset range.
According to the embodiment of the present invention, when the violation constraint information is constraint information that the vehicle occupies an emergency lane, the matching module 23 is configured to obtain a matching result as a successful matching if it is obtained that the object is located outside the lane, the distance between the object and an inner lane line of the emergency lane is closest, and the distance between the object and the inner lane line of the emergency lane is within a second preset range.
According to an embodiment of the invention, the control module 24 is configured to control the pod to lock the object so that the object is video-centric; controlling the zoom factor of the pod to zoom from a first zoom factor to a second zoom factor; controlling the unmanned aerial vehicle to shoot a second image under the second zoom multiple; the pod of the unmanned aerial vehicle is provided with a camera along or deviating from the heading of the unmanned aerial vehicle.
According to the embodiment of the invention, the pixel coordinates of the object are acquired in real time and are sent to the pod, so that the pod adjusts the attitude angle of the pod according to the pixel coordinates, and the object is positioned in the center of the video.
According to the embodiment of the invention, a second zoom multiple is obtained and sent to the pod, so that the pod zooms from the first zoom multiple to the second zoom multiple at a constant speed.
According to an embodiment of the invention, the second zoom factor ranges from 20 to 60 times.
According to an embodiment of the present invention, the recognition module 25 is configured to detect the object image from the second image according to a multi-class detection model; detecting a license plate image from the object image according to the license plate detection model; and acquiring the identification of the object from the license plate image by using an Optical Character Recognition (OCR) algorithm.
According to the embodiment of the present invention, the recognition module 25 periodically captures a plurality of second images at a second zoom magnification; detecting an object image from each second image according to the multi-class detection model; detecting a license plate image from each object image according to a license plate detection model; recognizing the license plate identifier of each license plate image by using an Optical Character Recognition (OCR) algorithm; calculating the repetition rate of each license plate identifier according to the license plate identifiers of the plurality of second images; comparing the repetition rate with a set threshold; and determining the license plate identifier with the repetition rate exceeding a set threshold and the highest repetition rate as the identifier of the object.
In the embodiment of the invention, a first image captured while the unmanned aerial vehicle cruises a road is acquired; the first image is processed to form road data and information of an object located on the road; the violation constraint information is matched according to the information of the object and the road data to obtain a matching result; when the object is determined to be in violation according to the matching result, the pod is controlled to track the object and take a second image; and the identifier of the object is acquired according to the second image. In this way, the traffic along the continuous road within the drone's cruise route can be flexibly monitored in real time, road violations in the monitored area can be identified, the identification area for road violation behaviors is expanded, and evidence of objects exhibiting such behaviors is obtained.
As shown in fig. 6, an electronic device 30 according to an embodiment of the present application is described. The electronic device 30 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the electronic device 30 is in the form of a general purpose computing device. The components of the electronic device 30 may include, but are not limited to: the at least one processing unit 310, the at least one memory unit 320, and a bus 330 that couples various system components including the memory unit 320 and the processing unit 310.
Wherein the storage unit stores program code executable by the processing unit 310 to cause the processing unit 310 to perform steps according to various exemplary embodiments of the present invention described in the description part of the above exemplary methods of the present specification. For example, the processing unit 310 may perform the various steps as shown in fig. 1.
The storage unit 320 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM)3201 and/or a cache memory unit 3202, and may further include a read only memory unit (ROM) 3203.
The storage unit 320 may also include a program/utility 3204 having a set (at least one) of program modules 3205, such program modules 3205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 330 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 30 may also communicate with one or more external devices 400 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 30, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 30 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 350. An input/output (I/O) interface 350 is connected to the display unit 340. Also, the electronic device 30 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 360. As shown, the network adapter 360 communicates with the other modules of the electronic device 30 via the bus 330. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 30, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present application, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods herein are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the statement that there is an element defined as "comprising" … … does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A method of road violation identification, comprising the steps of:
acquiring a first image captured when the unmanned aerial vehicle cruises a road;
processing the first image to form road data and information of an object located on the road;
matching violation constraint information according to the information of the object and the road data, and obtaining a matching result;
when the object is determined to be in violation according to the matching result, controlling a pod to track the object and take a second image;
and acquiring an identifier of the object according to the second image.
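Purely as a reading aid, the five steps of claim 1 can be sketched as follows in Python; every name in the sketch (the callables passed in, MatchResult) is a hypothetical stand-in, not an interface defined by this application:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional, Tuple

@dataclass
class MatchResult:
    is_violation: bool
    rule: Optional[str] = None  # which constraint matched, if any

def identify_road_violations(
    acquire_first_image: Callable[[], object],
    process_image: Callable[[object], Tuple[object, list]],
    match_constraints: Callable[[object, object], MatchResult],
    track_and_capture: Callable[[object], object],
    read_identifier: Callable[[object], str],
) -> Iterable[str]:
    first_image = acquire_first_image()               # step 1: image from cruising UAV
    road_data, objects = process_image(first_image)   # step 2: road data + object info
    for obj in objects:
        result = match_constraints(obj, road_data)    # step 3: match violation constraints
        if result.is_violation:
            second_image = track_and_capture(obj)     # step 4: pod tracks, shoots close-up
            yield read_identifier(second_image)       # step 5: read the object's identifier
```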
2. The method according to claim 1, wherein acquiring the first image captured while the unmanned aerial vehicle cruises the road specifically comprises:
controlling, according to the flying height of the unmanned aerial vehicle, a pitch angle of a pod of the unmanned aerial vehicle relative to the ground to be a target pitch angle, and controlling a zoom factor of the pod to be a first zoom factor;
keeping the target pitch angle and the first zoom factor, and acquiring the first image from a video captured while the unmanned aerial vehicle cruises the road;
wherein the pod of the unmanned aerial vehicle is provided with a camera oriented along or away from the heading of the unmanned aerial vehicle.
3. The method of claim 2, wherein the target pitch angle ranges from-20 ° to-40 °.
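As an illustration of claims 2 and 3, one plausible height-to-settings mapping is sketched below; the claims fix only the -20° to -40° pitch range, while the linear interpolation and the 50 m/200 m and 2x/6x endpoints are assumptions:

```python
def cruise_pod_settings(flying_height_m: float) -> tuple:
    """Return (target_pitch_deg, first_zoom_factor) for a flying height.

    Assumed mapping: -40 deg / 2x at 50 m, -20 deg / 6x at 200 m,
    linearly interpolated and clamped to that envelope.
    """
    h = min(max(flying_height_m, 50.0), 200.0)
    t = (h - 50.0) / 150.0              # 0.0 at 50 m, 1.0 at 200 m
    target_pitch = -40.0 + 20.0 * t     # stays inside the claimed -40..-20 deg range
    first_zoom = 2.0 + 4.0 * t
    return target_pitch, first_zoom
```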
4. The method of claim 1, wherein controlling the pod to track the object and take the second image specifically comprises:
controlling the pod to lock onto the object so that the object is centered in the video;
controlling the zoom factor of the pod to zoom from a first zoom factor to a second zoom factor;
controlling the unmanned aerial vehicle to capture the second image at the second zoom factor;
wherein the pod of the unmanned aerial vehicle is provided with a camera oriented along or away from the heading of the unmanned aerial vehicle.
5. The method of claim 4, wherein controlling the pod to lock onto the object so that the object is centered in the video comprises:
acquiring pixel coordinates of the object in real time, and sending the pixel coordinates to the pod, so that the pod adjusts its attitude angle according to the pixel coordinates and the object is positioned at the center of the video.
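The claim leaves the adjustment rule to the pod. A simple proportional scheme, sketched here with illustrative parameter names, maps the pixel error to yaw and pitch increments through the camera's field of view:

```python
def pod_attitude_correction(px, py, width, height, hfov_deg, vfov_deg):
    """Turn the object's pixel coordinates into (d_yaw_deg, d_pitch_deg)
    increments that move it toward the video center."""
    ex = px / width - 0.5      # horizontal offset from center, in [-0.5, 0.5]
    ey = py / height - 0.5     # vertical offset; pixel y grows downward
    d_yaw = ex * hfov_deg      # small-angle mapping: full width spans the horizontal FOV
    d_pitch = -ey * vfov_deg   # object below center (ey > 0) needs a downward pitch step
    return d_yaw, d_pitch
```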
6. The method of claim 4, wherein controlling the zoom factor of the pod to zoom from a first zoom factor to a second zoom factor comprises:
acquiring the second zoom factor, and sending the second zoom factor to the pod so as to enable the pod to zoom from the first zoom factor to the second zoom factor at a constant speed.
7. The method of claim 4, wherein the second zoom factor ranges from 20 to 60 times.
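For claims 6 and 7, constant-speed zooming can be approximated by stepping the factor at a fixed rate; `pod.set_zoom`, the 5x/s rate, and the 50 ms step below are all assumptions:

```python
import time

def zoom_at_constant_speed(pod, first_factor, second_factor,
                           rate_per_s=5.0, step_s=0.05):
    """Step the pod from first_factor to second_factor at a constant rate."""
    direction = 1.0 if second_factor >= first_factor else -1.0
    zoom = first_factor
    while (second_factor - zoom) * direction > 0:
        zoom += direction * rate_per_s * step_s
        # Clamp so the last step does not overshoot the target factor.
        zoom = min(zoom, second_factor) if direction > 0 else max(zoom, second_factor)
        pod.set_zoom(zoom)
        time.sleep(step_s)
```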
8. The method according to claim 1, wherein acquiring the identifier of the object according to the second image specifically comprises:
detecting an object image from the second image according to a multi-class detection model;
detecting a license plate image from the object image according to a license plate detection model;
and acquiring the identifier of the object from the license plate image by using an Optical Character Recognition (OCR) algorithm.
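The three stages of claim 8 compose directly. The sketch below takes the two detectors and the OCR engine as caller-supplied callables rather than naming any concrete model:

```python
def read_plate(second_image, vehicle_detector, plate_detector, ocr):
    """Multi-class detection -> plate detection -> OCR, per claim 8.

    vehicle_detector / plate_detector return lists of image crops;
    ocr returns the recognized text (or an empty string).
    """
    for vehicle_crop in vehicle_detector(second_image):   # object image(s)
        for plate_crop in plate_detector(vehicle_crop):   # license plate image(s)
            text = ocr(plate_crop)
            if text:
                return text                               # identifier of the object
    return None
```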
9. The method according to claim 1, wherein acquiring the identifier of the object according to the second image specifically comprises:
periodically capturing a plurality of second images at a second zoom factor;
detecting an object image from each second image according to a multi-class detection model;
detecting a license plate image from each object image according to a license plate detection model;
recognizing a license plate identifier in each license plate image by using an Optical Character Recognition (OCR) algorithm;
calculating a repetition rate of each license plate identifier across the plurality of second images;
comparing each repetition rate with a set threshold; and
determining, as the identifier of the object, the license plate identifier whose repetition rate exceeds the set threshold and is the highest.
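Claim 9 amounts to a majority vote over repeated OCR readings. A minimal sketch, with the 0.5 threshold assumed rather than taken from the claims:

```python
from collections import Counter

def vote_identifier(ocr_readings, threshold=0.5):
    """Return the plate string whose repetition rate exceeds `threshold`
    and is highest across the readings, else None."""
    readings = [r for r in ocr_readings if r]   # drop empty/failed reads
    if not readings:
        return None
    plate, count = Counter(readings).most_common(1)[0]
    repetition_rate = count / len(readings)
    return plate if repetition_rate > threshold else None

# e.g. vote_identifier(["京A12345", "京A12345", "京A12346"]) == "京A12345"
```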
10. An apparatus for road violation identification, comprising:
an acquisition module, configured to acquire a first image captured while an unmanned aerial vehicle cruises a road;
a processing module, configured to process the first image to form road data and information of an object located on the road;
a matching module, configured to match the information of the object and the road data against violation constraint information to obtain a matching result;
a control module, configured to control a pod to track the object and capture a second image when the object is determined to be in violation according to the matching result; and
an identification module, configured to acquire an identifier of the object according to the second image.
11. An electronic device, comprising:
a memory storing computer readable instructions;
a processor reading the computer readable instructions stored in the memory to perform the method of any one of claims 1 to 9.
12. A storage medium having computer readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1 to 9.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210035827.0A CN114387533A (en) 2022-01-07 2022-01-07 Method and device for identifying road violation, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114387533A true CN114387533A (en) 2022-04-22

Family

ID=81201037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210035827.0A Pending CN114387533A (en) 2022-01-07 2022-01-07 Method and device for identifying road violation, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114387533A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106652468A (en) * 2016-12-09 2017-05-10 武汉极目智能技术有限公司 Device and method for detection of violation of front vehicle and early warning of violation of vehicle on road
CN111523464A (en) * 2020-04-23 2020-08-11 上海眼控科技股份有限公司 Method and device for detecting illegal lane change of vehicle
CN111666853A (en) * 2020-05-28 2020-09-15 平安科技(深圳)有限公司 Real-time vehicle violation detection method, device, equipment and storage medium
CN113111876A (en) * 2021-04-14 2021-07-13 深圳市旗扬特种装备技术工程有限公司 Method and system for obtaining evidence of traffic violation
CN113420748A (en) * 2021-08-25 2021-09-21 深圳市城市交通规划设计研究中心股份有限公司 Method and device for detecting illegal driving of vehicle, electronic equipment and storage medium
CN113887418A (en) * 2021-09-30 2022-01-04 北京百度网讯科技有限公司 Method and device for detecting illegal driving of vehicle, electronic equipment and storage medium
CN113763719A (en) * 2021-10-13 2021-12-07 深圳联和智慧科技有限公司 Unmanned aerial vehicle-based illegal emergency lane occupation detection method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690767A (en) * 2022-10-26 2023-02-03 北京远度互联科技有限公司 License plate recognition method and device, unmanned aerial vehicle and storage medium
CN115690767B (en) * 2022-10-26 2023-08-22 北京远度互联科技有限公司 License plate recognition method, license plate recognition device, unmanned aerial vehicle and storage medium

Similar Documents

Publication Publication Date Title
KR102541561B1 (en) Method of providing information for driving vehicle and apparatus thereof
US11113961B2 (en) Driver behavior monitoring
US11380105B2 (en) Identification and classification of traffic conflicts
CN115014383A (en) Navigation system for a vehicle and method for navigating a vehicle
CN112201051B (en) Unmanned aerial vehicle end road surface vehicle illegal parking detection and evidence obtaining system and method
WO2017123665A1 (en) Driver behavior monitoring
EP4086875A1 (en) Self-driving method and related device
US20220019845A1 (en) Positioning Method and Apparatus
US11914041B2 (en) Detection device and detection system
CN117053814A (en) Navigating a vehicle using an electronic horizon
Llorca et al. Traffic data collection for floating car data enhancement in V2I networks
CN110033622A (en) Violation snap-shooting based on unmanned plane aerial photography technology occupies Emergency Vehicle Lane method
CN114898296A (en) Bus lane occupation detection method based on millimeter wave radar and vision fusion
US20240071100A1 (en) Pipeline Architecture for Road Sign Detection and Evaluation
CN114693540A (en) Image processing method and device and intelligent automobile
CN114387533A (en) Method and device for identifying road violation, electronic equipment and storage medium
CN114373152A (en) Method and device for identifying road violation, electronic equipment and storage medium
CN116524311A (en) Road side perception data processing method and system, storage medium and electronic equipment thereof
US20230126957A1 (en) Systems and methods for determining fault for a vehicle accident
US20230192141A1 (en) Machine learning to detect and address door protruding from vehicle
CN114333339B (en) Deep neural network functional module de-duplication method
US11884268B2 (en) Motion planning in curvilinear coordinates for autonomous vehicles
KR102516890B1 (en) Identification system and method of illegal parking and stopping vehicle numbers using drone images and artificial intelligence technology
CN116745188A (en) Method and system for generating a longitudinal plan for an autonomous vehicle based on the behavior of an uncertain road user
CN114373139A (en) Method, device, electronic equipment and storage medium for identifying road violation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination